03-Gaussians.ipynb
###Markdown [Table of Contents](./table_of_contents.ipynb) Probabilities, Gaussians, and Bayes' Theorem 概率、高斯函数和贝叶斯理论 ###Code %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. 简介上一章的末尾,我们讨论了许多离散贝叶斯滤波器的缺点。當時我們說到,对很多跟踪和滤波问题,我们需要**单峰**和**连续**的滤波器。即是说,我们希望我们的系统使用(连续的)浮点数运算,并且置信度最高的可能状态唯一(单峰)。举例来说,我们可以说飞行器处于(12.34, -95.54, 2389.5)的位置,三个数字分别代表纬度、经度和海拔。我们不希望滤波器给出这样的回答:“它可能位于(1.65, -78.01, 2100.45),同時可能位於(34.36, -98.23, 2543.79).” 这類表述同我们对世界的直觀認識相悖。而且如我们之前讨论过的,多峰的情形会带来无法承受的计算代价。当然,我们也无法在有多个可能位置的情况下实现导航。我们需要一个单峰,连续的概率表达方式来建模世界,这能提高计算的效率。高斯分布提供了我们所需的所有功能。 Mean, Variance, and Standard DeviationsMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. 
This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. 均值,方差和正态分布大多数读者都已经接触过统计学,但无论如何,还是请允许我介绍一下这方面的资料。即使你在这方面很有自信,我还是希望你了解一下。有两点原因。首先,我希望我们使用一致的術語。其次,我希望努力通过建立一种对统计学的直觉的理解来帮助你学习后面的章节。借记忆公式和计算方式来通过考试很容易,但课后学生往往还是对所学内容的真实含义迷惑不解。 随机变量每次随机投掷一个骰子,你都会得到一个介于1到6之间的**结果**。如果我投掷一百万次理想骰子,我预期六分之一的输出为1。因此我们说输出为1的**概率**或者**可能性**为1/6. 同样的,如果问你下一次结果是1的可能性,你也会回答是1/6。值和其对应概率的结合叫做[**随机变量**](https://en.wikipedia.org/wiki/Random_variable)。这里**随机**的意思不是说過程是不确定的,而只是表示我们缺乏关于结果的信息。比如骰子的投掷是确定过程,但我们缺乏能计算出该结果的充分信息。我们不知道会发生什么样的事件,所以只能以概率的方式去理解。我们现在定义一些术语,将值的范围称为[**采样空间**](https://en.wikipedia.org/wiki/Sample_space)。骰子的采样空间是{1, 2, 3, 4, 5, 6}。硬币的采样空间是{正、反}。所谓**空间**,是一种数学术语,表示一种集合的数学结构。骰子的采样空间是自然数集合在1到6范围内的子集。另一个随机变量的例子时大学中所有学生的身高。在这里,采样空间是由生物学决定的两个界限之间内的所有实数。投硬币和掷骰子产生的随机变量是**离散随机变量**。意思是说,他们的采样空间是由有限个,或者可数的变量构成的集合,例如自然数集合。而人类身高构成**连续随机变量**,因为身高可以是限定区间内任意实数。不要混淆**测量值**和随机变量的真实值。如果我们只以0.1米的精度测量人的身高,我们可能得到如下记录:0.1, 0.2, 0.3, ……2.7,总共27个离散的选择。尽管如此,人的身高可能是偏离这些测量值的任意实数,所以人的身高是连续随机变量。统计学中用大写字母表示随机变量,而且通常使用字母表的后半部分。所以,我们可以用$X$表示掷骰子的随机变量,$Y$表示大一唱诗班的身高。后面的章节使用线性代数解决这些问题,所以我们继续遵循使用小写字母表示向量,大写字母表示矩阵的惯例。不幸的是这两条惯例互相冲突,所以你需要通过上下文推断作者使用的是哪一惯例。我总是用加粗字体表示向量和矩阵,这有助于辨别两者。 Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write: 概率分布[**概率分布**](https://en.wikipedia.org/wiki/Probability_distribution)随机变量在采样空间中所有可能取值对应的概率。以一个均一的骰子为例,我们可以说概率分布是:|值|概率||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|我们用小写的p表示这个分布:$p(x)$. 像普通函数的记号那样,我们可以这样写: $$P(X{=}4) = p(4) = \frac{1}{6}$$ This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observent. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. 
The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as 该式表示骰子给出4的概率是$\frac{1}{6}$. $P(X{=}x_k)$表示“$X$等于$x_k$的概率”。請注意符号上微小的差异。大写$P$表示单一事件的概率,而小写的$p$表示概率分布函数。如果不注意观察,你可能会對兩者的區別感到迷惑。一些书本使用$Pr$來代替$P$來改善這一問題。再舉一個均勻硬幣的例子。硬幣的采样空间是{正,反}。因为硬币是均一的,所以正面(H)的概率是50%,反面(T)的概率同樣也是50%。於是可以作如下描述: $$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$ Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: 采样空间不是唯一的。采样空间对于骰子来说是{1, 2, 3, 4, 5, 6}. 另一种合法的采样空间也可以是{奇, 偶}. 还可以是{所有角落都有点, 不是所有角落都有点}. 只要一个采样空间覆盖了所有的可能,且其中每一事件都由一对应元素表示,那么这个采样空间就是合法的。{偶, 1, 3, 4, 5}对于骰子来说不是合法的状态空间,因为数字4朝上的事件对应“偶”和“4”两个元素。**离散随机变量**的所有取值的概率构成**离散概率分布**,**连续随机变量**的所有取值的概率构成**连续概率分布**。合法的概率分布必须满足对于任意取值$x_i$都有$x_i \ge0$,这是因为不存在负的概率。其次,所有事件的概率之和为1。这一点从投硬币实验中可以直观地看出来:如果有70%的概率正面朝上,那么反面朝上的概率就是30%. 上述条件可以形式化如下 对于离散性随机变量,有$$\sum\limits_u P(X{=}u)= 1$$对于连续性随机变量,有$$\int\limits_u P(X{=}u) \,du= 1$$上一章中我们用概率分布来估计了狗在走廊中的位置。例如: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. 每个位置都有一个0到1之间的概率数值,且它们的和为1,所以该数组构成一个概率分布。因为各个可能是离散的,所以我们可以更准确地将其称为离散概率分布。实践中我们省略“离散”或“连续”,除非有理由需要明确区分。 The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. 
If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. 均值、中位数和随机变量的众数给定一个数据的集合,我们常常需要知道其中有代表性的数值,或者求其均值。许多度量方法都能达到此目的,它们被称为[**中心趋势的度量**](https://en.wikipedia.org/wiki/Central_tendency)。举例来说我们可能想知道某个班级平均身高的**平均数**。我们通过求和,然后除以数量的方式计算平均数。如果以米为单位,学生的高度是$$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$我们用这种方式计算均值$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$传统上,人们使用$\mu$符号表示均值。该计算方式可以形式化为$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy提供了`numpy.mean()`函数来计算均值。 ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown 为了方便,Numpy数组提供了`mean()`方法。 ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true. **众数**是一个数的集合中出现次数最多的数。如果只有一个数的出现次数最多,那么这个集合是**单峰**集合,反之若有两个或更多的数字同时是出现次数最多的,那么该集合是**多峰**集合。例如集合 {1, 2, 2, 2, 3, 4, 4, 4}有众数2和4,是一个多峰集合。集合{5, 7, 7, 13}的众数为7,是一个单峰集合。本书不会像这样计算众数,而是在更一般的层面上使用单峰和多峰的概念。例如,在**贝叶斯**一章中我们关于狗的讨论中,认为狗位置的置信度是一个多峰分布,原因是我们为不同的位置分配了不同的概率。最后,**中位数**是将数的集合均分为两半的数,其中一半在该数之前,而另一半的值在该数之后。这里的先后关系是集合的排序方式决定的。如果集合有偶数元素的话,那么取两个最靠近中心的数的平均作为中位数。 ###Code np.median(x) ###Output _____no_output_____ ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. 
In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically. 随机变量的期望值随机变量的[**期望值**](https://en.wikipedia.org/wiki/Expected_value)是无穷多次采样的条件下,所有样本的平均。例如我们有随机变量$x=[1,3,5]$,且每个取值都是等可能的,我们对$x$的期望是如何呢?取平均吗?答案当然是1,3,5的平均,即3。这是有道理的。我们认为1,3,5三个数有均等的概率出现,故而$(1+3+5)/3=3$显然等于无穷次采样得到的所有样本的平均。换句话说,这里的期望值是样本空间的“平均”。现在假设每个值出现的概率不一样。例如1有80%的概率出现,而3有15%的概率,而5只有5%的概率。这种情况下,我们通过计算每个值$x$与其出现概率的成绩,然后求和的方式计算期望值。对于这个例子,我们计算$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$这里我们引入记号$\mathbb E[X]$来表示$x$的期望。一些书本使用$E(x)$。$x$的期望是1.5,这是符合直觉的,因为$x$是1的可能性大过3和5,而为3的可能性也大过5。我们可以按如下方式形式化该过程。令$x_i$表示$X$的第$i$个值,$p_i$是其对应的出现概率。$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$通过简单的代数运算我们可以看到,當所有取值的概率相等,期望值是各值的平均:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$若$x$是連續的,那麼求和符號要替換成積分符號,就像這樣$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$其中$f(x)$是$x$的概率分布函數。我們現在暫時不會用到這個公式,但下一章就會使用它。我們可以用一些Python代碼來做一個仿真。這裡我們取一百萬個樣本,並計算我們方才分析計算的該分布的期望值。 ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact values requires an infinite sample size. 你可以看到計算出來的結果接近解析解。然而得到的結果並不完全準確,這是因為想要得到完全準確的結果需要無窮多的樣本。 ExerciseWhat is the expected value of a die roll? SolutionEach side is equally likely, so each has a probability of 1/6. Hence 練習擲骰子遊戲的期望值是多少? 解每一面朝上的概率都是均等的,即概率都是1/6。故而 $$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ ExerciseGiven the uniform continuous distribution 練習給定一均勻連續分布 $$f(x) = \frac{1}{b - a}$$ compute the expected value for $a=0$ and $b=20$. Solution 對於$a=0$,$b=20$的情況,計算該分布的期望值。(譯者註:$a,b$是分佈的起始和終止點) 解 $$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. 
For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: 隨機變量的方差上面的計算方式可以告訴我們學生身高的平均值,但它無法給出所有我們需要的信息。例如,設有三個班級,記號分別是$X$、$Y$、$Z$,各班身高是: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. 通過NumPy,我們可以看到各個班級的平均身高都是一樣的。 ###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: 各班平均身高都是1.8米,但請注意第二班級中各人的身高相較於第一個班級的差异更大,而第三個班級中所有人的身高都沒有任何差別。均值告诉我们一些关于数据的信息,但这并不是全部。我们还需要了解学生身高的“差异”程度。你可想象到许多可能的理由。比如某学区需要预定五千张课桌,希望他们买到的尺寸能符合当地学生的身高。统计学将差异程度的度量形式化为[**标准差**](https://en.wikipedia.org/wiki/Standard_deviation)和[**方差**](https://en.wikipedia.org/wiki/Variance)。计算方差的公式是$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$先不管這的平方符号,可以看到方差是采样空间$X$與均值$\mu:$ ($X-\mu)$的偏離程度的期望值。我隨後將解釋公式中平方的意義。計算期望值的公式是$\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$,替換入上式即得$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ 我們來計算一下各班身高的方差。通過觀察結果,我們能更熟悉這個概念。$X$的均值是1.8($\mu_x = 1.8$)所以有$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy提供了`var()`函數來計算方差: ###Code print(f"{np.var(X):.2f} meters squared") ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. 
In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. 高度的單位是米,而方差的單位是米的平方。這可能有點難理解。所以我們更常用的度量是“標準差”。標準差定義為方差的平方根:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$一般用$\sigma$表示“標準差”,而用$\sigma^2$表示“方差”。本書大多時候使用$\sigma^2$來表示方差,而不是$\mathit{VAR}(X)$。它們的意義是相同的。對於第一個班級,可以用如下方式計算標準差$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$可以用NumPy提供的函數`numpy.std()`來驗證我們算得的標準差。“std”常用來作為標準差的縮寫。 ###Code print(f"std {np.std(X):.4f}") print(f"var {np.std(X)**2:.4f}") ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: 順便一提, 理所當然的有$0.1414^2 = 0.02$,這與我們先前算得的方差一致。標準差表示什麼呢?它告訴我們身高記錄的差異是多少。這種說法不使用數學術語。我們將在下一章介紹高斯分布的時候給出更精確的數學定義。現在,我先告訴你,約有68%的值位於以均值為中心,標準差決定的半徑內。換句話說,隨機給定的一個班級,有68%的學生的身高在1.66(1.8-0.1414)到1.94(1.8-0.1414)米之間。我們可以通過畫圖觀察這個現象: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. 因為只有五個學生,所以我們無法準確地得到68%位於一個標準差範圍內這個結果。實際上我們看到五個學生中有三個學生(60%)的身高落在$\pm1\sigma$的範圍內。在只有五個學生的情況下,你不可能得到一個比60%更接近68%的結果。我們看一個100個學生的例子。> 我們將一個標準差寫作$1\sigma$,讀作“一標準差”,不要讀成“一西格瑪”。兩個標準差寫作$2 \sigma$,以此類推。 ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print(f'mean = {mean:.3f}') print(f'std = {std:.3f}') ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. 雖然目測大約有68%的高度落在均值1.8附近$\pm1\sigma$範圍內,不過我們還是用代碼驗證一下。 ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. 
For now let's compute the standard deviation for 我們很快會對此進行更深入的討論。現在我們先算出如下數組的標準差 $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$ The mean of $Y$ is $\mu=1.8$ m, so $Y$的均值是$\mu=1.8$米,所以 $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$ We will verify that with NumPy with 我們可以用NumPy通過下式作驗證。 ###Code print(f'std of Y is {np.std(Y):.2f} m') ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. 這同我們的預期相吻合。因為Y的數據中含有更多的差異,所以它的標準差也相對大些。最後,計算$Z$的標準差。數值間沒有任何差異,故而標準差應為0. $$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! We will not to consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. 在繼續之前,我想指出我一直忽略了一點,男性的平均身高要高於女性。通常如果一個班級只有男性或者女性的話,它的身高的方差就會比有兩種性別的班級的身高方差要低。同樣考慮其他因素也有類似效果。健康的孩子比營養不良的孩子高。斯堪的纳维亚人比意大利人高。設計實驗時要考慮這一點。我先前提議利用這套方法來分析為學區預定課桌的問題。但是對於每個年齡組可能會有兩個不同的均值——一個集中在女性平均身高附近,另一個集中在男性身高附近。整個班級的平均身高會在二者之間。如果我們用所有學生的均值作為參考取購買課桌,我們買到的課桌可能即不適合男生,也不適合女生!我們不會在本書中討論這個問題。如果你想學習相關的技巧,解決這個問題,你可以隨便找一本標準概率學課本看看。 Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ 為什麼使用差值的平方為什麼在計算方差時,我們使用的是差值的“平方”?我可以用許多數學公式來說明這個問題,但我們還是從一種更簡單的角度去理解。下圖繪製了數組$X=[-3, -3, 3, -3]$及其均值。 ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. 
$Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the formula using squares we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: 若不取平方,那麼由於正負號,這些數字會全部相互抵消:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$顯然這是不對的。該數據的方差應該大於0。或許我們可以取絕對值?通過檢查我們可以看出結果是$12/4=3$,是正確的——每個數偏移均值的程度都是3。但如果有$Y=[6, -2, -3, 1]$,情況會如何呢?在這種情況下我們得到$12/4=3$。然而顯然$Y$的差異比$X$的顯著得多,計算得到的方差卻是一樣的。如果我們在公式中使用平方,我們將得到$Y$的標準差等於3.5,正確反應了其較大的差異。當然這不是一個嚴格的證明。實際上,這項技術的發明人卡尔·费里特立奇·高斯承認方差的設計有點隨意。假如數據中有離群點,那麼差值的平方可能會給予該項以不成比例的權重,這是不合適的。例如,讓我們看看這個例子: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print(f'Variance of X with outlier = {np.var(X):6.2f}') print(f'Variance of X without outlier = {np.var(X[:-1]):6.2f}') ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem.I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [4]. In this book we will always use variance and standard deviation as defined by Gauss.The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way. 這個結果“正確”嗎?由你來判斷。如果沒有值為100的離群點,那麼方差$\sigma^2=2.03$,準確反映了在沒有離群點的情況下,$X$的差異情況。該離群點的存在主導了方差計算。我們是否希望離群點主導方差,以便確定離群點的存在;抑或穩健地合併異常值,提供一個接近於沒有異常值的方差估計?同樣的,這由你來決定。顯然這取決於你手頭的具體問題。我不再深入討論這個問題。如果你感興趣,你可以看看詹姆斯·伯杰在“貝葉斯穩健性”領域的工作,以及彼得J·胡伯「4」以“穩健統計學”為主題的優秀出版物。本書仍然使用高斯定義的方差及標準差。 從這裡可以總結出一個要點,即數據的“摘要”總是無法為我們講述一個完整的故事。這個例子中,高斯定義的方差不能告訴我們單個離群點的存在。儘管如此,它還是一個強大的工具,利用它我們可以簡潔地用少數參數描述大批數據。如果我們有十億數據,我們就無法肉眼觀察圖像,一個個核對列表中的數字。統計摘要提供了一種描述數據形貌的有用方法。 GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. 
高斯分布我們現在準備好,可以學習[高斯分布](https://en.wikipedia.org/wiki/Gaussian_function)了。請牢記本章的動機。> 我们需要一个单峰,连续的概率表达方式来建模世界,这能提高计算的效率。我們先看一張圖來建立對我們討論對象的直觀感受。 ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. We can tell from the chart student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: 該曲線叫做[**概率密度函數**](https://en.wikipedia.org/wiki/Probability_density_function),簡稱“pdf”。它展示了對於一隨機變量取某值的相對似然程度。從圖中可以看出,學生的身高某種程度上1.7米更有可能是1.8米,比起1.4米遠更有可能是1.9米。從另一方面看,許多學生的身高接近1.8,身高是1.4或2.2的人數極少。最後,注意曲線以均值1.8米為中心。> 我在Supporting_Notebooks文件夾下的文章“Computing_and_Plotting_PDFs“中解釋了如何繪製高斯曲線,以及許多其它內容。你可以從[這個連接](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) 「1」在線閱讀。如果叫它“鐘形曲線”可能對讀者更形象。這個曲線在現實世界中無所不在,許多觀測結果都以這種形式分布。我不會使用“鐘形曲線”一詞來指代高斯分布,因為許多不同的分布都由類似鐘形曲線的形狀。來自非數學的資料不會像我這麼嚴謹,所以遇到沒有定義的術語時,下結論前要機智一些。不知是高度的分布曲線是如此——許多自然界的現象都展示出類似的分布,包括我們在濾波器問題中使用的傳感器。我們將會看到,它具有所有我們所期望的性質——它以概率的形式表示單峰的信念或值,它是連續的,且便於高效計算。我們很快還會看有一些我們還沒意識到的需求,但這些令人嚮往的特質高斯分布也具備了。為了給你多一點啟發,請回想一下,在“離散貝葉斯”一章中,分布的形狀是這樣的: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! 它們雖不是完美的高斯曲線,卻與之相似。我們將會用高斯分布替換該章節所用的離散概率! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? 
Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: 命名法在繼續之前,先學一點命名法的知識——該圖顯示一“隨機變量“在$(-\infty..\infty)$之間的任意值上都有“概率密度”。其含義是什麼?想象一下我們在一段高速公路上測量車輛的速度,測量是無限精確的。那麼我們就可以畫出以任意給定速度通過關口的車數。假如平均速度是120kph,那麼圖像看起來可能像是這樣: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. y軸表示“概率密度”——一個表示速度等於x軸上對應數值的車數的相對度量。我將在後面几節詳述這一概念。高斯模型是不完美的。儘管這幾張圖沒有體現,但實際上高斯分佈的“尾”無限向外延伸。這裡“尾”是指曲線上值最小的地方。當然,人類身高和車速不會小於0,更不必說$-\infty$和$\infty$了。人們常說“地圖上的不是這片土地”(譯者註:意為不要混淆概念和實體),這句話對貝葉斯濾波和統計也一樣適用。上面的高斯分佈建模了車速的分佈,但作為一個模型,它不一定是完美的。濾波器中總是存在模型與現實的差異。高斯分佈之所以在各種數學分支中都有使用,並不是因為它完美地建模了現實情況,而是因為它比其它更精確的方案更易於使用。然而即使是在本書中,也存在高斯分佈無法建模的現實問題,這使得我們不得不採用計算代價昂貴的替代方案。你可能既聽說過“高斯分佈”又聽說過“正態分佈”。在本文中,兩個指的都是一個東西,可以互相替換使用。本書會同時採用這兩種名稱,這是因為不同的來源會使用其中任意一種名稱,而我希望你同時熟悉這兩種叫法。最後,就像在這一段落中一樣,一個典型的做法將名稱縮短為“Gaussian”或者“normal”——它們都是“正態分佈”的簡寫。 Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as: 高斯分佈讓我們看看高斯分佈是如何工作的。高斯分佈是一種能完全有兩個參數描述的“連續概率分佈”。這兩個參數分別是均值($\mu$)和方差($\sigma^2$)。高斯分佈由如下函數定義: $$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$ $\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. 
Shorn of the constants, you can see it is a simple exponential: $\exp[x]$即$e^x$.就算你之前没見過這個公式,也别被它勸退了。你不需要記住它,也不需要演算。它的計算已經存儲在`stats.py`文件的`gaussian(x, mean, var, normed=True)`中了。 忽略常數項,可以看到它不過是一個指數函數: $$f(x)\propto e^{-x^2}$$ which has the familiar bell curve shape 它的形狀是我們熟悉的鐘形曲線。 ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now. 回忆一下如何查看函数的源码。在一个单元格中输入函数名并以问号结尾,然後按下CTRL+ENTER,就會跳出一個窗口,顯示出函數的源碼。將下一個單元格中代碼的注釋取消運行看看。 ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$. 讓我們繪製一個均值為22$(\mu=22)$,方差為4 $(\sigma^2=4)$的高斯分佈。 ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume. 這個曲線的“含義”是什麼?設我們的溫度計給出讀數22C. 溫度計都是不完美的,所以我們認為每次讀數都略有偏差。然而,[中心極限定理](https://en.wikipedia.org/wiki/Central_limit_theorem)指出如果我們做多次測量,那麼測量結果呈正態分佈。觀察這張圖,我們發現曲線高度正比於給定實際溫度為22°C時溫度計讀取到對應溫度的概率。注意高斯分佈是“連續”的。考慮一條無限長的數軸,在其上隨機取一點,取到2的概率是多少?顯然是0.因為這條直線上有無窮多點可以選擇。對於高斯分佈也是如此。在上圖中,取到的溫度“精確”地等於2°C的概率也是0,因為可以選取的數值無線多。這是一條什麼曲線?它就是我們所說的“概率密度函數”。曲線下某一區間內的面積就等於你取到區間內一數的概率。舉例來說如果你算出20到22之間的底面積,你就得到了讀取溫度在兩者之間的概率。還有另一種理解方式。試想石塊,又或者海綿的“密度”是怎樣的。密度是給定空間內質量大小的度量。石塊稠密而海綿稀鬆。如果你想知道石塊的重量卻沒有秤,你可以計算其體積與密度的乘積來得到質量。實踐中,同一物體的不同部位有不同的密度,所以需要計算石塊體積內的積分。 $$M = \iiint_R p(x,y,z)\, dV$$ We do the same with *probability density*. If you want to know the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. 
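As a quick numerical check of that statement, the short sketch below integrates the $\mathcal{N}(22, 4)$ density plotted above from 20 to 21 using SciPy's `quad` (SciPy is assumed to be installed; it is a dependency of FilterPy).

```python
from scipy.integrate import quad
from filterpy.stats import gaussian

# integrate the N(22, 4) probability density from 20 to 21;
# quad returns (value, estimated error)
area, _ = quad(lambda x: gaussian(x, 22, 4), 20, 21)
print(f'probability of a reading between 20 and 21: {area:.2f}')  # about 0.15
```

The area is roughly 0.15, so there is about a 15% chance that the thermometer reads between 20°C and 21°C.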
As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. Thinking back to the rock, what is the weight of an single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian 這些對於“概率密度”也是一樣的。如果你想知道溫度處於20°C與21°C之間的概率,你可以計算從20到21的積分。而積分正給出了曲線在積分內的面積。因為這條曲線是概率密度曲線,所以其定積分得到的就是概率。測得的溫度恰好等於22°C的概率是?直覺上我們知道概率是0.這是因為我們討論的是實數,而取得22°C與其它數(不妨設22.00000000000017°C)的幾率都是無窮小的。數學上我們也知道從22到22的積分是零。回到石塊的問題上來,石塊上一個點的質量是多少?一個無窮小的點是沒有質量的。求單個點的質量沒有意義。同樣的,求連續分佈中單個值的概率也沒有意義。兩者的答案都是0.然而實踐中傳感器的精度是有限的,所以讀數22°C實際上表示的是一個區間,例如22 $\pm$ 0.1°C,而這個區間的概率可以通過計算21.9到22.1的積分求得。我們可以從貝葉斯學派和頻率學派兩方面考慮這一問題。從貝葉斯的角度看,如果溫度計讀數恰好是22°C,那麼我們說該曲線描述了我們的信念——溫度接近22°C處置信度高,而真實溫度在18°C附近的置信度低。從頻率學派的角度看,我們說如果在真實溫度為22°C的條件下測量十億次,那麼統計直方圖會接近上面的這條曲線。那麼要如何計算概率,或者說如何求曲線的面積呢?方法是計算高斯分佈的定積分。 $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$ This is called the *cumulative probability distribution*, commonly abbreviated *cdf*.I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute 這被稱為“累積概率函數”,常縮寫為“cdf”。我實現了`filterpy.stats.norm_cdf`用於計算積分。例如可以作如下計算 ###Code from filterpy.stats import norm_cdf print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Cumulative probability of range 21.5 to 22.5 is 19.74% Cumulative probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as 均值($\mu$)的含義正如其名——是所有可能性的平均。由於高斯分佈具有對稱性,故而均值也恰好是曲線的最高點。溫度計讀數為22°C,所以我們用它作為均值。隨機變量$X$服從高斯分佈,記為$X \sim\ \mathcal{N}(\mu,\sigma^2)$。其中$\sim$表示“服從於……分佈”。於是我們可以將溫度計讀數表示為 $$\text{temp} \sim \mathcal{N}(22,4)$$ This is an extremely important result. 
Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. 這個結果十分重要。高斯分佈允許我們只用兩個參數來表示無窮多取值的可能性!給定值$\mu=22$與$\sigma^2=4$我可以計算任意範圍內測量值的分佈。一些來源使用$\mathcal N (\mu, \sigma)$而非$\mathcal N (\mu, \sigma^2)$表是高斯分佈。兩種表示方法都可以,它們都是慣例記法。你在看到類似於$\mathcal{N}(22,4)$的符號時,你需要知道這是哪一種記法。本書一律使用$\mathcal N (\mu, \sigma^2)$,這種記法,所以在這個例子中$\sigma=2$, $\sigma^2=4$。 The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) 方差與信念所有的概率密度分佈,其虛線下的面積總是等於1. 這從直覺上很容易理解——曲線下的面積等於所有可能性的總和。一個事件發生了,那麼“所有可能性中有某種可能性為真”的概率是1. 所以概率密度的總和必須是1. 我們可以用少量代碼證明這一點。(如果你偏向於用數學方法證明,那麼你可以計算高斯分佈從$-\infty$到$\infty$的積分) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values. 這引出了一個重要的見解。即方差小則曲線窄。這是因為方差是樣本偏離均值的程度的度量。為使面積等於1,曲線必須變高。另一方面,方差大則曲線寬,這是因為曲線必須變矮以使得面積等於1. 讓我們從圖像上理解這一點。我會使用前面提到的`filterpy.stats.gaussian`函數,它可以接受標量作為輸入,也可以接受數組作為輸入。 ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument`normed` to control this. 一般`gaussian`函數會對輸出作歸一化,將輸出轉化為概率分佈。可以用`normed`參數控制這一行為。 ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*. 沒有歸一化的高斯函數就不能被稱為高斯分佈。 ###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. 
Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. 這張圖高速我們什麼?方差$\sigma^2=0.2^2$的高斯分佈非常窄,它表明限制在$\pm 0.2$標準差的範圍內,我們非常相信$x=23$。對比之下$\sigma^2=1^2$ 的高斯分佈同樣相信$x=23$,但其相信程度沒那麼高。當我們對$x=23$的信念降低時,我們的信念就被分散到其它可能取值上去——比如,我們對$x=20$或 $x=26$的信念都提高了。$\sigma^2=0.2^2$幾乎消除了取值為$22$或$24$的可能性,而$\sigma^2=1^2$的情況下,兩者的可能性與取值為$23$的可能性相當。重新考慮溫度計的問題,我們可以將這三條曲線看作三種不同溫度計的讀數。$\sigma^2=0.2^2$的曲線表示一個相當準確的溫度計,而$\sigma^2=1^2$ 的曲線表示一個相等不準確的溫度計。注意高斯分佈為我們帶來了它的強大表示能力——我們可以完整地表示溫度計的讀數和誤差——僅需要均值和方差兩個參數。另一種等效的高斯分佈表示方式是$\mathcal{N}(\mu,1/\tau)$,其中$\mu$表示“均值”而$\tau$表示“精度”。$1/\tau = \sigma^2$,是方差的導數。儘管本書不會採用這種表示方式,但需要注意的是,這種表示方式表明方差是數據精確程度的度量。方差小則精度大——因而此處的精度是極大的。反之則精度小——這就導致信念廣泛分散開來。你需要習慣這三種不同的表示高斯分佈的方式。在貝葉斯學派的術語中,高斯分佈反映我們對測量值的“信念”,表示測量值的“精度”,表示測量值中的“方差”。它們是陳述同一事實的不同方式。我說的有點超前了。但在下一章中,我就會開始用高斯分佈表達對事物的信念,例如估計被跟蹤物體的位置或傳感器的精度。 The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. 
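We can verify those percentages directly with the `norm_cdf` function used earlier, applied to the $\mu=22$, $\sigma=0.2$ example (a small illustrative sketch):

```python
from filterpy.stats import norm_cdf

# probability mass within 1, 2, and 3 standard deviations of N(22, 0.2**2)
for k in (1, 2, 3):
    p = norm_cdf((22 - k*0.2, 22 + k*0.2), 22, 0.2**2)
    print(f'within {k} standard deviation(s): {p*100:.2f}%')
```

This prints approximately 68.27%, 95.45%, and 99.73%, which is where the rule gets its name.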
Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.The following graph depicts the relationship between the standard deviation and the normal distribution. 68-95-99.7法則現在值得花點時間討論標準差。標準差度量了數據偏離均值的程度。對於高斯分佈,68%的數據落在均值附近一標準差($\pm1\sigma$)的範圍內,95%的數據落在兩標準差($\pm2\sigma$)的範圍內,有99.7%落在三標準差($\pm3\sigma$)的範圍內。這常常被叫做[68-95-99.7法則](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule)。如果你得知一個班級的某次考試分數的均值是71,標準差是9.4,那麼在分數服從正態分佈的條件下你就能得出結論,認為95%的學生的分數落在52.2到89.8之間(即$71 \pm (2 * 9.4)$)。最後,它們不是任意數字。如果我們的位置服從高斯分佈,均值為$\mu=22$米,那麼其標準差的單位也是米。那麼$\sigma=0.2$表示68%的測量值落在21.8到22.2米的範圍內。方差是標準差的平方,所以$\sigma^2 = .04$平方米。正如上一小節中可以看到的那樣,寫作$\sigma^2 = 0.2^2$有時含義更加明顯,這是因為0.2的單位與原數據的單位是一樣的。下圖展示了標準差與高斯分佈的關係。 ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. 可交互的高斯分佈圖我為那些使用Jupyter Notebook的讀者準備了一個可交互版本的高斯分佈圖。調整滑塊以修改$\mu$與$\sigma^2$。調整$\mu$會使得分佈左右移動,這是因為它的均值發送了改變。調整$\sigma^2$使得分佈的薄厚發生改變。 ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. 最後,我為通過瀏覽器閱讀本書的讀者準備了高斯分佈的動圖。首先,均值從左向右移,然後均值固定在$\mu=5$,方差發生改變。 Computational Properties of GaussiansThe discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. A remarkable property of Gaussians is that the sum of two independent independent normal variables (https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables) is also normally distributed! The product is not Gaussian, but proportional to a Gaussian. There we can say that the result of multipying two Gaussian distributions is a Gaussian function (recall function in this context means that the property that the values sum to one is not guaranteed).Before we do the math, let's test this visually. 
高斯分佈的計算特性離散貝葉斯濾波器計算任意概率分佈的和與積。而卡爾曼濾波器使用高斯分佈,其餘部分保持一致。所以我們需要計算高斯分佈的乘積和累加。高斯分佈的一個特性是,兩個獨立的服從高斯分佈的隨機變量之和([https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables](https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables))仍服從高斯分佈!高斯分佈的乘積則不是高斯分佈,但仍然與高斯分佈成比例。這裡我們可以說兩個高斯分佈的乘積是高斯函數(這裡函數的意思是其和不一定為1)。在開始數學推導之前,我們先親眼確認這些性質。 ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.8, var=.1) g2 = gaussian(x, mean=1.3, var=.2) plt.plot(x, g1, x, g2) g = g1 * g2 # element-wise multiplication g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output _____no_output_____ ###Markdown Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution.Gaussians are nonlinear functions. Typically, if you multiply a nonlinear equations you end up with a different type of function. For example, the shape of multiplying two sins is very different from `sin(x)`. 這裡,我建立和繪製了兩個高斯分佈,分別是g1=$\mathcal N(0.8, 0.1)$和g2=$\mathcal N(1.3, 0.2)$。然後我計算它們的乘積,並對結果做歸一化。如圖所示,結果“看起來”像是一個高斯分佈。高斯分佈不是線性函數。一般情況下,非線性函數的乘積會得到其它類型的不同函數。例如,兩個正弦函數的乘積與`sin(x)`是非常不同的。 ###Code x = np.arange(0, 4*np.pi, 0.01) plt.plot(np.sin(1.2*x)) plt.plot(np.sin(1.2*x) * np.sin(2*x)); ###Output _____no_output_____ ###Markdown But the result of multiplying two Gaussians distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by: 但是高斯分佈的乘積仍然是高斯函數。這是卡爾曼濾波器從計算上可行的關鍵原因。從另一方面來說,卡爾曼濾波器之所以選擇高斯分佈是因為其有良好的計算性質。兩個獨立的服從高斯分佈的隨機變量的乘積如下: $$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$ The sum of two Gaussians is given by 兩個獨立的服從高斯分佈的隨機變量的乘積如下: $$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$ At the end of the chapter I derive these equations. However, understanding the deriviation is not very important. 本章的最後會給出這些式子的推導。但是我不要求你理解它們。 Putting it all TogetherNow we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussins. Here I will explain why we would want to use Gaussians.In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so: 聯結所有線索現在我們可以討論高斯分佈在濾波器中的應用了。下一章我們會用高斯分佈實現一個濾波器。現在我先介紹我們為什麼要選擇高斯分佈。上一章我們給出了一個使用數組來表示概率分佈。我們通過計算數組間的逐元素乘積來實現更新操作,其中一個數組是概率分佈,另一個數組是測量值在各點的似然度,類似下面這段代碼: ###Code def normalize(p): return p / sum(p) def update(likelihood, prior): return normalize(likelihood * prior) prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2])) likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16])) posterior = update(likelihood, prior) book_plots.bar_plot(posterior) ###Output _____no_output_____ ###Markdown In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory. But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? 
I'll compute the mean and variance of the posterior and plot it against the bar chart. 換句話說,為了要得到結果,我們需要計算十個乘法。對於實際應用的濾波器而言,所用的數組極大,維度極高,從而需要數十億次計算,佔用大量內存。然而這個離散分佈看起來是一個高斯分佈。如果我們使用高斯分佈來代替這個數組,情況會怎樣呢?我計算了後驗的均值和方差,然後把結果畫了出來。 ###Code xs = np.arange(0, 10, .01) def mean_var(p): x = np.arange(len(p)) mean = np.sum(p * x,dtype=float) var = np.sum((x - mean)**2 * p) return mean, var mean, var = mean_var(posterior) book_plots.bar_plot(posterior) plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r'); print('mean: %.2f' % mean, 'var: %.2f' % var) ###Output mean: 5.88 var: 1.24 ###Markdown This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.Next, recall that our filter implements the update function with 結果令人映像深刻。我們僅僅只需使用兩個數就能表示整個離散分佈。這個例子的離散分佈只用到10個數,可能沒有說服力。但是實際問題中可能使用數百萬的數字,而我們仍只需用兩個參數來概括它們。接著,記得我們是這樣實現濾波器的更新函數的: ```pythondef update(likelihood, prior): return normalize(likelihood * prior)``` If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with 如果數組中有一百萬個元素,那麼就需要一百萬次乘法。然而,如果用高斯分佈來替換數組,那乘法操作就可以這樣實現: $$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$ which is three multiplications and two divisions. 一共是三個乘法兩個除法。 Bayes TheoremIn the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information. We implemented the `update()` function with this probability calculation: 貝葉斯定理上一章我們實現了一個算法,它通過利用每一時刻的信息來完成推理,而這些信息是通過離散概率分佈表示的。這個過程中我們發現了[貝葉斯定理](https://en.wikipedia.org/wiki/Bayes%27_theorem)。這個定理告訴我們如何在已知先驗的條件下計算概率。我們實現了`update()`函數來執行該概率計算: $$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as: 我們看到這其實就是貝葉斯定理。我馬上給出數學推導。不過從某種角度上,數學推導模糊了公式所表達的簡單思想。上面的算式可以讀作: $$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$ where $\| \cdot\|$ expresses normalizing the term.We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes theorem is 其中$\| \cdot\|$表示對中間的項做歸一化。一開始我們從關於在走廊中散步的一隻狗開始推理,然後發展出現在的這些知識。然而正如你所見,這些公式能廣泛應用於許多濾波問題。我們在後續每一章節都會使用這一公式。回想一下,“先驗”是在我們考慮測量值的概率(即“似然度”)前某事發生的概率,而“後驗”則是我們將測量的信息納入考量後的概率。貝葉斯理論指出 $$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$ $P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). 
That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions $P(A \mid B)$稱作[條件概率](https://en.wikipedia.org/wiki/Conditional_probability)。如其名所示,它表示了“假設”$B$發生的條件下,$A$發生的概率。舉例來說,如果昨天下雨了,那麼今天下雨的概率要比以往更大一些,原因是雨天往往持續一天以上。我們可以將昨天下雨的條件下今天也下雨的概率表示為$P$(今天下雨$\mid$昨天下雨).我忽略了一個重要的問題。上面的代碼中,我們使用的是概率的數值,而不是概率的數組——即“概率分佈”。我方才為貝葉斯定理給出的公式同樣使用的概率數值,而不是概率分佈。然而其實它也同樣適用於概率分佈的情況。我們用小寫的$p$來表示概率分佈。 $$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$ In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. So, let's plug that into the equation and solve it. 上面的等式中,$B$是“證據”,$p(A)$是“先驗”,$p(B \mid A)$是“似然”,而$p(A \mid B)$是“後驗”。如果將式子中的數學項替換為對應的名詞,你就會發現這正與我們的更新公式吻合。讓我們根據我們的問題改寫這個公式。我們用$x_i$表示位置*i*,用$z$表示測量值。於是我們相求的是$P(x_i \mid z)$,即已知測量值$z$的條件下,狗位於位置$x_i$的概率。現在,我們將這些符號代入公式求解。 $$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$ That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function: 儘管它看起來很醜陋,但本質上是很簡單的。讓我們看看右側每一項的含義。首先是$p(z \mid x_i)$,它表示似然度,即測量值位於$x_i$處的概率。$p(x_i)$是“先驗”——即我們考慮測量值之前的信念。我們將兩項相乘,也就得到了`update()`的不做歸一化的版本: ```pythondef update(likelihood, prior): posterior = prior * likelihood p(z|x) * p(x) return normalize(posterior)``` The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as 最後考慮分母中的$p(z)$. 它是不考慮實際位置的情況下,取得測量值$z$的概率。它常被稱為“證據”。代碼中,我們通過求$x$的和,即`sum(belief)`來計算它。這也和我們做歸一化的方式是一樣的!所以,`update()`本質上所作的事情就是計算貝葉斯公式。許多文獻給出的是這個公式的積分形式。不論如何,積分本質上只是對連續函數的求和。所以,你可能會看到這種形式的貝葉斯定理: $$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$ This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. 
A recent [opinion piece ](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step i, what is our probable state given a measurement. That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute 其中的分母基本不可能有解析解。即使可以求解,其數學過程也極其繁難。最近英国皇家统计学的[意見書](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)中把它叫做“狗的早餐”[8]。濾波器的課本中,無解析解的積分方程隨處可見。你不必畏懼這些公式,因為我們用簡單的對後驗做歸一化就能處理這個積分。在**粒子濾波器**一章,我們會學到更多處理這類情況的技術。在此之前,你只需要知道這是一個可求和的歸一化項。我想說的是,當你面對一整頁積分時,只需知道它們就是求和。將它們和本章內容聯繫起來,困難就會消失。問問自己“為什麼要求這些值的和?”,“為什麼要除以這個數?”。令人驚訝的是,答案往往顯而易見,以及作者有時候會忘記解釋其中的道理。可能你還沒有清楚認識到貝葉斯定理的威力。要知道計算$p(x_i \mid Z)$,即給定測量值的條件下計算可能狀態是什麼,這件事情通常是極困難的。貝葉斯定理是通用的。我們可能想在得到癌症檢測報告後知道罹患癌症的概率,或者在已有多種傳感器數據的情況下,天會下雨的概率。這些問題看起來是很難直接求解的。但貝葉斯定理允許我們反過來計算$p(Z\mid x_i)$,而這通常是容易直接計算的。 $$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$ That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy. 即是說,要想知道已有特定傳感器數據的情況下落雨的概率,只須計算下雨時傳感器讀數的可能性!這是一個相比之下**相當**簡單的問題!好吧,天氣預報仍然是一個很難的問題,但貝葉斯定理至少讓它簡單了些。正如你在“離散貝葉斯”一章中所見,我們通過計算西蒙位於位置`x`時傳感器讀數的可能性來計算西蒙在走廊任意區域的可能性。這就將複雜的問題簡化了。 Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. 
That is 全概率定理我們已經正式知道了`update()`函數背後的數學原理。那麼`predict()`函數呢?`predict()`實現的是[全概率定理](https://en.wikipedia.org/wiki/Law_of_total_probability)。讓我們回憶一下`predict()`是如何計算的。它計算的是所有可能的移動方式下目標來到指定位置的概率。目標在時刻$t$來到位置$i$的概率可以記為$P(X_i^t)$. 我們計算$t-1$時刻的先驗$P(X_j^{t-1})$與從$x_j$移動到$x_i$的概率的積,並對所有可能的$j$求和。即 $$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$ That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation 該公式稱為“全概率公式”。維基百科[6]中寫道“它表示了通過不同事件實現某一結果的總的概率”。如果我只告訴你這個公式和實現`predict()`函數的方法的話也行,但這樣一來你就無法理解為什麼這個公式能夠運作了。作為提醒,實現該公式的代碼如下 ```pythonfor i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N result[i] += prob_dist[index] * kernel[k]``` Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. 用scipy.stats包計算概率本章中我用了[FilterPy](https://github.com/rlabbe/filterpy)來做計算和繪製高斯分佈。這樣你可以查看庫的源代碼從而了解各函數的實現方式。不過正如俗話所說,Python是“自帶電池”的。`scipy.stats`包同樣提供了豐富的統計函數。所以我們不妨看看如何使用scipy.stats來計算統計量和概率。`scipy.stats`模塊包含許多對象,你可以用它們計算許多種概率分佈的屬性。這個模塊的完整文檔可以從這裡找到:http://docs.scipy.org/doc/scipy/reference/stats.html. 我們先關注於norm變量,它實現了正態分佈。讓我們看看一些使用`scipy.stats.norm`計算高斯分佈的代碼,並對比該代碼與FilterPy的`guassian()`函數在返回值上有什麼不同。 ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: 調用`norm(2, 3)`建立了一個分佈。在scipy中,這被稱為是“凍住”的分佈——它建立和返回了一個均值為2,標準差為3的對象。你可以重複使用該對象以獲取多個點處的概率密度,例如: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. 
[scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor)[2]處的文檔列舉了許多其它函數。例如我們可以利用`rvs()`函數從分佈中生成$n$個樣本。 ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 3.539 0.974 0.532 -2.266 1.364 2.541 2.854 3.469 2.423 -2.208 0.778 1.023 2.271 1.35 3.752] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. 我們也可以使用[累積分佈函數(CDF)](https://en.wikipedia.org/wiki/Cumulative_distribution_function)來計算取得的隨機變量小於等於$x$的概率。 ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: 我們還可以查詢分佈的許多屬性: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test scores distributions. 利用高斯分佈建模世界的局限早先我提到“中心極限定理”,它表明一定條件下,給定一組獨立隨機變量,不論它們服從何種分佈,它們的和都服從正態分佈。這點對於我們很重要,因為自然界中不滿足正態分佈的隨機變量太多了,但是只要在大量樣本上應用中心極限定理,我們就會得到正態分佈。然而,證明的關鍵在於所謂的“一定條件”。對於物理世界來說,這個條件不一定是可以滿足的。例如,廚房用秤的讀數不能小於0. 
但如果我們用高斯分佈表示其測量誤差,那麼曲線的左側會一直延伸到負無窮,這表示它有非常小的概率還是能讀到負數。我不會詳盡論述這一廣闊的話題。考慮一個簡單的例子。我們認為諸如考試分數一類的數字服從正態分佈。如果你的教授曾經“按曲線給分”,那麼你也受到過這個假設的影響。但是,其實考試分數不可能服從正態分佈。這是因為正態分佈為“任意”數值都賦予了非零概率,不論這個數字離均值有多麼遠。例如,假設均值為90,標準差為13,那麼正態分佈認為某人取得90分的概率是很高的,而取得40分的概率是很低的。然而它也認為存在微小的概率使得某人取得的分數為-10或者150。它也為分數為$-10^{300}$和$10^{32986}$的情況分配了極小的概率。高斯分佈的尾巴是無限長的。我們知道這在考試中是不可能發生的。不考慮額外加分,你既不可能取得小於0的分數,也不可能取得大於100的分數。我們將正態分佈畫出來,看看它對真實的分數分佈擬合得多麼差。 ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=30') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: 曲線下的面積不等於1,所以這不是一個合法的概率分佈。舉例來說,真實情況可能是有更多的學生的分佈處於上界附近,於是分佈的尾部變得更“肥”。此外,考試可能無法區分學生技能的細微差異,使得均值左側的分佈在某些地方有些集中。傳感器反映了真實的世界。所以,傳感器的誤差往往不真正服從正態分佈。現在談它對卡爾曼濾波器設計的負面影響還為時過早,但你不妨先記住卡爾曼濾波器的數學原理基於理想的世界模型。我現在展示一段生成分佈的代碼,之後我會用它來模擬各式各樣的過程模型以及傳感器模型。這裡用到的分佈叫做[學生t-分佈](https://en.wikipedia.org/wiki/Student%27s_t-distribution)。設我們要建模的傳感器的輸出包含白噪聲。為簡單起見,設信號為常數10,噪聲的標準差為2. 我們可以用函數`numpy.random.randn()`得到均值為0標準差為1的隨機數。於是我可以通過如下代碼模擬該傳感器: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. 畫出來的信號長這樣。 ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like what I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. 圖像與我的預期一致。信號以10為中心。標準差為2,所以68%的測量值落在10$\pm$2的範圍內,99%的測量值落在10$\pm$6的範圍內,與觀察相符。現在看看學生t-分佈生成的分佈。我不會深入介紹數學原理,只是簡單地給出源代碼,並且用它向你展示繪製出的分佈。 ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation.
""" x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with 從圖上可以看到,輸出與正態分佈相似,但是超出3個標準差範圍(7到13)的離群點數量更多一些。學生t-分佈並不能準確建模你的傳感器(比如GPS或多普勒雷達)。本書主題也不在於如何建模物理系統。儘管如此,當需要呈現真實世界的噪聲時,使用它來測試濾波器的性能還是合理的。本書後續的仿真和測試都會使用這樣的分佈。這樣做並非因為我們在杞人憂天。卡爾曼濾波器假設噪聲服從高斯分佈,一旦該假設不成立,那麼高斯分佈就不能以理想狀態工作。關鍵任務的濾波器設計,比如航天器的濾波器設計,不僅需要對大量理論知識的掌握,還需要對航天器的傳感器性能有經驗性的認識。舉一個例子,有一次我看到NASA的演示文檔上這樣說,說理論上應當用3個標準差作為判定噪聲與有效測量值的界限,但實際上他們使用的是5到6個標準差,而這是通過實驗確定的。rand_student_t的代碼包含於`filterpy.stats`. 你可以這樣使用它: ```pythonfrom filterpy.stats import rand_student_t``` While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from an exponential distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from an exponential distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). the `scipy.stats` module contains the function `describe` which computes these statistics, among others. 
以下內容不會詳述,但我簡單提一下。統計學定義了多種通過分佈與指數分佈的差異來描述分佈形狀的方法。正態分佈關於均值對稱,形如鐘形曲線。但一般的概率分佈則未必關於均值對稱。這一不對稱性的度量叫作[偏斜度](https://en.wikipedia.org/wiki/Skewness)。分佈的尾部可以偏短,偏胖,偏瘦,或者以其他方式有別於指數分佈。這一差異的度量叫作[峰度](https://en.wikipedia.org/wiki/Kurtosis)。`scipy.stats`模塊的`describe`函數能計算包含這些度量在內的各類統計量。 ###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: 我們觀察看看一大一小兩個正態分佈的情況: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.4763064406143493, 1.2590181787171273), mean=-0.2696564651981756, variance=0.5837891948660157, skewness=0.4640233696384569, kurtosis=-0.056406977430087046) DescribeResult(nobs=300000, minmax=(-4.648089038058865, 4.464044626625718), mean=-0.00020574188575666732, variance=0.999635785544993, skewness=0.0033931336262926458, kurtosis=0.013906631067046593) ###Markdown [Table of Contents](./table_of_contents.ipynb) Probabilities, Gaussians, and Bayes' Theorem ###Code from __future__ import division, print_function %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard DeviationsMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. 
We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:$$P(X{=}4) = p(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observent. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. 
{even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random variable* are known as the *discrete probability distribution* and the probabilities for all values of a *continuous random variable* are known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must satisfy $p(x_i) \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads are 70%, then the odds of getting tails must be 30%. We formalize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters are $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency then the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense.
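The book does not compute modes numerically, but if you want to, a short sketch using only the standard library is enough; `modes` is my own helper name, not something the book relies on:

```python
from collections import Counter

def modes(data):
    """Return every value that occurs with the highest frequency."""
    counts = Counter(data)
    highest = max(counts.values())
    return [value for value, count in counts.items() if count == highest]

print(modes([1, 2, 2, 2, 3, 4, 4, 4]))  # two modes, so the set is multimodal
print(modes([5, 7, 7, 13]))             # one mode, so the set is unimodal
```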
For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below are in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.NumPy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true. ###Code np.median(x) ###Output _____no_output_____ ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has a 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we replace the sum with an integral, like so$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically. ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact value requires an infinite sample size. ExerciseWhat is the expected value of a die roll? SolutionEach side is equally likely, so each has a probability of 1/6.
Hence$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ ExerciseGiven the uniform continuous distribution$$f(x) = \frac{1}{b - a}$$compute the expected value for $a=0$ and $B=20$. Solution$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print("{:.2f} meters squared".format(np.var(X))) ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. 
In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print('mean = {:.3f}'.format(mean)) print('std = {:.3f}'.format(std)) ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. 
This is true for other factors as well. Well-nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school! We will not consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the formula using squares we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print('Variance of X with outlier = {:6.2f}'.format(np.var(X))) print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1]))) ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem.I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. In this book we will always use variance and standard deviation as defined by Gauss.The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier.
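As an aside the book does not pursue, a robust measure of spread such as the median absolute deviation barely notices the outlier. The sketch below is my own illustration in plain NumPy; `mad` is a hypothetical helper, not a book or FilterPy function:

```python
import numpy as np

def mad(data):
    """Median absolute deviation, a robust measure of spread."""
    data = np.asarray(data)
    return np.median(np.abs(data - np.median(data)))

X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100]
print('MAD with outlier    =', mad(X))
print('MAD without outlier =', mad(X[:-1]))
```

Robustness trade-offs like this are the subject of Huber's work cited above; this book stays with the variance as defined by Gauss.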
However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. We can tell from the chart student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. 
If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative number of cars that are going the speed at the corresponding point on the x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. Shorn of the constants, you can see it is a simple exponential: $$f(x)\propto e^{-x^2}$$which has the familiar bell curve shape ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now. ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called the [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements, the measurements will be normally distributed.
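The book does not demonstrate the central limit theorem here, but a quick numerical sketch (my own addition) shows the idea: averages of many die rolls bunch up the way a Gaussian does, with roughly 68% of them falling within one standard deviation of their mean.

```python
import numpy as np

# each row is one experiment: the average of 1,000 rolls of a fair die
averages = np.random.randint(1, 7, size=(5000, 1000)).mean(axis=1)

mean, std = averages.mean(), averages.std()
within_1_std = np.mean((averages > mean - std) & (averages < mean + std))
print('mean of the averages:   ', mean)          # close to 3.5
print('fraction within one std:', within_1_std)  # close to 0.68
```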
When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2? Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.$$M = \iiint_R p(x,y,z)\, dV$$We do the same with *probability density*. If you want to know the probability of the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C are infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. Thinking back to the rock, what is the weight of a single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$This is called the *cumulative probability distribution*, commonly abbreviated *cdf*.I wrote `filterpy.stats.norm_cdf` which computes the integral for you.
For example, we can compute ###Code from filterpy.stats import norm_cdf print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Cumulative probability of range 21.5 to 22.5 is 19.74% Cumulative probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible values, weighted by how probable each one is. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density function it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must integrate to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values. ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument `normed` to control this. ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown If the Gaussian is not normalized it is called a *Gaussian function* instead of a *Gaussian distribution*. ###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow.
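To put a number on "narrow", we can evaluate the density at the mean for the narrowest and widest of these curves — a quick sketch using the `gaussian()` function shown above:

```python
# density at the mean for a narrow and a wide Gaussian centered at 23
print(gaussian(x=23., mean=23., var=0.2**2))   # tall, narrow curve
print(gaussian(x=23., mean=23., var=1.**2))    # short, wide curve
```

The narrow Gaussian packs about five times more density at its mean, which it must do to keep its total area equal to one.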
It is saying that we believe $x=23$, and that we are very sure about that: the standard deviation is only $0.2$. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and the curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units of meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.The following graph depicts the relationship between the standard deviation and the normal distribution.
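Before we look at that graph, here is a quick numerical check of the rule using `norm_cdf` and the test score example above (mean 71, standard deviation 9.4) — a minimal sketch:

```python
from filterpy.stats import norm_cdf

# fraction of scores within 1, 2, and 3 standard deviations of the mean
for k in (1, 2, 3):
    p = norm_cdf((71 - k*9.4, 71 + k*9.4), 71, 9.4**2)
    print('within {} std: {:.1f}%'.format(k, p*100))
```

The printed values are approximately 68.3%, 95.4%, and 99.7%, as the rule promises.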
###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansThe discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. A remarkable property of Gaussian distributions is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian. Thus we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall that *function* in this context means the property that the values sum to one is not guaranteed).Before we do the math, let's test this visually. ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.8, var=.1) g2 = gaussian(x, mean=1.3, var=.2) plt.plot(x, g1, x, g2) g = g1 * g2 # element-wise multiplication g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output _____no_output_____ ###Markdown Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution.Gaussians are nonlinear functions. Typically, if you multiply nonlinear functions you end up with a different type of function. For example, the result of multiplying two sines has a very different shape from `sin(x)`. ###Code x = np.arange(0, 4*np.pi, 0.01) plt.plot(np.sin(1.2*x)) plt.plot(np.sin(1.2*x) * np.sin(2*x)); ###Output _____no_output_____ ###Markdown But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$At the end of the chapter I derive these equations. However, understanding the derivation is not very important. Putting it all TogetherNow we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians.
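Before we do, here is a minimal sketch that checks the product and sum equations above numerically. The helper names `gaussian_multiply` and `gaussian_add` are my own, chosen for illustration — they are not the FilterPy functions used elsewhere in this chapter.

```python
def gaussian_multiply(g1, g2):
    """Product of two Gaussians given as (mean, variance) tuples."""
    mean1, var1 = g1
    mean2, var2 = g2
    mean = (var1*mean2 + var2*mean1) / (var1 + var2)
    var = (var1 * var2) / (var1 + var2)
    return mean, var

def gaussian_add(g1, g2):
    """Sum of two independent Gaussians given as (mean, variance) tuples."""
    return g1[0] + g2[0], g1[1] + g2[1]

print(gaussian_multiply((0.8, 0.1), (1.3, 0.2)))  # ~(0.967, 0.067)
print(gaussian_add((0.8, 0.1), (1.3, 0.2)))       # (2.1, 0.3)
```

Note how the mean of the product lies between the two input means, pulled toward the Gaussian with the smaller variance — the same behavior you can see in the normalized product plotted above.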
Here I will explain why we would want to use Gaussians.In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so: ###Code def normalize(p): return p / sum(p) def update(likelihood, prior): return normalize(likelihood * prior) prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2])) likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16])) posterior = update(likelihood, prior) book_plots.bar_plot(posterior) ###Output _____no_output_____ ###Markdown In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory. But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart. ###Code xs = np.arange(0, 10, .01) def mean_var(p): x = np.arange(len(p)) mean = np.sum(p * x,dtype=float) var = np.sum((x - mean)**2 * p) return mean, var mean, var = mean_var(posterior) book_plots.bar_plot(posterior) plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r'); print('mean: %.2f' % mean, 'var: %.2f' % var) ###Output mean: 5.88 var: 1.24 ###Markdown This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.Next, recall that our filter implements the update function with```pythondef update(likelihood, prior): return normalize(likelihood * prior)```If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$which is three multiplications and two divisions. Bayes TheoremIn the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information. We implemented the `update()` function with this probability calculation:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$where $\| \cdot\|$ expresses normalizing the term.We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. 
We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes theorem is$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. So, let's plug that into the equation and solve it.$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:```python
def update(likelihood, prior):
    posterior = prior * likelihood   # p(z|x) * p(x)
    return normalize(posterior)
```The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of the unnormalized posterior, which is `sum(likelihood * prior)`, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece ](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral-laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior.
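To make that concrete, here is a minimal sketch — it assumes the `prior`, `likelihood`, and `update()` definitions from the earlier cells are still in scope — showing that dividing by the evidence term is exactly the normalization we already perform:

```python
import numpy as np

unnormalized = likelihood * prior
evidence = np.sum(unnormalized)   # p(z): total probability of the measurement
print(np.allclose(unnormalized / evidence, update(likelihood, prior)))
```

This prints `True`; the integral in the denominator has become nothing more than a sum.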
We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term which we can compute with a simple sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step $i$, what is our probable state given a measurement? That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy. Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$, $P(X_j^{t-1})$, multiplied by the probability of moving from cell $x_j$ to $x_i$. That is$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation```python
for i in range(N):                    # for each cell of the predicted distribution
    for k in range(kN):               # for each element of the movement kernel
        index = (i + (width-k) - offset) % N
        result[i] += prob_dist[index] * kernel[k]
``` Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`.
So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the `norm` variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 5.912 -2.009 -2.718 1.266 -1.085 3.941 3.499 5.626 -0.137 1.396 4.562 2.127 8.176 1.794 1.829] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the sum of many independent random variables will be approximately normally distributed, regardless of how the individual variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution.
This is because the distribution assigns a nonzero probability density to *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test score distributions. ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=30') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99.7% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation.
""" x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with```pythonfrom filterpy.stats import rand_student_t```While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from an exponential distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from an exponential distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). the `scipy.stats` module contains the function `describe` which computes these statistics, among others. ###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.8942931152842175, 0.49750728125905835), mean=-0.10563915941786776, variance=0.4841165908890319, skewness=-1.8464582995970673, kurtosis=2.5452896197893757) DescribeResult(nobs=300000, minmax=(-4.772620736872989, 4.446895068081072), mean=-0.0006837046884366415, variance=0.9995353806594786, skewness=0.002331471754136653, kurtosis=0.007185223820032061) ###Markdown [Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb) Gaussian Probabilities ###Code #format the book %matplotlib inline from __future__ import division, print_function from book_format import load_style load_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. 
That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard DeviationsMost of you will have had exposure to at least basic statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. 
So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|Some sources call this the *probability function*. Using ordinary function notation, we would write:$$P(X{=}4) = f(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$. Some texts use $Pr$ or $Prob$ instead of $P$.Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. 
###Code import numpy as np x = [1.8, 2.0, 1.7, 1.9, 1.6] print(np.mean(x)) ###Output 1.8 ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. ###Code print(np.median(x)) ###Output 1.8 ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter. Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. 
For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X)) print(np.mean(Y)) print(np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = E[(X - \mu)^2]$$Ignoring the squaring for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. We have the formula for the expected value $E[X] = \sum\limits_{i=1}^n p_ix_i$, and we will assume that any height is equally probable, so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(np.var(X), "meters squared") ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. 
"How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.book_plots import set_figsize, figsize from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] plot_height_std(data, lw=2) print('mean = {:.3f}'.format(np.mean(data))) print('std = {:.3f}'.format(np.std(data))) ###Output _____no_output_____ ###Markdown We can see by eye that roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8.We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! We will not be faced with these kinds of problems in this book. 
Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom='off') ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the formula using squares we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, 3, 2, 100] print('Variance of X = {:.2f}'.format(np.var(X))) ###Output Variance of X = 1210.69 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. In this book we will always use variance and standard deviation as defined by Gauss. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plt.figure() plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. In the chart above, a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. 
You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis.You may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. This is true, but this is a common limitation of mathematical modeling. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. 
A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.```python%load -s gaussian stats.pydef gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. """ return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) / math.sqrt(2*math.pi*var))```We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can "sort of" think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22 is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. 
For example, we can compute ###Code from filterpy.stats import norm_cdf print('Probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of range 21.5 to 22.5 is 19.74% Probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible values, weighted by their probability. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.> Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density function it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must integrate to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1. Let's look at that graphically: ###Code from filterpy.stats import gaussian xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.05), label='$\sigma^2$=0.05') plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':') plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.05$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.05$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=5$ considers them nearly as likely as $23$. If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers.
The curve for $\sigma^2=0.05$ represents a very accurate thermometer, and the curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance. An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact. I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, interactive, fixed set_figsize(y=3) def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.1) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim((0, 1)) interact (plt_g, mu=(0., 10), variance = (.2, 1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified.
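Before moving on, here is a quick numerical check of the 68-95-99.7 rule and of the test score example above. This is only a sketch; it uses `scipy.stats.norm`, which is introduced more fully later in this chapter, and works with the standard normal since the rule does not depend on the particular mean or variance.
###Code
from scipy.stats import norm

# probability mass within k standard deviations of the mean of a Gaussian
for k in (1, 2, 3):
    p = norm.cdf(k) - norm.cdf(-k)
    print('within {} standard deviations: {:.1f}%'.format(k, 100*p))

# the test score example: mean 71, standard deviation 9.4
print('~95% of scores fall between {:.1f} and {:.1f}'.format(
    71 - 2*9.4, 71 + 2*9.4))
###Output _____no_output_____ ###Markdown 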
Computational Properties of GaussiansA remarkable property of Gaussians is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian.The discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. The Gaussian is a nonlinear function. Typically, if you multiply a nonlinear equation with itself you end up with a different type of equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the result of multiplying two Gaussians is yet another Gaussian. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$The remainder of this section is optional. I will derive the equations for the sum and product of two Gaussians. You will not need to understand this material to understand the rest of the book, so long as you accept the results. Product of GaussiansYou can find this result by multiplying the equation for two Gaussians together and combining terms. The algebra gets messy. I will derive it using Bayes theorem. We can state the problem as: let the prior be $N(\bar\mu, \bar\sigma^2)$, and measurement be $z \propto N(z, \sigma_z^2)$. What is the posterior x given the measurement z?Write the posterior as $P(x \mid z)$. 
Now we can use Bayes' theorem to state$$P(x \mid z) = \frac{P(z \mid x)P(x)}{P(z)}$$$P(z)$ is a normalizing constant, so we can create a proportionality$$P(x \mid z) \propto P(z|x)P(x)$$Now we substitute in the equations for the Gaussians, which are$$P(z \mid x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]$$$$P(x) = \frac{1}{\sqrt{2\pi\bar\sigma^2}}\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]$$We can drop the leading terms, as they are constants, giving us$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]\\&\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z-x)^2+\sigma_z^2(x-\bar\mu)^2]\Big]\end{aligned}$$Now we multiply out the squared terms and group in terms of the posterior $x$.$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z^2 -2xz + x^2) + \sigma_z^2(x^2 - 2x\bar\mu+\bar\mu^2)]\Big ] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z) + (\bar\sigma^2z^2+\sigma_z^2\bar\mu^2)]\Big ]\end{aligned}$$The last parenthesized term does not contain $x$, so it can be treated as a constant and discarded.$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z)}{\sigma_z^2\bar\sigma^2}\Big ]$$Divide numerator and denominator by $\bar\sigma^2+\sigma_z^2$ to get$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2-2x(\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$Proportionality allows us to create or delete constants at will, so we can complete the square and factor this into$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{(x-\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})^2}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$A Gaussian is$$N(\mu,\, \sigma^2) \propto \exp\Big [-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\Big ]$$So we can see that $P(x \mid z)$ has a mean of$$\mu_\mathtt{posterior} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$and a variance of$$\sigma^2_\mathtt{posterior} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}$$I've dropped the constants, and so the result is not a normal, but proportional to one. Bayes theorem normalizes with the $P(z)$ divisor, ensuring that the result is normal. We normalize in the update step of our filters, ensuring the filter estimate is Gaussian.$$\mathcal N_1 = \| \mathcal N_2\cdot \mathcal N_3\|$$ Sum of GaussiansThe sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$There are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities. To find the density function of the sum of two Gaussian random variables we convolve their density functions. They are continuous functions, so the convolution is computed with an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent we can compute this with$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$This is the equation for a convolution.
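Before grinding through the algebra, here is a quick numerical sanity check of that claim. This is only a sketch: the grid spacing and the example parameters ($\mu_p=2,\ \sigma_p^2=1.5$ and $\mu_z=3,\ \sigma_z^2=2.5$) are arbitrary choices for illustration. We sample both densities on a grid, convolve them numerically, and compare the result against $\mathcal N(\mu_p+\mu_z,\, \sigma_p^2+\sigma_z^2)$.
###Code
import numpy as np
from filterpy.stats import gaussian

dx = 0.01
xs = np.arange(-20, 20, dx)
f_p = gaussian(xs, 2., 1.5)   # density of p ~ N(2, 1.5)
f_z = gaussian(xs, 3., 2.5)   # density of z ~ N(3, 2.5)

# discrete approximation of the convolution integral
conv = np.convolve(f_p, f_z) * dx
xs_conv = 2*xs[0] + dx*np.arange(len(conv))  # grid the convolution lives on

predicted = gaussian(xs_conv, 5., 4.)        # N(2+3, 1.5+2.5)
print('max difference:', np.max(np.abs(conv - predicted)))
###Output _____no_output_____ ###Markdown 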
Now we just do some math:$p(x) = \int\limits_{-\infty}^\infty f_z(x-x_1)f_p(x_1)\, dx_1$$= \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_z}\exp\left[-\frac{(x - x_1 - \mu_z)^2}{2\sigma^2_z}\right]\frac{1}{\sqrt{2\pi}\sigma_p}\exp\left[-\frac{(x_1 - \mu_p)^2}{2\sigma^2_p}\right] \, dx_1$$= \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(x_1 - \frac{\sigma_p^2(x-\mu_z) + \sigma_z^2\mu_p}{\sigma_p^2 + \sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dx_1$$= \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right] \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(x_1 - \frac{\sigma_p^2(x-\mu_z) + \sigma_z^2\mu_p}{\sigma_p^2 + \sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dx_1$The expression inside the integral is a normal distribution over $x_1$. The integral of a normal distribution is one, hence the integral evaluates to one. This gives us$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]$$This is in the form of a normal, where$$\begin{gathered}\mu_x = \mu_p + \mu_z \\\sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square\end{gathered}$$ Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the `norm` variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.131146572034 0.131146572034 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function.
###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 0.824 3.24 3.113 4.934 -3.799 -1.775 -1.02 7.542 -0.76 1.875 3.862 1.77 4.846 -0.818 5.726] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Fat TailsEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of independent random variables will be normally distributed, regardless of how the individual random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. The resulting distributions are called *fat tailed*. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability to *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an infinitesimal chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long. But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution. ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=30') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a [*fat tail distribution*](https://en.wikipedia.org/wiki/Fat-tailed_distribution). Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer.
It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a fat tailed distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____
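###Markdown You will typically see occasional samples much farther from 10 than anything the Gaussian sensor produced. As a rough quantitative way to see the fat tails, the sketch below counts how often each simulated sensor produces a reading more than 6 units (three times the nominal standard deviation of 2) away from the signal. The sample size of 100,000 is an arbitrary choice, and the Student's $t$ samples actually have a somewhat larger spread than 2, so treat this as an illustration rather than a precise comparison.
###Code
import numpy as np

N = 100000
gaussian_samples = np.array([sense() for i in range(N)])
student_samples = np.array([sense_t() for i in range(N)])

for name, samples in (('gaussian sensor', gaussian_samples),
                      ("student's t sensor", student_samples)):
    # count readings more than 6 units from the true signal of 10
    n_outliers = np.sum(np.abs(samples - 10.) > 6.)
    print('{}: {:.3f}% of readings beyond 6 units'.format(
        name, 100. * n_outliers / N))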
This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:$$P(X{=}4) = p(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observent. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. 
We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output _____no_output_____ ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true. 
###Code np.median(x) ###Output _____no_output_____ ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically. ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact values requires an infinite sample size. ExerciseWhat is the expected value of a die role? SolutionEach side is equally likely, so each has a probability of 1/6. Hence$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ ExerciseGiven the uniform continuous distribution$$f(x) = \frac{1}{b - a}$$compute the expected value for $a=0$ and $b=20$. Solution$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. 
###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output _____no_output_____ ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print("{:.2f} meters squared".format(np.var(X))) ###Output _____no_output_____ ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output _____no_output_____ ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. 
In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print('mean = {:.3f}'.format(mean)) print('std = {:.3f}'.format(std)) ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output _____no_output_____ ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output _____no_output_____ ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! We will not to consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. 
Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we also get $12/4=3$. $Y$ is clearly more spread out than $X$, but the absolute-value computation yields the same value. If we use the formula with squares we get a variance of 12.25 for $Y$ (a standard deviation of 3.5), which reflects its larger variation. This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print('Variance of X with outlier = {:6.2f}'.format(np.var(X))) print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1]))) ###Output _____no_output_____ ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem. I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [4]. In this book we will always use variance and standard deviation as defined by Gauss. The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way. Gaussians We are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value.
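To put numbers to that statement, here is a short sketch (an addition for illustration, using the `gaussian()` function from FilterPy that is introduced a little later in this chapter; the query heights are chosen arbitrarily) which evaluates this same pdf at a few heights:

```python
from filterpy.stats import gaussian

# pdf of N(1.8, 0.1414**2) evaluated at a few heights, in meters.
# The specific query points are for illustration only.
for height in (1.4, 1.7, 1.8, 1.9, 2.2):
    print('pdf at {:.1f} m = {:.3f}'.format(
        height, gaussian(height, 1.8, 0.1414**2)))
```

The density is largest at the mean of 1.8 m and falls off rapidly toward 1.4 m and 2.2 m, which is exactly what the plot shows.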
We can tell from the chart student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. 
However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. Shorn of the constants, you can see it is a simple exponential: $$f(x)\propto e^{-x^2}$$which has the familiar bell curve shape ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now. ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. 
So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.$$M = \iiint_R p(x,y,z)\, dV$$We do the same with *probability density*. If you want to know the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. Thinking back to the rock, what is the weight of an single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$This is called the *cumulative probability distribution*, commonly abbreviated *cdf*.I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code from filterpy.stats import norm_cdf print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output _____no_output_____ ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. 
In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output _____no_output_____ ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values. ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output _____no_output_____ ###Markdown By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument`normed` to control this. ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output _____no_output_____ ###Markdown If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*. ###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. 
Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansThe discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. A remarkable property of Gaussian distributions is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian. 
Thus we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall that *function* in this context means the values are not guaranteed to sum to one). Before we do the math, let's test this visually. ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.8, var=.1) g2 = gaussian(x, mean=1.3, var=.2) plt.plot(x, g1, x, g2) g = g1 * g2 # element-wise multiplication g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output _____no_output_____ ###Markdown Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution. Gaussians are nonlinear functions. Typically, if you multiply nonlinear functions you end up with a different type of function. For example, the shape of the product of two sines is very different from `sin(x)`. ###Code x = np.arange(0, 4*np.pi, 0.01) plt.plot(np.sin(1.2*x)) plt.plot(np.sin(1.2*x) * np.sin(2*x)); ###Output _____no_output_____ ###Markdown But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$At the end of the chapter I derive these equations. However, understanding the derivation is not very important. Putting it all Together Now we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to use Gaussians. In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so: ###Code def normalize(p): return p / sum(p) def update(likelihood, prior): return normalize(likelihood * prior) prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2])) likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16])) posterior = update(likelihood, prior) book_plots.bar_plot(posterior) ###Output _____no_output_____ ###Markdown In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory. But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart. ###Code xs = np.arange(0, 10, .01) def mean_var(p): x = np.arange(len(p)) mean = np.sum(p * x, dtype=float) var = np.sum((x - mean)**2 * p) return mean, var mean, var = mean_var(posterior) book_plots.bar_plot(posterior) plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r'); print('mean: %.2f' % mean, 'var: %.2f' % var) ###Output _____no_output_____ ###Markdown This is impressive. We can describe an entire distribution of numbers with only two numbers.
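As a preview of where this is going, the following sketch (an addition, not part of the original example; the variable names are mine) fits a Gaussian to the prior and to the likelihood with `mean_var()` and then performs the update with the product equations above. The result will not match the discrete posterior exactly, because neither the prior nor the likelihood is actually Gaussian, but the entire update collapses to a handful of arithmetic operations:

```python
# Approximate the discrete prior and likelihood defined above by Gaussians,
# then update using the product equations for two Gaussians.
prior_mean, prior_var = mean_var(prior)
z_mean, z_var = mean_var(likelihood)

post_mean = (prior_var*z_mean + z_var*prior_mean) / (prior_var + z_var)
post_var = (prior_var * z_var) / (prior_var + z_var)

print('Gaussian update: mean %.2f, var %.2f' % (post_mean, post_var))
print('discrete update: mean %.2f, var %.2f' % mean_var(posterior))
```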
Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.Next, recall that our filter implements the update function with```pythondef update(likelihood, prior): return normalize(likelihood * prior)```If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$which is three multiplications and two divisions. Bayes TheoremIn the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information. We implemented the `update()` function with this probability calculation:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$where $\| \cdot\|$ expresses normalizing the term.We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes theorem is$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. 
So, let's plug that into the equation and solve it.$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:```pythondef update(likelihood, prior): posterior = prior * likelihood p(z|x) * p(x) return normalize(posterior)```The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece ](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step i, what is our probable state given a measurement. That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy. 
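Before moving on, here is a minimal sketch of Bayes' theorem at work with Gaussians, which is what we will build into a filter in the next chapter. The update is the product of a Gaussian prior $p(x)$ and a Gaussian likelihood $p(z \mid x)$, with the evidence term handled by the closed-form product equations given earlier. The function name and the thermometer numbers below are made up for illustration.

```python
def gaussian_bayes_update(prior_mean, prior_var, z_mean, z_var):
    """Posterior mean and variance when the prior p(x) and the likelihood
    p(z|x) are both Gaussian. The normalization (evidence) is implicit in
    the closed-form product equations."""
    mean = (prior_var*z_mean + z_var*prior_mean) / (prior_var + z_var)
    var = (prior_var * z_var) / (prior_var + z_var)
    return mean, var

# prior belief N(22, 4) about the temperature, reading modeled as N(23, 1)
print(gaussian_bayes_update(22., 4., 23., 1.))
```

The posterior mean of 22.8 lies between the prior and the measurement, pulled toward the measurement because its variance is smaller, and the posterior variance of 0.8 is smaller than either input. This is the same behavior we reasoned our way to with the discrete filter.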
Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation```pythonfor i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N result[i] += prob_dist[index] * kernel[k]``` Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output _____no_output_____ ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output _____no_output_____ ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output _____no_output_____ ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. 
###Code
# probability that a random value is less than the mean 2
print(n23.cdf(2))
###Output
_____no_output_____
###Markdown
We can get various properties of the distribution:

###Code
print('variance is', n23.var())
print('standard deviation is', n23.std())
print('mean is', n23.mean())
###Output
_____no_output_____
###Markdown
Limitations of Using Gaussians to Model the World

Earlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of independent random variables will be normally distributed, regardless of how those variables are individually distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions.

However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading.

This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability density to *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.

But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test score distributions.

###Code
xs = np.arange(10, 100, 0.05)
ys = [gaussian(x, 90, 30) for x in xs]
plt.plot(xs, ys, label='var=30')
plt.xlim(0, 120)
plt.ylim(-0.02, 0.09);
###Output
_____no_output_____
###Markdown
The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places.

Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution).
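Before moving on, it is worth putting a number on how much probability mass the Gaussian assigns to impossible test scores. Here is a quick check with `scipy.stats`, a sketch that uses the mean of 90 and standard deviation of 13 from the prose example above:

```python
from scipy.stats import norm

scores = norm(90, 13)
impossible = scores.cdf(0) + scores.sf(100)   # mass below 0 plus mass above 100
print('probability of an impossible score: {:.3f}'.format(impossible))
```

With these parameters roughly a fifth of the probability mass lies above the maximum possible score of 100 - a concrete reminder that the Gaussian is a model of the data, not the data itself.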
Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with:

###Code
from numpy.random import randn

def sense():
    return 10 + randn()*2
###Output
_____no_output_____
###Markdown
Let's plot that signal and see what it looks like.

###Code
zs = [sense() for i in range(5000)]
plt.plot(zs, lw=1);
###Output
_____no_output_____
###Markdown
That looks like what I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99.7% will be within $\pm$ 6 of 10, and that looks like what is happening.

Now let's look at a distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it.

###Code
import random
import math

def rand_student_t(df, mu=0, std=1):
    """return random number distributed by Student's t
       distribution with `df` degrees of freedom with the
       specified mean and standard deviation.
    """
    x = random.gauss(0, std)
    y = 2.0*random.gammavariate(0.5*df, 2.0)
    return x / (math.sqrt(y / df)) + mu

def sense_t():
    return 10 + rand_student_t(7)*2

zs = [sense_t() for i in range(5000)]
plt.plot(zs, lw=1);
###Output
_____no_output_____
###Markdown
We can see from the plot that while the output is similar to the normal distribution there are outliers that fall far more than 3 standard deviations from the mean (outside roughly 4 to 16).

It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real-world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests.

This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers of mission-critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory says they should use 3 standard deviations to distinguish noise from valid measurements, in practice they had to use 5 to 6 standard deviations - something they determined by experiment.

The code for `rand_student_t` is included in `filterpy.stats`. You may use it with

```python
from filterpy.stats import rand_student_t
```

While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from the normal distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from those of the normal distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). The `scipy.stats` module contains the function `describe` which computes these statistics, among others.
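If you only need one of these statistics, `scipy.stats` also exposes `skew()` and `kurtosis()` individually. A small sketch, reusing the `zs` samples generated by `sense_t()` in the cell above:

```python
import scipy.stats

print('skew     = {:.3f}'.format(scipy.stats.skew(zs)))
print('kurtosis = {:.3f}'.format(scipy.stats.kurtosis(zs)))   # excess kurtosis; 0 for a Gaussian
```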
###Code
import scipy
scipy.stats.describe(zs)
###Output
_____no_output_____
###Markdown
Let's examine two normal populations, one small, one large:

###Code
print(scipy.stats.describe(np.random.randn(10)))
print()
print(scipy.stats.describe(np.random.randn(300000)))
###Output
_____no_output_____
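`scipy.stats.describe` returns a named tuple, so you can also pull out individual fields by name. A minimal sketch (the field names are those of scipy's `DescribeResult`):

```python
import numpy as np
import scipy.stats

d = scipy.stats.describe(np.random.randn(300000))
print('n        =', d.nobs)
print('min, max =', d.minmax)
print('mean     = {:.4f}'.format(d.mean))
print('variance = {:.4f}'.format(d.variance))
print('skew     = {:.4f}'.format(d.skewness))   # near 0 for a large normal sample
print('kurtosis = {:.4f}'.format(d.kurtosis))   # excess kurtosis, also near 0
```

For the small ten-sample population the skew and kurtosis will typically wander well away from zero; summary statistics computed from a handful of samples are themselves quite noisy.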
This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:$$P(X{=}4) = p(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observent. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. 
We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true. 
###Code np.median(x) ###Output _____no_output_____ ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically. ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact values requires an infinite sample size. ExerciseWhat is the expected value of a die role? SolutionEach side is equally likely, so each has a probability of 1/6. Hence$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ ExerciseGiven the uniform continuous distribution$$f(x) = \frac{1}{b - a}$$compute the expected value for $a=0$ and $b=20$. Solution$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. 
###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print("{:.2f} meters squared".format(np.var(X))) ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. 
In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print('mean = {:.3f}'.format(mean)) print('std = {:.3f}'.format(std)) ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! We will not to consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. 
Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the formula using squares we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print('Variance of X with outlier = {:6.2f}'.format(np.var(X))) print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1]))) ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem.I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [4]. In this book we will always use variance and standard deviation as defined by Gauss.The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. 
It shows the relative likelihood for the random variable to take on a value. We can tell from the chart student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. 
Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. Shorn of the constants, you can see it is a simple exponential: $$f(x)\propto e^{-x^2}$$which has the familiar bell curve shape ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now. ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? 
It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.$$M = \iiint_R p(x,y,z)\, dV$$We do the same with *probability density*. If you want to know the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. Thinking back to the rock, what is the weight of an single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$This is called the *cumulative probability distribution*, commonly abbreviated *cdf*.I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code from filterpy.stats import norm_cdf print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Cumulative probability of range 21.5 to 22.5 is 19.74% Cumulative probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. 
Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values. ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument`normed` to control this. ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*. ###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. 
A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansThe discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. A remarkable property of Gaussian distributions is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian. 
There we can say that the result of multipying two Gaussian distributions is a Gaussian function (recall function in this context means that the property that the values sum to one is not guaranteed).Before we do the math, let's test this visually. ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.8, var=.1) g2 = gaussian(x, mean=1.3, var=.2) plt.plot(x, g1, x, g2) g = g1 * g2 # element-wise multiplication g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output _____no_output_____ ###Markdown Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution.Gaussians are nonlinear functions. Typically, if you multiply a nonlinear equations you end up with a different type of function. For example, the shape of multiplying two sins is very different from `sin(x)`. ###Code x = np.arange(0, 4*np.pi, 0.01) plt.plot(np.sin(1.2*x)) plt.plot(np.sin(1.2*x) * np.sin(2*x)); ###Output _____no_output_____ ###Markdown But the result of multiplying two Gaussians distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$At the end of the chapter I derive these equations. However, understanding the deriviation is not very important. Putting it all TogetherNow we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussins. Here I will explain why we would want to use Gaussians.In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so: ###Code def normalize(p): return p / sum(p) def update(likelihood, prior): return normalize(likelihood * prior) prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2])) likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16])) posterior = update(likelihood, prior) book_plots.bar_plot(posterior) ###Output _____no_output_____ ###Markdown In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory. But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart. ###Code xs = np.arange(0, 10, .01) def mean_var(p): x = np.arange(len(p)) mean = np.sum(p * x,dtype=float) var = np.sum((x - mean)**2 * p) return mean, var mean, var = mean_var(posterior) book_plots.bar_plot(posterior) plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r'); print('mean: %.2f' % mean, 'var: %.2f' % var) ###Output mean: 5.88 var: 1.24 ###Markdown This is impressive. We can describe an entire distribution of numbers with only two numbers. 
Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.Next, recall that our filter implements the update function with```pythondef update(likelihood, prior): return normalize(likelihood * prior)```If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$which is three multiplications and two divisions. Bayes TheoremIn the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information. We implemented the `update()` function with this probability calculation:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$where $\| \cdot\|$ expresses normalizing the term.We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes theorem is$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. 
So, let's plug that into the equation and solve it.$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:```pythondef update(likelihood, prior): posterior = prior * likelihood p(z|x) * p(x) return normalize(posterior)```The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece ](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step i, what is our probable state given a measurement. That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy. 
Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation```pythonfor i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N result[i] += prob_dist[index] * kernel[k]``` Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. 
###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [-0.08 2.024 1.4 3.024 5.799 0.989 2.083 0.978 7.542 -2.22 4.984 0.626 4.387 3.676 -0.12 ] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test scores distributions. ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. 
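Before moving on, here is a quick numeric check of the earlier claim that the clipped test-score curve cannot be a probability distribution. It uses `scipy.stats.norm` from the previous section with the mean of 90 and standard deviation of 13 from the example; this is just an illustrative sketch.

```python
from scipy.stats import norm

scores = norm(90, 13)   # mean 90, standard deviation 13

# probability mass the model assigns to the only achievable scores, 0..100
p_possible = scores.cdf(100) - scores.cdf(0)
print('mass between 0 and 100: {:.3f}'.format(p_possible))
# prints roughly 0.779 -- about 22% of the model's probability
# lies above 100, a score no student can actually receive
```

Clipping the curve to [0, 100] without rescaling leaves an area of about 0.78 rather than 1, which is why the clipped curve is not a valid probability distribution.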
It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like what I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with```pythonfrom filterpy.stats import rand_student_t```While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from an exponential distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. 
The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from an exponential distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). the `scipy.stats` module contains the function `describe` which computes these statistics, among others. ###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.8106190910322406, 1.7202801709655346), mean=0.03998695860303425, variance=1.2099810612140205, skewness=0.054824114606583485, kurtosis=-0.8322079773586668) DescribeResult(nobs=300000, minmax=(-5.136201903633123, 4.498934900223554), mean=0.0016752908705450242, variance=1.0019122279656631, skewness=0.002460339180965745, kurtosis=-0.0022807108788165387) ###Markdown [Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb) Gaussian Probabilities ###Code #format the book %matplotlib inline %load_ext autoreload %autoreload 2 from __future__ import division, print_function import sys sys.path.insert(0,'./code') from book_format import load_style load_style() ###Output _____no_output_____ ###Markdown Introduction The last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it *might be* at (1.65, -78.01, 2100.45) or it *might be* at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. So we desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is very computationally efficient to calculate. As you might guess from the chapter name, Gaussian distributions provide all of these features. Mean, Variance, and Standard Deviations Random VariablesTo understand Gaussians we first need to understand a few simple mathematical computations. We start with a **random variable** x. A random variable is a variable whose value depends on some random process. If you flip a coin, you could have a variable $c$, and assign it the value 1 for heads, and 0 for tails. That a random value. It can be the height of the students in a class. That may not seem random to you, but chances are you cannot predict the height of the student Reem Nassar because her height is not deterministically determined. For a specific classroom perhaps the heights are$$x= [1.8, 2.0, 1.7, 1.9, 1.6]$$Another example of a random variable would be the result of rolling a die. A less obvious example would be the position of an aircraft - the aircraft does deterministically respond to the control inputs, but it is also buffeted by random winds and travels through randomly distributed pressure gradients.The coin toss and die roll are examples of **discrete random variables**. 
That is, the outcome of any given event comes from a discrete set of values. The roll of a six sided die can never produce a value of 7 or 3.24, for example. In contrast, the student heights are continuous; they can take on any value within biological limits. For example, heights of 1.7, 1.71, 1.711, 1.7111, 1.71111,.... are all possible. Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. The Mean of a Random VariableWe want to know the **average** height of the students. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the **mean**. We compute the mean by summing the values and dividing by the number of values. In this case we have$$\mathtt{mean} = (1.8 + 2.0 + 1.7 + 1.9 + 1.6)/5 = 1.8$$In statistics we use the symbol $\mu$ (mu) to denote the mean, so we could write $$\mu_{\mathtt{height}} = 1.8$$We can formalize this computation with the equation$$ \mu_{\mathtt{height}} = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code import numpy as np x = [1.8, 2.0, 1.7, 1.9, 1.6] print(np.mean(x)) ###Output 1.8 ###Markdown Standard Deviation of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose a second class has students with these heights:$$y = [2.2, 1.5, 2.3, 1.7, 1.3]$$ ###Code y = [2.2, 1.5, 2.3, 1.7, 1.3] print(np.mean(y)) ###Output 1.8 ###Markdown the mean of these heights is also 1.8 meters, but notice that there is a much greater amount of variation in the heights in this class. Suppose a third class has heights$$ z = [1.8, 1.8, 1.8, 1.8, 1.8]$$In this third class the average height is again 1.8 meters, but here there is no variation in the height between students. All three classes have the same mean height of 1.8 meters. So the mean tells us something about the data, but it does not tell the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of **standard deviation** and **variance**:The **standard deviation** is defined as the square root of the average of the squared differences from the mean.That's a mouthful; as an equation this is stated as$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^N(x_i - \mu)^2}$$where $\sigma$ is the notation for the standard deviation and $\mu$ is the mean.If this is the first time you have seen this it may not have a lot of meaning for you. But let's work through that with the data from the three classes to be sure we understand the formula. We subtract the mean of x from each value of x, square it, take the average of those, and then take the square root of the result. 
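That verbal recipe is short enough to write out directly. Here is a sketch in plain Python (no NumPy) that mirrors each step for the first class's heights; the variable names are mine, chosen for clarity.

```python
import math

x = [1.8, 2.0, 1.7, 1.9, 1.6]

mean = sum(x) / len(x)                      # 1.8
squared_diffs = [(v - mean)**2 for v in x]  # squared differences from the mean
variance = sum(squared_diffs) / len(x)      # average of the squared differences
std = math.sqrt(variance)                   # square root of that average
print(std)                                  # 0.14142...
```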
The mean of $[1.8, 2.0, 1.7, 1.9, 1.6]$ is 1.8, so we compute the standard deviation as$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print(np.std(x)) ###Output 0.141421356237 ###Markdown What does the standard deviation *signify*? It tells us "how much" the heights vary amongst themseves. *How much* is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things in nature, including the height of people, 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can look at this in a plot: ###Code from book_format import set_figsize, figsize from gaussian_internal import plot_height_std import matplotlib.pyplot as plt with figsize(y=2): plot_height_std([1.8, 2.0, 1.7, 1.9, 1.6]) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] with figsize(y=3.): plot_height_std(x, lw=2) print('mean = {:.3f}'.format(np.mean(x))) print('std = {:.3f}'.format(np.std(x))) ###Output _____no_output_____ ###Markdown We can see by eye that roughly 68% of the heights lie within $1\sigma$ of the mean 1.8. We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of y is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of y is {:.4f} m'.format(np.std(y))) ###Output std of y is 0.3899 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for y, and the standard deviation is larger.Finally, let's compute the standard deviation for $$ z = [1.8, 1.8, 1.8, 1.8, 1.8]$$There is no variation in the values, so we would expect the standard deviation to be zero. We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_Z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std([1.8, 1.8, 1.8, 1.8, 1.8])) ###Output 0.0 ###Markdown Variance of a Random VariableFinally, the *variance* is defined as the square of the standard deviation. 
Some texts define this in the opposite way, which gives the definitions* **The variance is the average of the squared differences from the mean.*** **The standard deviation is the square root of the variance.**Both ways of thinking about it are equivalent. We use the notation $\sigma^2$ for the variance, and the equation for the variance is$$\sigma^2 = \frac{1}{N}\sum_{i=1}^N(x_i - \mu)^2$$To make sure we understand this let's compute the variance for $x$:$$ \begin{aligned}\sigma_x^2 &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0^2 + 0.2^2 + (-0.1)^2 + 0.1^2 + (-0.2)^2}{5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\&=0.02\end{aligned}$$We previously computed $\sigma_x=0.1414$, and indeed the square of 0.1414 is 0.02. We can verify this computation with the NumPy function `numpy.var`: ###Code print('VAR(x) = {:.2f} m'.format(np.var(x))) ###Output VAR(x) = 0.02 m ###Markdown Many texts alternatively use *VAR(x)* to denote the variance of x. Why the Square of the DifferencesAs an aside, why are we taking the *square* of the difference? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of x plotted against the mean for $x=[3,-3,3,-3]$ ###Code with figsize(y=2.5): x = [3, -3, 3, -3] m = np.average(x) for i in range(len(x)): plt.plot([i ,i], [m, x[i]], color='k') plt.axhline(m) plt.xlim(-1, len(x)) plt.tick_params(axis='x', labelbottom='off') ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct - each value varies by 3 from the mean. But what if we change $x=[6, -2, -3, 1]$? In this case we get $12/4=3$. $x$ is clearly more spread out than in the last example, but we get the same variance, so this cannot be correct. If we use the correct formula we get a variance of 3.5, which reflects the larger variation in $x$.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that is is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have $x = [1,-1,1,-2,3,2,100]$. ###Code x = [1, -1, 1, -2, 3, 2, 100] print('Variance of x = {:.2f}'.format(np.var(x))) ###Output Variance of x = 1210.69 ###Markdown Is this *correct*? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $x$ is varying absent the outlier. The one outlier swamps the computation. I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. For Kalman filters we can prove that this computation produces optimal results within certain limits. More about that soon. Gaussians We are now ready to learn about Gaussians. Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is very computationally efficient to calculate. Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. 
###Code from filterpy.stats import plot_gaussian plot_gaussian(mean=1.8, variance=0.1414**2, xlabel='Student Height') ###Output _____no_output_____ ###Markdown > I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can also read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb)[1]Probably this is immediately recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. In fact, this is the curve for the student heights given earlier. I will not use the term 'bell curve' to refer to a Gaussian because several probability distributions have a similar bell curve shape. Non-mathematical sources might not be so precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights - a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for - it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter. They were not perfect Gaussian curves, but they were similar, as in the plot below. We will be using Gaussians to replace the discrete probabilities used in that chapter! ###Code import book_plots belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] with figsize(y=1.5): book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown Nomenclature A bit of nomenclature before we continue - this chart depicts the *probability density* of of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going at any given speed. If the average was 120 kph, it might look like this: ###Code with figsize(y=3.): plot_gaussian(mean=120, variance=17**2, xlabel='speed(kph)') ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* - the relative amount of cars that are going the speed at the corresponding x-axis.You may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $-\infty$. This is true, but this is a common limitation of mathematical modeling. "The map is not the territory" is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above somewhat closely models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other choice. Even in this book Gaussians will fail to model reality, forcing us to computationally expensive alternative. You will see these distributions called *Gaussian distributions* or *normal distributions*. 
*Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, so I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* - these are both typical shortcut names for the *Gaussian distribution*. Gaussian Distributions So let us explore how Gaussians work. A Gaussian is a **continuous probability distribution** that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 }\big ]$$$\exp[x]$ is notation for $e^x$; we avoid using superscripts in print so that the fonts are larger and more readable. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.```python%load -s gaussian stats.pydef gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. """ return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) / math.sqrt(2*math.pi*var))``` We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ###Code from filterpy.stats import gaussian, norm_cdf plot_gaussian(22, 4, mean_line=True, xlabel='$^{\circ}C$') ###Output _____no_output_____ ###Markdown So what does this curve *mean*? Assume for a moment that we have a thermometer, which reads 22$\,^{\circ}C$. No thermometer is perfectly accurate, and so we normally expect that thermometer will read slightly plus or minus that temperature each time we read it. However, a theorem called **Central Limit Theorem** states that if we make many measurements that the measurements will be normally distributed. So, when we look at this chart we can *sort of* think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22$^{\circ}C$. Maybe the probability of it reading 22$\,^{\circ}C$ is 20%? That is not quite accurate mathematically. Recall that we said that the distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at, say, 2.0. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 22$^{\circ}C$ is 0% because there are an infinite number of values the reading can take.So what then is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. We can think of this in Bayesian terms or frequentist terms. 
As a Bayesian, if the thermometer reads exactly 22$\,^{\circ}C$, then our belief is described by the curve - our belief that the actual (system) temperature is near 22 is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22$\,^{\circ}C$, then a histogram of the measurements would look like this curve. So how do you compute the probability, or area under the curve? Well, you integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. So, for example, we can compute ###Code print('Probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of range 21.5 to 22.5 is 19.74% Probability of range 23.5 to 24.5 is 12.10% ###Markdown So the mean ($\mu$) is what it sounds like - the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads $22^{\circ}C$, so that is what we used for the mean. > *Important*: I will repeat what I wrote at the top of this section: "A Gaussian...is completely described with two parameters"The standard notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an **extremely important** result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements for over any range. The Variance Since this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear - the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically: ###Code import numpy as np import matplotlib.pyplot as plt xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2), label='$\sigma^2$=0.2') plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':') plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown So what is this telling us? The Gaussian with $\sigma^2=0.2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. 
Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out - we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2$ has almost completely eliminated $22$ or $24$ as possible values - their probability is almost $0\%$, whereas $\sigma^2=5$ considers them nearly as likely as $23$.

If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2$ represents a very accurate thermometer, and the curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us - we can entirely represent both the reading and the error of a thermometer with only two numbers - the mean and the variance.

It is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($1\sigma$) of the mean, 95% falls within two standard deviations ($2\sigma$), and 99.7% within three ($3\sigma$). This is often called the 68-95-99.7 rule. So if you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, so $\sigma^2 = .04$ meters$^2$. The following graph depicts the relationship between the standard deviation and the normal distribution. 

###Code
from gaussian_internal import display_stddev_plot
with figsize(y=3):
    display_stddev_plot()
###Output
_____no_output_____
###Markdown
> An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. Here $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision - our measurement is very precise. Conversely, a large variance yields low precision - our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. For a Bayesian, Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.

Interactive Gaussians

For those that are reading this in IPython Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. 
###Code import math from IPython.html.widgets import interact, interactive, fixed from IPython.html.widgets import FloatSliderWidget set_figsize(y=3) def plt_g(mu,variance): xs = np.arange(2, 8, 0.1) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim((0, 1)) interact (plt_g, mu=(0, 10), variance=FloatSliderWidget(value=0.6, min=0.2, max=4)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this in an IPython Notebook, here is an animation of a Gaussian. First, the mean is being shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of the Gaussian Recall how our discrete Bayesian filter worked. We had a vector implemented as a NumPy array representing our belief at a certain moment in time. When we integrated another measurement into our belief using the `update()` function we had to multiply probabilities together, and when we performed the motion step using the `predict()` function we had to shift and add probabilities. I've promised you that the Kalman filter uses essentially the same process, and that it uses Gaussians instead of histograms, so you might reasonable expect that we will be multiplying, adding, and shifting Gaussians in the Kalman filter.A typical textbook would directly launch into a multi-page proof of the behavior of Gaussians under these operations, but I don't see the value in that right now. I think the math will be much more intuitive and clear if we just start developing a Kalman filter using Gaussians. I will provide the equations for multiplying and shifting Gaussians at the appropriate time. You will then be able to develop a physical intuition for what these operations do, rather than be forced to digest a lot of fairly abstract math.The key point, which I will only assert for now, is that all the operations are very simple, and that they preserve the properties of the Gaussian. This is somewhat remarkable, in that the Gaussian is a nonlinear function, and typically if you multiply a nonlinear equation with itself you end up with a different equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the result of multiplying two Gaussians is yet another Gaussian. This is a fundamental property, and a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are so computationally nice. Computing Probabilities with scipy.stats In this chapter I have used custom code from FilterPy for computing Gaussians, plotting, and so on. I chose to do that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. I find the performance of some of the functions rather slow (the `scipy.stats` documentation contains a warning to this effect), but this is offset by the fact that this is standard code available to everyone, and it is well tested. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://http://docs.scipy.org/doc/scipy/reference/stats.html. However, we will focus on the norm variable, which implements the normal distribution. 
Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.131146572034 0.131146572034 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown If we look at the documentation for `scipy.stats.norm` here[2] we see that there are many other functions that norm provides.For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ -1.67 1.966 2.794 2.159 2.462 -0.012 12.025 6.336 3.566 -1.321 -1.545 2.25 4.888 2.674 1.885] ###Markdown We can get the *cumulative distribution function (CDF)*, which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown There are many other functions available, and if you are interested I urge you to peruse the documentation. Sometimes the documentation is terse, but with a bit of googling you can find out what a function does and some examples of how to use it. Most of this functionality is not of immediate interest to the book, so I will leave the topic in your hands to explore. The SciPy tutorial [3] is quite approachable, and I suggest starting there. Fat Tails Earlier I spoke very briefly about the **central limit theorem**, which states that under certain conditions the arithmetic sum of **any** independent random variables will be normally distributed, regardless of how the random variables are distributed. This is extremely important for (at least) two reasons. First, nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. Second, Gaussians are mathematically *tractable*. We will see this more as we develop the Kalman filter theory, but there are very nice closed form solutions for operations on Gaussians that allow us to use them analytically.However, a key part of the proof is "under certain conditions". These conditions often do not hold for the physical world. The resulting distributions are called **fat tailed**. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor 'grade on a curve' you have been subject to this assumption. But of course test scores cannot follow a normal distribution. 
This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assumes that there is a infinitesimal chance of getting a score of -1e300, or 4e50. The *tails* of a Gaussian distribution are infinite because Gaussians are continuous functions.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this using a normal distribution. ###Code xs = np.arange(10,100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim((0,120)) plt.ylim(0, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that a more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes 'fat'. Also, the test is probably not able to perfectly distinguish incredibly minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a *fat tail distribution*. Kalman filters use sensors to measure the world. The errors in sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on a somewhat idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the student's t distribution. Let's say I want to model a sensor that has some noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. So I could simulate this sensor with ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a fat tailed distribution. There are many choices, I will use the Student's T distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. 
""" x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown [Table of Contents](./table_of_contents.ipynb) Probabilities, Gaussians, and Bayes' Theorem ###Code from __future__ import division, print_function %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard DeviationsMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. 
This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.

Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. 

In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two.

Probability Distribution

The [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:

|Value|Probability|
|-----|-----------|
|1|1/6|
|2|1/6|
|3|1/6|
|4|1/6|
|5|1/6|
|6|1/6|

We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:

$$P(X{=}4) = p(4) = \frac{1}{6}$$

This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observant. Some texts use $Pr$ instead of $P$ to ameliorate this.

Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as

$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$

Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.

The probabilities for all values of a *discrete random variable* are known as the *discrete probability distribution*, and the probabilities for all values of a *continuous random variable* are known as the *continuous probability distribution*.

To be a probability distribution the probability of each value $x_i$ must satisfy $p(x_i) \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads are 70%, then the odds of getting tails must be 30%. 
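Before we write these two requirements formally, here is a small sketch that checks both of them for the fair die distribution above; the dictionary representation is just for illustration.

```python
# probability distribution of a fair die: value -> probability
die = {1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6}

# every probability must be non-negative
assert all(p >= 0 for p in die.values())

# and the probabilities must sum to one
print(sum(die.values()))   # sums to 1 (up to floating point rounding)
```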
We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true. 
###Code np.median(x) ###Output _____no_output_____ ###Markdown 

Expected Value of a Random Variable

The [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average? It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space. Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has a 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute $$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$ Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well. We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us $$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$ A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean: $$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$ If $x$ is continuous we substitute the sum for an integral, like so $$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$ where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter. We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically. ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact value requires an infinite sample size.

Exercise

What is the expected value of a die roll?

Solution

Each side is equally likely, so each has a probability of 1/6. Hence $$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$

Exercise

Given the uniform continuous distribution $$f(x) = \frac{1}{b - a}$$ compute the expected value for $a=0$ and $b=20$.

Solution

$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$

Variance of a Random Variable

The computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same.
###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print("{:.2f} meters squared".format(np.var(X))) ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. 
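As a quick check, we can compute that one-standard-deviation interval for the heights in `X` directly (a small sketch using the class data defined above):

```python
import numpy as np

mean, std = np.mean(X), np.std(X)   # X = [1.8, 2.0, 1.7, 1.9, 1.6] from above
print('{:.2f} to {:.2f} meters'.format(mean - std, mean + std))
```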
In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.

> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on.

###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print('mean = {:.3f}'.format(mean)) print('std = {:.3f}'.format(std)) ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$ The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$ We will verify that with NumPy with ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger. Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. $$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well-nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school! We will not consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues.

Why the Square of the Differences

Why are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way.
Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out: $$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$ This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we also get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same value. If we use the formula with squares we get a variance of 12.25 for $Y$ versus 9 for $X$ (a standard deviation of 3.5 versus 3), which reflects its larger variation. This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print('Variance of X with outlier = {:6.2f}'.format(np.var(X))) print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1]))) ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem. I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. In this book we will always use variance and standard deviation as defined by Gauss. The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way.

Gaussians

We are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.

> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.

Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short.
It shows the relative likelihood for the random variable to take on a value. We can tell from the chart that a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.

> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].

This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition. This curve is not unique to heights — a vast number of natural phenomena exhibit this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire. To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter!

Nomenclature

A bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between $(-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative number of cars that are going the speed shown on the x-axis. I will explain this further in the next section. The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters.
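We can put a rough number on how little this particular mismatch matters. The sketch below (it uses `scipy.stats`, which is covered near the end of this chapter) asks how much probability the $\mathcal{N}(120, 17^2)$ model assigns to impossible negative speeds:

```python
from scipy.stats import norm

# probability the model assigns to speeds below zero
print(norm(120, 17).cdf(0))   # a vanishingly small number, far below 1e-10
```

For this model the error introduced by the infinite tails is negligible, which is part of why the Gaussian remains so useful despite being wrong in principle.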
Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*.

Gaussian Distributions

Let's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as: $$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$ $\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. Shorn of the constants, you can see it is a simple exponential: $$f(x)\propto e^{-x^2}$$ which has the familiar bell curve shape ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now. ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Let's plot a Gaussian with a mean of 22 $(\mu=22)$ and a variance of 4 $(\sigma^2=4)$. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called the [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements, the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a randomly picked point is exactly at 2? Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take. What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge?
It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume. $$M = \iiint_R p(x,y,z)\, dV$$ We do the same with *probability density*. If you want to know the probability of the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C are infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. Thinking back to the rock, what is the weight of a single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero. In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1. We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$ This is called the *cumulative probability distribution*, commonly abbreviated *cdf*. I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code from filterpy.stats import norm_cdf print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Cumulative probability of range 21.5 to 22.5 is 19.74% Cumulative probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible values. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as $$\text{temp} \sim \mathcal{N}(22,4)$$ This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers!
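For example, with nothing more than those two numbers we can ask for the probability of any range we care about. A short sketch using the `norm_cdf` function imported above (the ranges are chosen arbitrarily):

```python
from filterpy.stats import norm_cdf

for rng in [(21, 23), (20, 24), (18, 26)]:
    p = norm_cdf(rng, 22, 4)  # mean 22, variance 4
    print('probability of {} to {}: {:.2f}%'.format(rng[0], rng[1], p * 100))
```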
With the values $\mu=22$ and $\sigma^2=4$ I can compute the probability of a measurement falling within any range. Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example.

The Variance and Belief

Since this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1. Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values. ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument `normed` to control this. ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*. ###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$. If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and the curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance. An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*.
$1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansThe discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. 
A remarkable property of Gaussian distributions is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian. Thus we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall that *function* in this context means the values are not guaranteed to sum to one). Before we do the math, let's test this visually. ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.8, var=.1) g2 = gaussian(x, mean=1.3, var=.2) plt.plot(x, g1, x, g2) g = g1 * g2 # element-wise multiplication g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output _____no_output_____ ###Markdown Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution. Gaussians are nonlinear functions. Typically, if you multiply nonlinear functions you end up with a different type of function. For example, the product of two sines has a very different shape from `sin(x)`. ###Code x = np.arange(0, 4*np.pi, 0.01) plt.plot(np.sin(1.2*x)) plt.plot(np.sin(1.2*x) * np.sin(2*x)); ###Output _____no_output_____ ###Markdown But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by: $$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$ The sum of two Gaussians is given by $$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$ At the end of the chapter I derive these equations. However, understanding the derivation is not very important.

Putting it all Together

Now we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to use Gaussians. In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so: ###Code def normalize(p): return p / sum(p) def update(likelihood, prior): return normalize(likelihood * prior) prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2])) likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16])) posterior = update(likelihood, prior) book_plots.bar_plot(posterior) ###Output _____no_output_____ ###Markdown In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory. But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart.
###Code xs = np.arange(0, 10, .01) def mean_var(p): x = np.arange(len(p)) mean = np.sum(p * x,dtype=float) var = np.sum((x - mean)**2 * p) return mean, var mean, var = mean_var(posterior) book_plots.bar_plot(posterior) plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r'); print('mean: %.2f' % mean, 'var: %.2f' % var) ###Output mean: 5.88 var: 1.24 ###Markdown This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it. Next, recall that our filter implements the update function with

```python
def update(likelihood, prior):
    return normalize(likelihood * prior)
```

If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with $$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$ which is three multiplications and two divisions.

Bayes Theorem

In the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information. We implemented the `update()` function with this probability calculation: $$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as: $$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$ where $\| \cdot\|$ expresses normalizing the term. We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter. To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement. Bayes theorem is $$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$ $P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday). I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions.
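We can check this numerically with the `prior` and `likelihood` arrays defined above: dividing the element-wise product by its sum gives exactly what our `update()` function computed. A small sketch:

```python
evidence = np.sum(likelihood * prior)            # p(B), the normalizing term
posterior_bayes = likelihood * prior / evidence  # p(B|A) p(A) / p(B), element-wise
print(np.allclose(posterior_bayes, update(likelihood, prior)))  # True
```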
We use a lower case $p$ for probability distributions $$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$ In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. So, let's plug that into the equation and solve it. $$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$ That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:

```python
def update(likelihood, prior):
    posterior = prior * likelihood   # p(z|x) * p(x)
    return normalize(posterior)
```

The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by summing over all values of $x_i$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem. The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as $$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$ This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up) for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral-laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation. It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step $i$, what is our probable state given a measurement? That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings.
Stated like that the problems seem unsolvable. But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute $$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$ That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy.

Total Probability Theorem

We now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$, $P(X_j^{t-1})$, multiplied by the probability of moving from cell $x_j$ to $x_i$. That is $$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$ That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation

```python
for i in range(N):
    for k in range (kN):
        index = (i + (width-k) - offset) % N
        result[i] += prob_dist[index] * kernel[k]
```

Computing Probabilities with scipy.stats

In this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities. The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the `norm` variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3.
You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 5.912 -2.009 -2.718 1.266 -1.085 3.941 3.499 5.626 -0.137 1.396 4.562 2.127 8.176 1.794 1.829] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test scores distributions. 
###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. 
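One way to see how much heavier the tails are is to count, for each noise model, how many readings fall more than three nominal standard deviations (6 units) from the signal. A quick sketch using the `sense()` and `sense_t()` functions defined above; the counts will vary from run to run, but the Student's $t$ noise reliably produces several times more of these outliers:

```python
gaussian_zs = [sense() for i in range(5000)]
student_zs = [sense_t() for i in range(5000)]

# readings more than 6 units (3 times the nominal std of 2) away from the signal of 10
print('gaussian outliers:', sum(abs(z - 10) > 6 for z in gaussian_zs))
print('student-t outliers:', sum(abs(z - 10) > 6 for z in student_zs))
```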
Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory says they should use 3 standard deviations to distinguish noise from valid measurements, in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments. The code for `rand_student_t` is included in `filterpy.stats`. You may use it with

```python
from filterpy.stats import rand_student_t
```

While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from a normal distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from a normal distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). The `scipy.stats` module contains the function `describe` which computes these statistics, among others. ###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.8942931152842175, 0.49750728125905835), mean=-0.10563915941786776, variance=0.4841165908890319, skewness=-1.8464582995970673, kurtosis=2.5452896197893757) DescribeResult(nobs=300000, minmax=(-4.772620736872989, 4.446895068081072), mean=-0.0006837046884366415, variance=0.9995353806594786, skewness=0.002331471754136653, kurtosis=0.007185223820032061)
I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|Some sources call this the *probability function*. Using ordinary function notation, we would write:$$P(X{=}4) = f(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$. Some texts use $Pr$ or $Prob$ instead of $P$.Another example is a fair coin. It has the sample space {H, T}. 
The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random variable* are known as the *discrete probability distribution* and the probabilities for all values of a *continuous random variable* are known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $P(X{=}x_i) \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formalize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters are $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code import numpy as np x = [1.8, 2.0, 1.7, 1.9, 1.6] print(np.mean(x)) ###Output 1.8 ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency then the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. 
As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. ###Code print(np.median(x)) ###Output 1.8 ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has a 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we replace the sum with an integral, like so$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$where $f(x)$ is the probability density function of $x$. We won't be using this equation yet, but we will be using it in the next chapter. Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X)) print(np.mean(Y)) print(np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). 
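As a quick aside, the expected value computation above is easy to check with a few lines of NumPy; the following sketch uses the same values and probabilities as the example:

```python
import numpy as np

x = np.array([1., 3., 5.])
p = np.array([0.8, 0.15, 0.05])

# expected value is the probability-weighted sum: 1(0.8) + 3(0.15) + 5(0.05) = 1.5
print(np.dot(p, x))

# with equal probabilities the expected value reduces to the ordinary mean
print(np.dot(np.full(3, 1/3), x), np.mean(x))
```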
The equation for computing the variance is$$\mathit{VAR}(X) = E[(X - \mu)^2]$$Ignoring the squaring for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. We have the formula for the expected value $E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(np.var(X), "meters squared") ###Output 0.01999999999999999 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.book_plots import set_figsize, figsize from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. 
###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] plot_height_std(data, lw=2) print('mean = {:.3f}'.format(np.mean(data))) print('std = {:.3f}'.format(np.std(data))) ###Output _____no_output_____ ###Markdown We can see by eye that roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8.We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school! We will not be faced with these kinds of problems in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom='off') ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same value. 
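We can verify this numerically with a quick NumPy sketch:

```python
import numpy as np

X = np.array([3, -3, 3, -3])
Y = np.array([6, -2, -3, 1])

# mean absolute deviation is 3.0 for both sets, so it cannot tell them apart
print(np.mean(np.abs(X - X.mean())), np.mean(np.abs(Y - Y.mean())))

# mean squared deviation (the variance) differs: 9.0 for X, 12.25 for Y
print(np.var(X), np.var(Y))
```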
If we use the formula with squares we get a variance of 12.25 for $Y$ versus 9 for $X$, which reflects $Y$'s larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, 3, 2, 100] print('Variance of X = {:.2f}'.format(np.var(X))) ###Output Variance of X = 1210.69 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. In this book we will always use variance and standard deviation as defined by Gauss. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plt.figure() plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. In the chart above, a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. 
We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. For example, you may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. 
If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.```python%load -s gaussian stats.pydef gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. """ return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) / math.sqrt(2*math.pi*var))```We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called the [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements, the measurements will be normally distributed. When we look at this chart we can "sort of" think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2? Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.$$M = \iiint_R p(x,y,z)\, dV$$We do the same with *probability density*. If you want to know the probability of the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. 
As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code from filterpy.stats import norm_cdf print('Probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of range 21.5 to 22.5 is 19.74% Probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the probability-weighted average of all possible values. Because of the symmetric shape of the curve it also lies at the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.> Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density function it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must integrate to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically: ###Code from filterpy.stats import gaussian xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.05), label='$\sigma^2$=0.05') plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':') plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.05$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. 
Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.05$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=5$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.05$ represents a very accurate thermometer, and the curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. 
###Code import math from ipywidgets import interact, interactive, fixed set_figsize(y=3) def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.1) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim((0, 1)) interact (plt_g, mu=(0., 10), variance = (.2, 1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansA remarkable property of Gaussians is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian.The discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. The Gaussian is a nonlinear function. Typically, if you multiply a nonlinear equation with itself you end up with a different type of equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the result of multiplying two Gaussians is yet another Gaussian. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$The remainder of this section is optional. I will derive the equations for the sum and product of two Gaussians. You will not need to understand this material to understand the rest of the book, so long as you accept the results. Product of GaussiansYou can find this result by multiplying the equation for two Gaussians together and combining terms. The algebra gets messy. I will derive it using Bayes theorem. We can state the problem as: let the prior be $N(\bar\mu, \bar\sigma^2)$, and measurement be $z \propto N(z, \sigma_z^2)$. What is the posterior x given the measurement z?Write the posterior as $P(x \mid z)$. 
Now we can use Bayes Theorem to state$$P(x \mid z) = \frac{P(z \mid x)P(x)}{P(z)}$$$P(z)$ is a normalizing constant, so we can create a proportionality$$P(x \mid z) \propto P(z|x)P(x)$$Now we substitute in the equations for the Gaussians, which are$$P(z \mid x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]$$$$P(x) = \frac{1}{\sqrt{2\pi\bar\sigma^2}}\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]$$We can drop the leading terms, as they are constants, giving us$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]\\&\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z-x)^2+\sigma_z^2(x-\bar\mu)^2]\Big]\end{aligned}$$Now we multiply out the squared terms and group in terms of the posterior $x$.$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z^2 -2xz + x^2) + \sigma_z^2(x^2 - 2x\bar\mu+\bar\mu^2)]\Big ] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z) + (\bar\sigma^2z^2+\sigma_z^2\bar\mu^2)]\Big ]\end{aligned}$$The last parenthesized term does not contain the posterior $x$, so it can be treated as a constant and discarded.$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z)}{\sigma_z^2\bar\sigma^2}\Big ]$$Divide numerator and denominator by $\bar\sigma^2+\sigma_z^2$ to get$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2-2x(\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$Proportionality allows us to create or delete constants at will, so we can factor this into$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{(x-\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})^2}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$A Gaussian is$$N(\mu,\, \sigma^2) \propto \exp\Big [-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\Big ]$$So we can see that $P(x \mid z)$ has a mean of$$\mu_\mathtt{posterior} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$and a variance of$$\sigma^2_\mathtt{posterior} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}$$I've dropped the constants, and so the result is not a normal, but proportional to one. Bayes theorem normalizes with the $P(z)$ divisor, ensuring that the result is normal. We normalize in the update step of our filters, ensuring the filter estimate is Gaussian.$$\mathcal N_1 = \| \mathcal N_2\cdot \mathcal N_3\|$$ Sum of GaussiansThe sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$There are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities. To find the density function of the sum of two Gaussian random variables we convolve the density functions of each. They are nonlinear, continuous functions, so we need to compute the convolution with an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent we can compute this with$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$This is the equation for a convolution. 
Now we just do some math:$$p(x) = \int\limits_{-\infty}^\infty f_z(x-x_1)f_p(x_1)\, dx_1$$$$= \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_z}\exp\left[-\frac{(x - x_1 - \mu_z)^2}{2\sigma^2_z}\right]\frac{1}{\sqrt{2\pi}\sigma_p}\exp\left[-\frac{(x_1 - \mu_p)^2}{2\sigma^2_p}\right] \, dx_1$$$$= \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(x_1 - \frac{\sigma_p^2(x-\mu_z) + \sigma_z^2\mu_p}{\sigma_p^2 + \sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dx_1$$$$= \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right] \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(x_1 - \frac{\sigma_p^2(x-\mu_z) + \sigma_z^2\mu_p}{\sigma_p^2 + \sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dx_1$$The expression inside the integral is a normal distribution over $x_1$. The integral of a normal distribution is one, hence the integral evaluates to one. This gives us$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]$$This is in the form of a normal, where$$\begin{gathered}\mu_x = \mu_p + \mu_z \\\sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square\end{gathered}$$ Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, the scientific Python stack comes with "batteries included" as the saying goes, and the module `scipy.stats` provides a wide range of statistics functions. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the `norm` variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397997 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. 
###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 1.77 3.378 -3.014 5.337 4.285 4.797 1.477 1.351 5.739 4.075 -1.575 0.681 0.202 -0.747 1.392] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of independent random variables will be normally distributed, regardless of how those random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability density to *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test score distributions. ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=30') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. 
It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99.7% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements, in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with```pythonfrom filterpy.stats import rand_student_t```While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from the normal distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. 
The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from those of the normal distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). The `scipy.stats` module contains the function `describe` which computes these statistics, among others. ###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-0.8175084513413164, 1.1865956580866834), mean=0.2940337215902996, variance=0.6230354047551284, skewness=-0.41279400723365806, kurtosis=-1.3123468690692666) DescribeResult(nobs=300000, minmax=(-4.7480566375065525, 4.550774022502299), mean=0.0005748027839955969, variance=1.0005892638782288, skewness=6.280909739979528e-05, kurtosis=0.006885785867941863)
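###Markdown To make the comparison concrete, here is a short sketch that pulls out the skew and kurtosis of the heavy-tailed `zs` samples from the Student's $t$ simulation above and compares them with normally distributed samples:

```python
import numpy as np
from scipy.stats import skew, kurtosis

normal_zs = 10 + 2*np.random.randn(5000)

# kurtosis() reports excess kurtosis: near 0 for normal data, and typically
# noticeably positive for the heavy-tailed Student's t samples in zs
print('normal   : skew={:6.3f}  kurtosis={:6.3f}'.format(skew(normal_zs), kurtosis(normal_zs)))
print('student t: skew={:6.3f}  kurtosis={:6.3f}'.format(skew(zs), kurtosis(zs)))
```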
That is, the outcome of any given event comes from a discrete set of values. The roll of a six sided die can never produce a value of 7 or 3.24, for example. In contrast, the student heights are continuous; they can take on any value within biological limits. For example, heights of 1.7, 1.71, 1.711, 1.7111, 1.71111,.... are all possible. Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. The Mean of a Random VariableWe want to know the **average** height of the students. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the **mean**. We compute the mean by summing the values and dividing by the number of values. In this case we have$$\mathtt{mean} = (1.8 + 2.0 + 1.7 + 1.9 + 1.6)/5 = 1.8$$In statistics we use the symbol $\mu$ (mu) to denote the mean, so we could write $$\mu_{\mathtt{height}} = 1.8$$We can formalize this computation with the equation$$ \mu_{\mathtt{height}} = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code import numpy as np x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown Standard Deviation of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose a second class has students with these heights:$$y = [2.2, 1.5, 2.3, 1.7, 1.3]$$ ###Code y = [2.2, 1.5, 2.3, 1.7, 1.3] np.mean(y) ###Output _____no_output_____ ###Markdown the mean of these heights is also 1.8 meters, but notice that there is a much greater amount of variation in the heights in this class. Suppose a third class has heights$$ z = [1.8, 1.8, 1.8, 1.8, 1.8]$$In this third class the average height is again 1.8 meters, but here there is no variation in the height between students. All three classes have the same mean height of 1.8 meters. So the mean tells us something about the data, but it does not tell the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of **standard deviation** and **variance**:The **standard deviation** is defined as the square root of the average of the squared differences from the mean.That's a mouthful; as an equation this is stated as$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^N(x_i - \mu)^2}$$where $\sigma$ is the notation for the standard deviation and $\mu$ is the mean.If this is the first time you have seen this it may not have a lot of meaning for you. But let's work through that with the data from the three classes to be sure we understand the formula. We subtract the mean of x from each value of x, square it, take the average of those, and then take the square root of the result. 
The mean of $[1.8, 2.0, 1.7, 1.9, 1.6]$ is 1.8, so we compute the standard deviation as$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code np.std(x) ###Output _____no_output_____ ###Markdown What does the standard deviation *signify*? It tells us "how much" the heights vary amongst themseves. *How much* is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things in nature, including the height of people, 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can look at this in a plot: ###Code import matplotlib.pyplot as plt from gaussian_internal import plot_height_std with figsize(y=2): plot_height_std([1.8, 2.0, 1.7, 1.9, 1.6]) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] with figsize(y=3.): plot_height_std(x, lw=2) print('mean = {:.3f}'.format(np.mean(x))) print('std = {:.3f}'.format(np.std(x))) ###Output _____no_output_____ ###Markdown We can see by eye that roughly 68% of the heights lie within $1\sigma$ of the mean 1.8. We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of y is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of y is {:.4f} m'.format(np.std(y))) ###Output std of y is 0.3899 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for y, and the standard deviation is larger.Finally, let's compute the standard deviation for $$ z = [1.8, 1.8, 1.8, 1.8, 1.8]$$There is no variation in the values, so we would expect the standard deviation to be zero. We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_Z&= 0.0 \ m\end{aligned}$$ ###Code np.std([1.8, 1.8, 1.8, 1.8, 1.8]) ###Output _____no_output_____ ###Markdown Variance of a Random VariableFinally, the *variance* is defined as the square of the standard deviation. 
Some texts define this in the opposite way, which gives the definitions* **The variance is the average of the squared differences from the mean.*** **The standard deviation is the square root of the variance.**Both ways of thinking about it are equivalent. We use the notation $\sigma^2$ for the variance, and the equation for the variance is$$\sigma^2 = \frac{1}{N}\sum_{i=1}^N(x_i - \mu)^2$$To make sure we understand this let's compute the variance for $x$:$$ \begin{aligned}\sigma_x^2 &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0^2 + 0.2^2 + (-0.1)^2 + 0.1^2 + (-0.2)^2}{5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\&=0.02\end{aligned}$$We previously computed $\sigma_x=0.1414$, and indeed the square of 0.1414 is 0.02. We can verify this computation with the NumPy function `numpy.var`: ###Code print('VAR(x) = {:.2f} meters squared'.format(np.var(x))) ###Output VAR(x) = 0.02 meters squared ###Markdown Many texts alternatively use *VAR(x)* to denote the variance of x. Why the Square of the DifferencesAs an aside, why are we taking the *square* of the difference? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of x plotted against the mean for $x=[3,-3,3,-3]$ ###Code with figsize(y=2.5): x = [3, -3, 3, -3] m = np.average(x) for i in range(len(x)): plt.plot([i ,i], [m, x[i]], color='k') plt.axhline(m) plt.xlim(-1, len(x)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct - each value varies by 3 from the mean. But what if we change $x=[6, -2, -3, 1]$? In this case we get $12/4=3$. $x$ is clearly more spread out than in the last example, but we get the same result, so this cannot be correct. If we use the formula with the squared differences we get a standard deviation of 3.5, which reflects the larger variation in $x$.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have $x = [1,-1,1,-2,3,2,100]$. ###Code x = [1, -1, 1, -2, 3, 2, 100] print('Variance of x = {:.2f}'.format(np.var(x))) ###Output Variance of x = 1210.69 ###Markdown Is this *correct*? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $x$ is varying absent the outlier. The one outlier swamps the computation. I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. For Kalman filters we can prove that this computation produces optimal results within certain limits. More about that soon. Gaussians We are now ready to learn about Gaussians. Let's remind ourselves of the motivation for this chapter. We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is very computationally efficient to calculate. Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about.
###Code from filterpy.stats import plot_gaussian plot_gaussian(mean=1.8, variance=0.1414**2, xlabel='Student Height') ###Output _____no_output_____ ###Markdown > I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can also read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb)[1]Probably this is immediately recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. In fact, this is the curve for the student heights given earlier. I will not use the term 'bell curve' to refer to a Gaussian because several probability distributions have a similar bell curve shape. Non-mathematical sources might not be so precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights - a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for - it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter. They were not perfect Gaussian curves, but they were similar, as in the plot below. We will be using Gaussians to replace the discrete probabilities used in that chapter! ###Code import book_plots with figsize(y=1.5): book_plots.bar_plot([ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0]) ###Output _____no_output_____ ###Markdown Nomenclature A bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going at any given speed. If the average was 120 kph, it might look like this: ###Code with figsize(y=3.): plot_gaussian(mean=120, variance=17**2, xlabel='speed(kph)') ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* - the relative amount of cars that are going the speed at the corresponding x-axis.You may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. This is true, but this is a common limitation of mathematical modeling. "The map is not the territory" is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above somewhat closely models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other choice. Even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will see these distributions called *Gaussian distributions* or *normal distributions*.
*Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, so I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* - these are both typical shortcut names for the *Gaussian distribution*. Gaussian Distributions So let us explore how Gaussians work. A Gaussian is a **continuous probability distribution** that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 }\big ]$$$\exp[x]$ is notation for $e^x$; we avoid using superscripts in print so that the fonts are larger and more readable. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it. %load -s gaussian stats.py def gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. """ return np.exp((-0.5*(x-mean)**2)/var) / \ np.sqrt(_two_pi*var) We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ###Code from filterpy.stats import gaussian, norm_cdf plot_gaussian(22, 4, mean_line=True, xlabel='$^{\circ}C$') ###Output _____no_output_____ ###Markdown So what does this curve *mean*? Assume for a moment that we have a thermometer, which reads 22$\,^{\circ}C$. No thermometer is perfectly accurate, and so we normally expect that thermometer will read slightly plus or minus that temperature each time we read it. However, a theorem called **Central Limit Theorem** states that if we make many measurements that the measurements will be normally distributed. So, when we look at this chart we can *sort of* think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22$^{\circ}C$. Maybe the probability of it reading 22$\,^{\circ}C$ is 20%? That is not quite accurate mathematically. Recall that we said that the distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at, say, 2.0. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 22$^{\circ}C$ is 0% because there are an infinite number of values the reading can take.So what then is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. We can think of this in Bayesian terms or frequentist terms. 
As a Bayesian, if the thermometer reads exactly 22$\,^{\circ}C$, then our belief is described by the curve - our belief that the actual (system) temperature is near 22 is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22$\,^{\circ}C$, then a histogram of the measurements would look like this curve. So how do you compute the probability, or area under the curve? Well, you integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. So, for example, we can compute ###Code print('Probability of value in range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of value in range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of value in range 21.5 to 22.5 is 19.74% Probability of value in range 23.5 to 24.5 is 12.10% ###Markdown So the mean ($\mu$) is what it sounds like - the average of all possible values, weighted by their probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads $22^{\circ}C$, so that is what we used for the mean. > *Important*: I will repeat what I wrote at the top of this section: "A Gaussian...is completely described with two parameters"The standard notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$temp \sim \mathcal{N}(22,4)$$This is an **extremely important** result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range. The Variance Since this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear - the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically: ###Code import numpy as np import matplotlib.pyplot as plt xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2), label='var=0.2') plt.plot(xs, gaussian(xs, 23, 1), label='var=1') plt.plot(xs, gaussian(xs, 23, 5), label='var=5') plt.legend(); ###Output _____no_output_____ ###Markdown So what is this telling us? The blue gaussian is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the red gaussian also believes that $x=23$, but we are much less sure about that.
Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out - we think it is quite likely that $x=20$ or $x=26$, for example. The blue gaussian has almost completely eliminated $22$ or $24$ as possible values - their probability is almost $0\%$, whereas the red curve considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The blue curve represents a very accurate thermometer, and the red one represents a fairly inaccurate one. Green of course represents one in between the two others. Note the very powerful property the Gaussian distribution affords us - we can entirely represent both the reading and the error of a thermometer with only two numbers - the mean and the variance.It is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($1\sigma$) of the mean, 95% falls within two standard deviations ($2\sigma$), and 99.7% within three ($3\sigma$). This is often called the 68-95-99.7 rule. So if you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from gaussian_internal import display_stddev_plot with figsize(y=3): display_stddev_plot() ###Output _____no_output_____ ###Markdown > An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. Here $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision - our measurement is very precise. Conversely, a large variance yields low precision - our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. For a Bayesian Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact. Interactive Gaussians For those that are reading this in IPython Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, interactive, fixed import ipywidgets as widgets set_figsize(y=3) def plt_g(mu,variance): xs = np.arange(2, 8, 0.1) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim((0, 1)) interact(plt_g, mu=(0, 10), variance=widgets.FloatSlider(value=0.6, min=0.2, max=4)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this in an IPython Notebook, here is an animation of a Gaussian. First, the mean is being shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified.
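If the animation does not display in your environment, the following static sketch approximates its frames by overlaying several Gaussians. This is only an illustration, not the original animation code: on the left the mean shifts to the right with the variance held fixed, and on the right the mean stays at $\mu=5$ while the variance changes.
###Code
import numpy as np
import matplotlib.pyplot as plt
from filterpy.stats import gaussian

xs = np.arange(0, 10, 0.01)
fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(9, 3))

# frames of the first half of the animation: the mean shifts to the right
for mu in [2, 3, 4, 5, 6]:
    ax0.plot(xs, gaussian(xs, mu, 0.5), label='$\mu={}$'.format(mu))
ax0.legend(loc='upper left')

# frames of the second half: the mean is fixed at 5 while the variance changes
for var in [0.2, 0.5, 1., 2.]:
    ax1.plot(xs, gaussian(xs, 5, var), label='$\sigma^2={}$'.format(var))
ax1.legend(loc='upper left');
###Output
_____no_output_____
###Markdown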
Computational Properties of the Gaussian Recall how our discrete Bayesian filter worked. We had a vector implemented as a NumPy array representing our belief at a certain moment in time. When we integrated another measurement into our belief using the `update()` function we had to multiply probabilities together, and when we performed the motion step using the `predict()` function we had to shift and add probabilities. I've promised you that the Kalman filter uses essentially the same process, and that it uses Gaussians instead of histograms, so you might reasonably expect that we will be multiplying, adding, and shifting Gaussians in the Kalman filter.A typical textbook would directly launch into a multi-page proof of the behavior of Gaussians under these operations, but I don't see the value in that right now. I think the math will be much more intuitive and clear if we just start developing a Kalman filter using Gaussians. I will provide the equations for multiplying and shifting Gaussians at the appropriate time. You will then be able to develop a physical intuition for what these operations do, rather than be forced to digest a lot of fairly abstract math.The key point, which I will only assert for now, is that all the operations are very simple, and that they preserve the properties of the Gaussian. This is somewhat remarkable, in that the Gaussian is a nonlinear function, and typically if you multiply a nonlinear equation with itself you end up with a different equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the result of multiplying two Gaussians is yet another Gaussian. This is a fundamental property, and a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are so computationally nice. Computing Probabilities with scipy.stats In this chapter I have used custom code from FilterPy for computing Gaussians, plotting, and so on. I chose to do that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. I find the performance of some of the functions rather slow (the `scipy.stats` documentation contains a warning to this effect), but this is offset by the fact that this is standard code available to everyone, and it is well tested. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. However, we will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.131146572034 0.131146572034 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3.
You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('probability density of 1.5 is %.4f' % n23.pdf(1.5)) print('probability density of 2.5 is also %.4f' % n23.pdf(2.5)) print('whereas probability density of 2 is %.4f' % n23.pdf(2)) ###Output probability density of 1.5 is 0.1311 probability density of 2.5 is also 0.1311 whereas probability density of 2 is 0.1330 ###Markdown If we look at the documentation for `scipy.stats.norm` here[2] we see that there are many other functions that norm provides.For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code print(n23.rvs(size=15)) ###Output [ 4.28918375 0.28515148 6.6813951 -0.90647378 -3.11286229 -0.23009077 2.70486564 -1.96423111 3.81979672 3.78714347 2.90120567 1.34502476 1.54753878 3.542725 5.22996079] ###Markdown We can get the *cumulative distribution function (CDF)*, which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown There are many other functions available, and if you are interested I urge you to peruse the documentation. Sometimes the documentation is terse, but with a bit of googling you can find out what a function does and some examples of how to use it. Most of this functionality is not of immediate interest to the book, so I will leave the topic in your hands to explore. The SciPy tutorial [3] is quite approachable, and I suggest starting there. Fat Tails Earlier I spoke very briefly about the **central limit theorem**, which states that under certain conditions the arithmetic sum of **any** independent random variables will be normally distributed, regardless of how the random variables are distributed. This is extremely important for (at least) two reasons. First, nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. Second, Gaussians are mathematically *tractable*. We will see this more as we develop the Kalman filter theory, but there are very nice closed form solutions for operations on Gaussians that allow us to use them analytically.However, a key part of the proof is "under certain conditions". These conditions often do not hold for the physical world. The resulting distributions are called **fat tailed**. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor 'grade on a curve' you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. 
It assumes that there is an infinitesimal chance of getting a score of -1e300, or 4e50. The *tails* of a Gaussian distribution are infinite because Gaussians are continuous functions.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this using a normal distribution. ###Code xs = np.arange(10,100, 0.05) plt.plot(xs, [gaussian(x, 90, 30) for x in xs], label='var=30') plt.xlim((0,120)) plt.ylim(0, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes 'fat'. Also, the test is probably not able to perfectly distinguish incredibly minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a *fat tail distribution*. Kalman filters use sensors to measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on a somewhat idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the student's t distribution. Let's say I want to model a sensor that has some noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. So I could simulate this sensor with ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a fat tailed distribution. There are many choices; I will use the Student's t distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown [Table of Contents](./table_of_contents.ipynb) Probabilities, Gaussians, and Bayes' Theorem ###Code from __future__ import division, print_function %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter.
For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard DeviationsMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. 
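As a quick aside, we can simulate a discrete random variable such as a die with NumPy and watch the relative frequency of each outcome approach the probability of 1/6. This is only a small sketch for illustration, and the seed is an arbitrary choice made so the run is repeatable:
###Code
import numpy as np

np.random.seed(3)                        # arbitrary seed, just for repeatability
rolls = np.random.randint(1, 7, 10000)   # 10,000 rolls of a fair six sided die
for value in range(1, 7):
    frequency = np.count_nonzero(rolls == value) / len(rolls)
    print('relative frequency of {} is {:.3f} (1/6 is {:.3f})'.format(
        value, frequency, 1/6))
###Output
_____no_output_____
###Markdown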
In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:

|Value|Probability|
|-----|-----------|
|1|1/6|
|2|1/6|
|3|1/6|
|4|1/6|
|5|1/6|
|6|1/6|

We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:$$P(X{=}4) = p(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observant. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random variable* are known as the *discrete probability distribution* and the probabilities for all values of a *continuous random variable* are known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $p(x_i) \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formalize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution.
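We can codify the two requirements above in a small helper function. This is only a sketch for illustration; it is not part of FilterPy:
###Code
def is_valid_distribution(p):
    """Return True if p is nonnegative everywhere and sums to one."""
    p = np.asarray(p, dtype=float)
    return bool(np.all(p >= 0) and abs(np.sum(p) - 1.) < 1e-9)

print(is_valid_distribution(belief))           # the normalized belief from above
print(is_valid_distribution([0.8, 0.1, 0.2]))  # sums to 1.1, not a distribution
###Output
_____no_output_____
###Markdown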
In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters are $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency then the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true. ###Code np.median(x) ###Output _____no_output_____ ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has a 15% chance, and 5 has only a 5% chance.
In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute an integral for the sum, like so$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically. ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact value requires an infinite sample size. ExerciseWhat is the expected value of a die roll? SolutionEach side is equally likely, so each has a probability of 1/6. Hence$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ ExerciseGiven the uniform continuous distribution$$f(x) = \frac{1}{b - a}$$compute the expected value for $a=0$ and $b=20$. Solution$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance).
The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print("{:.2f} meters squared".format(np.var(X))) ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. 
###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print('mean = {:.3f}'.format(mean)) print('std = {:.3f}'.format(std)) ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school! We will not consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same result.
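We can check this numerically. The sketch below, added only for illustration, compares the mean absolute deviation with the square-based standard deviation for the two sets just discussed:
###Code
import numpy as np

for name, values in [('[3, -3, 3, -3]', np.array([3, -3, 3, -3])),
                     ('[6, -2, -3, 1]', np.array([6, -2, -3, 1]))]:
    mad = np.mean(np.abs(values - values.mean()))  # mean absolute deviation
    std = values.std()                             # square root of the variance
    print('{}: mean absolute deviation = {:.2f}, standard deviation = {:.2f}'
          .format(name, mad, std))
###Output
_____no_output_____
###Markdown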
If we use the formula with squares we get a standard deviation of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print('Variance of X with outlier = {:6.2f}'.format(np.var(X))) print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1]))) ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem.I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. In this book we will always use variance and standard deviation as defined by Gauss.The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. We can tell from the chart that a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner.
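To get a feel for how measured data relates to this curve, here is a small simulation added for illustration: draw a few thousand heights from $\mathcal{N}(1.8, 0.1414^2)$ and compare their histogram with the pdf plotted above (the seed is an arbitrary choice so the plot is repeatable).
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

np.random.seed(13)   # arbitrary seed for repeatability
heights = np.random.normal(loc=1.8, scale=0.1414, size=5000)

plt.hist(heights, bins=50, density=True, alpha=0.5, label='simulated heights')
xs = np.linspace(1.3, 2.3, 200)
plt.plot(xs, norm(1.8, 0.1414).pdf(xs), label='Gaussian pdf')
plt.xlabel('Student Height')
plt.legend();
###Output
_____no_output_____
###Markdown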
I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). 
It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. Shorn of the constants, you can see it is a simple exponential: $$f(x)\propto e^{-x^2}$$which has the familiar bell curve shape ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now. ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements, the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at, say, 2? Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 22°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.$$M = \iiint_R p(x,y,z)\, dV$$We do the same with *probability density*. If you want to know the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero.
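We can confirm both statements numerically. The following sketch, added for illustration, assumes SciPy is available and integrates the pdf of $\mathcal{N}(22, 4)$ (standard deviation 2) over the range 20 to 22 and over the zero-width range 22 to 22:
###Code
from scipy.stats import norm
from scipy.integrate import quad

pdf = norm(22, 2).pdf   # mean 22, standard deviation 2 (variance 4)

area, _ = quad(pdf, 20, 22)   # probability of a reading between 20 and 22
print('area from 20 to 22 is {:.4f}'.format(area))

area, _ = quad(pdf, 22, 22)   # a zero-width interval contains no area
print('area from 22 to 22 is {:.4f}'.format(area))
###Output
_____no_output_____
###Markdown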
Thinking back to the rock, what is the weight of a single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$This is called the *cumulative probability distribution*, commonly abbreviated *cdf*.I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code from filterpy.stats import norm_cdf print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Cumulative probability of range 21.5 to 22.5 is 19.74% Cumulative probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible values, weighted by how probable each one is. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow;
this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values. ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument`normed` to control this. ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*. ###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. 
For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansThe discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. A remarkable property of Gaussian distributions is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian. Thus we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall that *function* in this context means the values are not guaranteed to sum to one).Before we do the math, let's test this visually. ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.8, var=.1) g2 = gaussian(x, mean=1.3, var=.2) plt.plot(x, g1, x, g2) g = g1 * g2 # element-wise multiplication g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output _____no_output_____ ###Markdown Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution.Gaussians are nonlinear functions. Typically, if you multiply nonlinear equations you end up with a different type of function.
For example, the shape of the product of two sines is very different from `sin(x)`. ###Code x = np.arange(0, 4*np.pi, 0.01) plt.plot(np.sin(1.2*x)) plt.plot(np.sin(1.2*x) * np.sin(2*x)); ###Output _____no_output_____ ###Markdown But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$At the end of the chapter I derive these equations. However, understanding the derivation is not very important. Putting it all TogetherNow we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to use Gaussians.In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so: ###Code def normalize(p): return p / sum(p) def update(likelihood, prior): return normalize(likelihood * prior) prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2])) likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16])) posterior = update(likelihood, prior) book_plots.bar_plot(posterior) ###Output _____no_output_____ ###Markdown In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory. But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart. ###Code xs = np.arange(0, 10, .01) def mean_var(p): x = np.arange(len(p)) mean = np.sum(p * x,dtype=float) var = np.sum((x - mean)**2 * p) return mean, var mean, var = mean_var(posterior) book_plots.bar_plot(posterior) plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r'); print('mean: %.2f' % mean, 'var: %.2f' % var) ###Output mean: 5.88 var: 1.24 ###Markdown This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.Next, recall that our filter implements the update function with```pythondef update(likelihood, prior): return normalize(likelihood * prior)```If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$which is three multiplications and two divisions. Bayes TheoremIn the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions.
In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information. We implemented the `update()` function with this probability calculation:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$where $\| \cdot\|$ expresses normalizing the term.We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes theorem is$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. So, let's plug that into the equation and solve it.$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:
```python
def update(likelihood, prior):
    posterior = prior * likelihood  # p(z|x) * p(x)
    return normalize(posterior)
```
The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization!
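Here is a minimal sketch of that idea with the evidence term written out explicitly; the function name `update_bayes` and the three-cell example are my own labels for illustration, not part of FilterPy.

```python
import numpy as np

def update_bayes(likelihood, prior):
    """Bayes update with the evidence (normalization) term made explicit."""
    unnormalized = likelihood * prior   # p(z|x) * p(x) for every cell x_i
    evidence = np.sum(unnormalized)     # p(z): total probability of the measurement
    return unnormalized / evidence      # posterior p(x|z), which sums to one

prior = np.array([0.1, 0.4, 0.5])        # hypothetical prior over three positions
likelihood = np.array([0.6, 0.3, 0.1])   # hypothetical p(z | x_i) for each position
print(update_bayes(likelihood, prior))   # same result as normalize(likelihood * prior)
```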
So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece ](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step i, what is our probable state given a measurement. That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy. Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". 
I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation
```python
for i in range(N):
    for k in range (kN):
        index = (i + (width-k) - offset) % N
        result[i] += prob_dist[index] * kernel[k]
```
Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 5.912 -2.009 -2.718 1.266 -1.085 3.941 3.499 5.626 -0.137 1.396 4.562 2.127 8.176 1.794 1.829] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions.
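As a quick illustration of this effect (a sketch of my own, not from FilterPy), the following code sums uniformly distributed random numbers; each term is nothing like a Gaussian, yet the histogram of the sums is bell shaped.

```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(3)
# 10,000 experiments, each summing 20 samples drawn from a uniform distribution
sums = np.random.uniform(0, 1, size=(10000, 20)).sum(axis=1)
plt.hist(sums, bins=50);
```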
However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test scores distributions. ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. 
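We can check that claim against the simulated readings; this is a quick sketch that reuses the `zs` list generated in the cell above.

```python
import numpy as np

zs = np.asarray(zs)
within_1_std = np.mean(np.abs(zs - 10) <= 2) * 100   # percent within one standard deviation
within_3_std = np.mean(np.abs(zs - 10) <= 6) * 100   # percent within three standard deviations
print(f'{within_1_std:.1f}% within +/- 2, {within_3_std:.1f}% within +/- 6')
```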
Now let's look at a distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements, in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with
```python
from filterpy.stats import rand_student_t
```
While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from an exponential distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from an exponential distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). The `scipy.stats` module contains the function `describe` which computes these statistics, among others.
###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.8942931152842175, 0.49750728125905835), mean=-0.10563915941786776, variance=0.4841165908890319, skewness=-1.8464582995970673, kurtosis=2.5452896197893757) DescribeResult(nobs=300000, minmax=(-4.772620736872989, 4.446895068081072), mean=-0.0006837046884366415, variance=0.9995353806594786, skewness=0.002331471754136653, kurtosis=0.007185223820032061) ###Markdown [Table of Contents](./table_of_contents.ipynb) Probabilities, Gaussians, and Bayes' Theorem ###Code %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard DeviationsMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. 
The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:$$P(X{=}4) = p(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observent. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. 
Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. 
In this case the median equals the mean, but that is not generally true. ###Code np.median(x) ###Output _____no_output_____ ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an **infinite number of samples of it and then averaged those samples together**. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has a 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically. ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact value requires an infinite sample size. ExerciseWhat is the expected value of a die roll? SolutionEach side is equally likely, so each has a probability of 1/6. Hence$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ ExerciseGiven the uniform continuous distribution$$f(x) = \frac{1}{b - a}$$compute the expected value for $a=0$ and $b=20$. Solution$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same.
###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(f"{np.var(X):.2f} meters squared") ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print(f"std {np.std(X):.4f}") print(f"var {np.std(X)**2:.4f}") ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. 
We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn # This is in continuation of last problem of childrens' heights # 1.8 = mean, .1414 = std data = 1.8 + randn(100)*.1414 # randn returns univariate “normal” (Gaussian) distribution of mean 0 and variance 1 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print(f'mean = {mean:.3f}') print(f'std = {std:.3f}') ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print(f'std of Y is {np.std(Y):.2f} m') ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! We will not to consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. 
Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the formula using squares we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print(f'Variance of X with outlier = {np.var(X):6.2f}') print(f'Variance of X without outlier = {np.var(X[:-1]):6.2f}') ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem.I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [4]. In this book we will always use variance and standard deviation as defined by Gauss.The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. 
It shows the relative likelihood for the random variable to take on a value. We can tell from the chart that a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast number of natural phenomena exhibit this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between $(-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative number of cars that are going the speed shown on the x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters.
Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives.

You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*.

Gaussian Distributions

Let's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:

$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$

$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`.

Shorn of the constants, you can see it is a simple exponential:

$$f(x)\propto e^{-x^2}$$

which has the familiar bell curve shape

###Code
x = np.arange(-3, 3, .01)
plt.plot(x, np.exp(-x**2));
###Output
_____no_output_____
###Markdown
Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now.

###Code
from filterpy.stats import gaussian
#gaussian??
###Output
_____no_output_____
###Markdown
Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$.

###Code
plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$');
###Output
_____no_output_____
###Markdown
What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called the [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements, the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C.

Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at exactly 2? Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.

What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures.

Here is another way to understand it. What is the *density* of a rock, or a sponge?
It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.

$$M = \iiint_R p(x,y,z)\, dV$$

We do the same with *probability density*. If you want to know the probability of the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability.

What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C are infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero.

Thinking back to the rock, what is the weight of a single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.

In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.

We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve.

How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian

$$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$

This is called the *cumulative distribution function*, commonly abbreviated *cdf*.

> popo notes: CDF = integration(PDF)

I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute

###Code
from filterpy.stats import norm_cdf
print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format(
      norm_cdf((21.5, 22.5), 22, 4)*100))  # 22, 4 = mean, variance (std is 2)
print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format(
      norm_cdf((23.5, 24.5), 22, 4)*100))
###Output
Cumulative probability of range 21.5 to 22.5 is 19.74%
Cumulative probability of range 23.5 to 24.5 is 12.10%
###Markdown
The mean ($\mu$) is what it sounds like — the average of all possible values, weighted by their probability. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean.

The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as

$$\text{temp} \sim \mathcal{N}(22,4)$$

**This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers!
With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.**

> popo notes: Given the mean and variance, generating data that follows this distribution is a form of *sampling*. Note that `gaussian()` evaluates the pdf — it does not generate samples. One simple way to draw a sample with a given mean and variance:

```python
# this returns only one point. Call it N times for N datapoints
def sample_normal_distribution(mu, vari):
    try:
        sigma = np.sqrt(vari)
        x = 0.5*np.sum(np.random.uniform(-sigma, sigma, 12))
        return mu + x
    except Exception as e:
        print(e)
```

This is one of the simplest methods. Better methods include the Box–Muller transform and rejection sampling.

---

Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example.

The Variance and Belief

Since this is a probability density function it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$)

###Code
print(norm_cdf((-1e8, 1e8), mu=0, var=4))
###Output
1.0
###Markdown
This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.

Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values.

###Code
from filterpy.stats import gaussian

print(gaussian(x=3.0, mean=2.0, var=1))
print(gaussian(x=[3.0, 2.0], mean=2.0, var=1))
###Output
0.24197072451914337
[0.378 0.622]
###Markdown
By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument `normed` to control this.

###Code
print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False))
###Output
[0.242 0.399]
###Markdown
If the Gaussian is not normalized it is called a *Gaussian function* instead of a *Gaussian distribution*.

###Code
xs = np.arange(15, 30, 0.05)
plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$')
plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':')
plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--')
plt.legend();
###Output
_____no_output_____
###Markdown
What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about it ($\sigma$ is only 0.2). In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example.
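We can put rough numbers on this by evaluating the unnormalized curves at a few points. A small sketch using the `gaussian()` function from above — the exact values matter less than their relative sizes:

```python
# pdf values at x = 22, 23, 24 for a narrow and a wide Gaussian centered at 23
for var in (0.2**2, 1.**2):
    vals = gaussian([22., 23., 24.], 23., var, normed=False)
    print(f'var = {var:4.2f}: pdf at 22, 23, 24 =', vals)

# with var=0.04 the value at 22 is roughly five orders of magnitude below the
# peak at 23; with var=1 it is still about 60% of the peak
```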
$\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ still considers them quite likely.

If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and the curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.

An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; the precision is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.

I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using.

The 68-95-99.7 Rule

It is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$).

Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.

The following graph depicts the relationship between the standard deviation and the normal distribution.

###Code
from kf_book.gaussian_internal import display_stddev_plot
display_stddev_plot()
###Output
_____no_output_____
###Markdown
Interactive Gaussians

For those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner.
###Code
import math
from ipywidgets import interact, FloatSlider

def plt_g(mu, variance):
    plt.figure()
    xs = np.arange(2, 8, 0.01)
    ys = gaussian(xs, mu, variance)
    plt.plot(xs, ys)
    plt.ylim(0, 0.04)

interact(plt_g, mu=FloatSlider(value=5, min=3, max=7),
         variance=FloatSlider(value=.03, min=.01, max=1.));
###Output
_____no_output_____
###Markdown
Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified.

Computational Properties of Gaussians

The discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians.

A remarkable property of Gaussians is that the sum of two [independent normal variables](https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables) is also normally distributed! The product is not Gaussian, but it is proportional to a Gaussian. Therefore we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall that *function* in this context means that the values are not guaranteed to sum to one).

Before we do the math, let's test this visually.

###Code
x = np.arange(-1, 3, 0.01)
g1 = gaussian(x, mean=0.8, var=.1)
g2 = gaussian(x, mean=1.3, var=.2)
plt.plot(x, g1, x, g2)

g = g1 * g2  # element-wise multiplication
g = g / sum(g)  # normalize
plt.plot(x, g, ls='-.');
###Output
_____no_output_____
###Markdown
Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution.

**Gaussians are nonlinear functions**. Typically, if you multiply nonlinear functions you end up with a different type of function. For example, the product of two sines has a very different shape from `sin(x)`.

###Code
x = np.arange(0, 4*np.pi, 0.01)
plt.plot(np.sin(1.2*x))
plt.plot(np.sin(1.2*x) * np.sin(2*x));
###Output
_____no_output_____
###Markdown
But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice.

The product of two independent Gaussians is given by:

$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\
\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}
\end{aligned}$$

The sum of two Gaussians is given by

$$\begin{gathered}\mu = \mu_1 + \mu_2 \\
\sigma^2 = \sigma^2_1 + \sigma^2_2
\end{gathered}$$

At the end of the chapter I derive these equations. However, understanding the derivation is not very important.

Putting it all Together

Now we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to use Gaussians.

In the previous chapter we represented probability distributions with an array.
We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so:

###Code
def normalize(p):
    return p / sum(p)

def update(likelihood, prior):
    return normalize(likelihood * prior)

prior =      normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2]))
likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16]))
posterior = update(likelihood, prior)
book_plots.bar_plot(posterior)
###Output
_____no_output_____
###Markdown
In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory.

But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart.

###Code
xs = np.arange(0, 10, .01)

def mean_var(p):
    x = np.arange(len(p))
    mean = np.sum(p * x, dtype=float)
    var = np.sum((x - mean)**2 * p)
    return mean, var

mean, var = mean_var(posterior)
book_plots.bar_plot(posterior)
plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r');
print('mean: %.2f' % mean, 'var: %.2f' % var)
###Output
mean: 5.88 var: 1.24
###Markdown
(Why were Gaussians introduced in robotics?)

**This is impressive. We can describe an entire distribution of numbers with only two numbers.** 🤯 Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.

Next, recall that our filter implements the update function with

```python
def update(likelihood, prior):
    return normalize(likelihood * prior)
```

If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with

$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\
\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}
\end{aligned}$$

which is three multiplications and two divisions.

Bayes Theorem

In the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes' theorem tells us how to compute the probability of an event given prior information.

We implemented the `update()` function with this probability calculation:

$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$

It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:

$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$

where $\| \cdot\|$ expresses normalizing the term.

We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems.
We will use this equation in every subsequent chapter.

To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.

Bayes' theorem is

$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$

$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).

I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. **We use a lower case $p$ for probability distributions**

$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$

🚨**In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*.🚨** By substituting the mathematical terms with the corresponding words you can see that Bayes' theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$.

So, let's plug that into the equation and solve it.

$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$

That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:

```python
def update(likelihood, prior):
    posterior = prior * likelihood  # p(z|x) * p(x)
    return normalize(posterior)
```

The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute it by summing the unnormalized posterior, which is `sum(likelihood * prior)` — the very term that `normalize()` divides by. That is how we compute the normalization! **So, the `update()` function is doing nothing more than computing Bayes' theorem.**

The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as

$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$

This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up) for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral-laden equations with no analytic solution.
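In the discrete case, however, that forbidding denominator is just a sum — the same normalization we have been computing all along. Here is a small sketch, reusing the `likelihood`, `prior`, and `update()` defined above, that makes the equivalence explicit:

```python
# the evidence p(z): total probability of the measurement over all positions
evidence = np.sum(likelihood * prior)

# Bayes' theorem written out explicitly ...
posterior_bayes = likelihood * prior / evidence

# ... is identical to the normalize(likelihood * prior) computed earlier
print(np.allclose(posterior_bayes, update(likelihood, prior)))  # True
```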
Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term that we can compute with a sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.

It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step $i$, what is our probable state given a measurement? That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.

But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute

$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$

That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy.

Total Probability Theorem

We now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$, $P(X_j^{t-1})$, multiplied by the probability of moving from cell $x_j$ to $x_i$. That is

$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$

That equation is called the *total probability theorem*. Quoting from Wikipedia [6], "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation

```python
for i in range(N):          # for each position in the result
    for k in range(kN):     # sum over every possible movement
        index = (i + (width-k) - offset) % N
        result[i] += prob_dist[index] * kernel[k]
```

Computing Probabilities with scipy.stats

In this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`.
So let's walk through how to use scipy.stats to compute statistics and probabilities.

The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the `norm` object, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy.

###Code
from scipy.stats import norm
import filterpy.stats
print(norm(2, 3).pdf(1.5))  # note the syntax
print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3))
###Output
0.13114657203397997
0.13114657203397995
###Markdown
The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so:

###Code
n23 = norm(2, 3)
print('pdf of 1.5 is       %.4f' % n23.pdf(1.5))
print('pdf of 2.5 is also  %.4f' % n23.pdf(2.5))
print('pdf of 2 is         %.4f' % n23.pdf(2))
###Output
pdf of 1.5 is       0.1311
pdf of 2.5 is also  0.1311
pdf of 2 is         0.1330
###Markdown
The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function.

###Code
np.set_printoptions(precision=3, linewidth=50)
print(n23.rvs(size=15))
###Output
[ 1.888  2.446  2.857  4.706 -1.871  1.427
  1.708  5.727  2.989  5.73   1.164  2.174
  0.159 -0.734  2.327]
###Markdown
> popo notes

**We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$.**

###Code
# probability that a random value is less than the mean 2
print(n23.cdf(2))
###Output
0.5
###Markdown
We can get various properties of the distribution:

###Code
print('variance is', n23.var())
print('standard deviation is', n23.std())
print('mean is', n23.mean())
###Output
variance is 9.0
standard deviation is 3.0
mean is 2.0
###Markdown
Limitations of Using Gaussians to Model the World

Earlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of independent random variables will be normally distributed, regardless of how those variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions.

However, a key part of the proof is "under certain conditions". These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading.

This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor "grade on a curve" you have been subject to this assumption. But of course test scores cannot follow a normal distribution.
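A quick check makes the problem concrete. This sketch uses the frozen `scipy.stats.norm` object shown above, with a hypothetical class whose mean is 90 and standard deviation is 13 (the same numbers used in the discussion that follows):

```python
from scipy.stats import norm

scores = norm(90, 13)           # hypothetical class: mean 90, std 13
p_over_100 = scores.sf(100)     # probability of an impossible score > 100
p_below_0  = scores.cdf(0)      # probability of an impossible score < 0
print(f'P(score > 100) = {p_over_100:.3f}')   # roughly 22%!
print(f'P(score < 0)   = {p_below_0:.1e}')
```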
This is because the distribution assigns a nonzero probability to *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. **The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.**

But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test score distributions.

###Code
xs = np.arange(10, 100, 0.05)
ys = [gaussian(x, 90, 30) for x in xs]
plt.plot(xs, ys, label='var=30')
plt.xlim(0, 120)
plt.ylim(-0.02, 0.09);
###Output
_____no_output_____
###Markdown
The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes "fat". Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places.

Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution).

Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with:

###Code
# popo notes
# nice way of simulating noise around a mean
from numpy.random import randn

def sense():
    return 10 + randn()*2
###Output
_____no_output_____
###Markdown
Let's plot that signal and see what it looks like.

###Code
zs = [sense() for i in range(5000)]
plt.plot(zs, lw=1);
###Output
_____no_output_____
###Markdown
That looks like what I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99.7% will be within $\pm$ 6 of 10, and that looks like what is happening.

Now let's look at a distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it.

###Code
import random
import math

def rand_student_t(df, mu=0, std=1):
    """return random number distributed by Student's t
    distribution with `df` degrees of freedom with the
    specified mean and standard deviation.
""" x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with```pythonfrom filterpy.stats import rand_student_t```While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from an exponential distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. **However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from an exponential distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). the `scipy.stats` module contains the function `describe` which computes these statistics, among others.** ###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-0.9265948123884868, 2.5947818540871284), mean=0.35753996595685394, variance=1.10272482430536, skewness=0.7485148816381301, kurtosis=0.10424550122403042) DescribeResult(nobs=300000, minmax=(-4.574435547312213, 5.220445997684225), mean=-0.0020406476690705334, variance=0.9940014503804102, skewness=0.0021844163636272214, kurtosis=0.023940053402526473) ###Markdown [Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb) Gaussian Probabilities ###Code #format the book %matplotlib inline from __future__ import division, print_function from book_format import load_style load_style() ###Output _____no_output_____ ###Markdown Introduction The last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. 
That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. As you might guess from the chapter name, Gaussian distributions provide all of these features. Mean, Variance, and Standard Deviations Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get 1 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a *random variable*. *Random* does not mean the process is nondeterministic, only that we lack information. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining things, the range of values is called the *sample space*. For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. In later chapters we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. Probability DistributionThe *probability distribution* gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|Some sources call this the *probability function*. 
Using ordinary function notation, we would write:$$P(X{=}4) = f(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$. Some texts use $Pr$ or $Prob$ instead of $P$.Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$P(X{=}H) = 0.5\\P(X{=}T)=0.5$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int P(X{=}u) \,du= 1$$for continuous distributions. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a *measure of central tendency*. For example we will want to know the *average* height of the students. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code import numpy as np x = [1.85, 2.0, 1.7, 1.9, 1.6] print(np.mean(x)) ###Output 1.81 ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than te set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. 
Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.85, 2.0, 1.7, 1.9, 1.6} is 1.85, because 1.85 is the third element of this set after being sorted. ###Code print(np.median(x)) ###Output 1.85 ###Markdown Expected Value of a Random VariableThe *expected value* of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \sum_{i=1}^n \frac{1}{n}x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter. Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X)) print(np.mean(Y)) print(np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class. The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of *standard deviation* and *variance*. 
The equation for computing the *variance* is$$\mathit{VAR}(X) = E[(X - \mu)^2]$$Ignoring the squared terms for a moment, you can see that the variance is the *expected value* for how much the sample space ($X$) varies from the mean (squared, of course). We have the formula for the expected value $E[X] = \sum\limits_{i=1}^n p_ix_i$, and we will assume that any height is equally probable, so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(np.var(X), "meters squared") ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance. What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from book_format import set_figsize, figsize from gaussian_internal import plot_height_std import matplotlib.pyplot as plt with figsize(y=2): plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. 
###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] with figsize(y=3.): plot_height_std(data, lw=2) print('mean = {:.3f}'.format(np.mean(data))) print('std = {:.3f}'.format(np.std(data))) ###Output _____no_output_____ ###Markdown We can see by eye that roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8. We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.4f} m'.format(np.std(Y))) ###Output std of Y is 0.3899 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code with figsize(y=2.5): X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom='off') ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the correct formula we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that is is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have $X = [1,-1,1,-2,3,2,100]$. ###Code X = [1, -1, 1, -2, 3, 2, 100] print('Variance of X = {:.2f}'.format(np.var(X))) ###Output Variance of X = 1210.69 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the computation. I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. Gaussians We are now ready to learn about Gaussians. 
Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curves is a *probability density function* or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. In the chart above, a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.1 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. In fact, this is the curve for the student heights given earlier. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter. They were not perfect Gaussian curves, but they were similar, as in the plot below. We will be using Gaussians to replace the discrete probabilities used in that chapter! ###Code import book_plots belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] with figsize(y=1.5): book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown Nomenclature A bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code with figsize(y=3.): plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)') ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis.You may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $-\infty$. This is true, but this is a common limitation of mathematical modeling. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. 
The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternative. You will see these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian Distributions Let's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.```python%load -s gaussian stats.pydef gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. """ return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) / math.sqrt(2*math.pi*var))``` We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ###Code from filterpy.stats import gaussian, norm_cdf plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called *Central Limit Theorem* states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can "sort of" think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. 
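We can check this numerically. The `gaussian_pdf` helper below is a throwaway function of my own that simply transcribes the equation given above, and the integral is approximated with NumPy's trapezoidal rule:

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    # direct transcription of the Gaussian equation above
    return np.exp(-(x - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

xs = np.arange(20, 22, 0.001)
area = np.trapz(gaussian_pdf(xs, 22, 4), xs)
print('area between 20 and 22 is approximately {:.3f}'.format(area))
```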
So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22 is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code print('Probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of range 21.5 to 22.5 is 19.74% Probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements for over any range. The Variance and Belief Since this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically: ###Code import numpy as np import matplotlib.pyplot as plt xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.05), label='$\sigma^2$=0.05', c='b') plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':', c='b') plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--', c='b') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.05$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. 
In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. Our believe that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.05$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=5$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.05$ represents a very accurate thermometer, and curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the 68-95-99.7 rule. If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from gaussian_internal import display_stddev_plot with figsize(y=3): display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive Gaussians For those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. 
###Code import math from IPython.html.widgets import interact, interactive, fixed set_figsize(y=3) def plt_g(mu,variance): xs = np.arange(2, 8, 0.1) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim((0, 1)) interact (plt_g, mu=(0., 10), variance = (.2, 1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of the Gaussian Recall how our discrete Bayesian filter worked. We had a vector implemented as a NumPy array representing our belief at a certain moment in time. When we integrated another measurement into our belief using the `update()` function we had to multiply probabilities together, and when we performed the motion step using the `predict()` function we had to shift and add probabilities. I've promised you that the Kalman filter uses essentially the same process, and that it uses Gaussians instead of histograms, so you might reasonable expect that we will be multiplying, adding, and shifting Gaussians in the Kalman filter.A typical textbook would directly launch into a multi-page proof of the behavior of Gaussians under these operations, but I don't see the value in that right now. I think the math will be much more intuitive and clear if we just start developing a Kalman filter using Gaussians. I will provide the equations for multiplying and shifting Gaussians at the appropriate time. You will then be able to develop a physical intuition for what these operations do, rather than be forced to digest a lot of fairly abstract math.The key point, which I will only assert for now, is that all the operations are very simple, and that they preserve the properties of the Gaussian. This is somewhat remarkable, in that the Gaussian is a nonlinear function, and typically if you multiply a nonlinear equation with itself you end up with a different equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the result of multiplying two Gaussians is yet another Gaussian. This is a fundamental property, and a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. Computing Probabilities with scipy.stats In this chapter I used code from FilterPy to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. 
###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.131146572034 0.131146572034 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 3.527 2.952 3.709 1.501 -0.532 -0.173 2.264 4.293 5.036 6.365 2.79 4.76 -0.052 0.789 2.733] ###Markdown We can get the *cumulative distribution function (CDF)*, which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Fat Tails Earlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. The resulting distributions are called *fat tailed*. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an infinitesimal chance of getting a score of $-10^{300}$ or $10^{32986}$. The *tails* of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution. 
###Code xs = np.arange(10,100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim((0,120)) plt.ylim(0, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish incredibly minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a *fat tail distribution*. Kalman filters use sensors to measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the *Student's $t$-distribution*. Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a fat tailed distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown [Table of Contents](./table_of_contents.ipynb) Probabilities, Gaussians, and Bayes' Theorem ###Code from __future__ import division, print_function %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. 
We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard DeviationsMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. 
Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:$$P(X{=}4) = p(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observent. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). 
For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true. ###Code np.median(x) ###Output _____no_output_____ ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. 
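In code the expected value is just a weighted sum. A minimal sketch:

```python
import numpy as np

x = np.array([1, 3, 5])
p = np.array([0.80, 0.15, 0.05])
print(np.sum(p * x))
```

This prints 1.5, matching the hand computation.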
The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically. ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact values requires an infinite sample size. ExerciseWhat is the expected value of a die roll? SolutionEach side is equally likely, so each has a probability of 1/6. Hence$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ ExerciseGiven the uniform continuous distribution$$f(x) = \frac{1}{b - a}$$compute the expected value for $a=0$ and $b=20$. Solution$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. 
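To emphasize that the variance really is an expected value, here is a small sketch that reuses the weighted-sum machinery from $\mathbb E[X]$, applied to the heights of class $X$ with each height equally probable:

```python
import numpy as np

X = np.array([1.8, 2.0, 1.7, 1.9, 1.6])
p = np.full(len(X), 1.0 / len(X))    # each height is equally likely
mu = np.sum(p * X)                   # E[X], the mean
var = np.sum(p * (X - mu)**2)        # E[(X - mu)^2], the variance
print('mean = {:.2f}, variance = {:.2f}'.format(mu, var))
```

The result, 0.02 m², is exactly what the summation formula below gives us.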
The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print("{:.2f} meters squared".format(np.var(X))) ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print('mean = {:.3f}'.format(mean)) print('std = {:.3f}'.format(std)) ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. 
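The same check for two standard deviations should land near 95%, a preview of the 68-95-99.7 rule discussed later in this chapter. This sketch draws a larger sample so the percentage is less noisy:

```python
import numpy as np
from numpy.random import randn

samples = 1.8 + randn(100000) * .1414
m, s = samples.mean(), samples.std()
pct = np.mean((samples > m - 2*s) & (samples < m + 2*s)) * 100
print('{:.1f}% of the samples fall within two standard deviations'.format(pct))
```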
For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! We will not to consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the formula using squares we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. 
For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print('Variance of X with outlier = {:6.2f}'.format(np.var(X))) print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1]))) ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem.I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [4]. In this book we will always use variance and standard deviation as defined by Gauss.The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. We can tell from the chart student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. 
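To illustrate why the term is ambiguous, here is a quick comparison, a sketch using `scipy.stats` (which this chapter relies on elsewhere): a Gaussian next to a Student's t distribution. Both are bell shaped, but only one is a Gaussian.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm, t

xs = np.linspace(-4, 4, 200)
plt.plot(xs, norm.pdf(xs), label='Gaussian')
plt.plot(xs, t.pdf(xs, 2), ls='--', label="Student's t, 2 degrees of freedom")
plt.legend();
```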
Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. 
Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. Shorn of the constants, you can see it is a simple exponential: $$f(x)\propto e^{-x^2}$$which has the familiar bell curve shape ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now. ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.$$M = \iiint_R p(x,y,z)\, dV$$We do the same with *probability density*. If you want to know the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. Thinking back to the rock, what is the weight of an single point on the rock? An infinitesimal point must have no weight. 
It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$This is called the *cumulative probability distribution*, commonly abbreviated *cdf*.I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code from filterpy.stats import norm_cdf print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Cumulative probability of range 21.5 to 22.5 is 19.74% Cumulative probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. 
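The peak height follows directly from the equation: at $x=\mu$ the exponential term equals 1, so the density there is $\frac{1}{\sigma\sqrt{2\pi}}$. A quick check:

```python
import numpy as np

for var in (0.2**2, 0.5**2, 1.0**2):
    peak = 1 / np.sqrt(2 * np.pi * var)   # pdf value at x = mu
    print('variance {:.2f} -> peak height {:.2f}'.format(var, peak))
```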
On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values. ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument`normed` to control this. ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*. ###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. 
For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$).

Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.

The following graph depicts the relationship between the standard deviation and the normal distribution.

###Code
from kf_book.gaussian_internal import display_stddev_plot
display_stddev_plot()
###Output
_____no_output_____
###Markdown
Interactive Gaussians

For those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner.

###Code
import math
from ipywidgets import interact, FloatSlider

def plt_g(mu, variance):
    plt.figure()
    xs = np.arange(2, 8, 0.01)
    ys = gaussian(xs, mu, variance)
    plt.plot(xs, ys)
    plt.ylim(0, 0.04)

interact(plt_g, mu=FloatSlider(value=5, min=3, max=7),
         variance=FloatSlider(value=.03, min=.01, max=1.));
###Output
_____no_output_____
###Markdown
Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified.

Computational Properties of Gaussians

The discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians.

A remarkable property of Gaussians is that the [sum of two independent normal variables](https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables) is also normally distributed! The product is not Gaussian, but proportional to a Gaussian. Therefore we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall that *function* in this context means the result is not guaranteed to sum to one).

Before we do the math, let's test this visually.

###Code
x = np.arange(-1, 3, 0.01)
g1 = gaussian(x, mean=0.8, var=.1)
g2 = gaussian(x, mean=1.3, var=.2)
plt.plot(x, g1, x, g2)

g = g1 * g2  # element-wise multiplication
g = g / sum(g)  # normalize
plt.plot(x, g, ls='-.');
###Output
_____no_output_____
###Markdown
Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution.

Gaussians are nonlinear functions.
Typically, if you multiply nonlinear functions together you end up with a different type of function. For example, the shape of the product of two sines is very different from `sin(x)`.

###Code
x = np.arange(0, 4*np.pi, 0.01)
plt.plot(np.sin(1.2*x))
plt.plot(np.sin(1.2*x) * np.sin(2*x));
###Output
_____no_output_____
###Markdown
But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice.

The product of two independent Gaussians is given by:

$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\
\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$

The sum of two Gaussians is given by

$$\begin{gathered}\mu = \mu_1 + \mu_2 \\
\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$

At the end of the chapter I derive these equations. However, understanding the derivation is not very important.

Putting it all Together

Now we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to use Gaussians.

In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so:

###Code
def normalize(p):
    return p / sum(p)

def update(likelihood, prior):
    return normalize(likelihood * prior)

prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2]))
likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16]))
posterior = update(likelihood, prior)
book_plots.bar_plot(posterior)
###Output
_____no_output_____
###Markdown
In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory.

But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart.

###Code
xs = np.arange(0, 10, .01)

def mean_var(p):
    x = np.arange(len(p))
    mean = np.sum(p * x, dtype=float)
    var = np.sum((x - mean)**2 * p)
    return mean, var

mean, var = mean_var(posterior)
book_plots.bar_plot(posterior)
plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r');
print('mean: %.2f' % mean, 'var: %.2f' % var)
###Output
mean: 5.88 var: 1.24
###Markdown
This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.

Next, recall that our filter implements the update function with

```python
def update(likelihood, prior):
    return normalize(likelihood * prior)
```

If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with

$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\
\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$

which is three multiplications and two divisions.
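To make the computational savings concrete, here is a minimal sketch of those two equations as Python helpers. The names `Gaussian`, `gaussian_multiply`, and `gaussian_add` are mine, introduced only for illustration (they are not functions defined by this chapter or by FilterPy here), and the example reuses the two Gaussians plotted earlier:

```python
from collections import namedtuple

Gaussian = namedtuple('Gaussian', ['mean', 'var'])

def gaussian_multiply(g1, g2):
    # product of two Gaussians, up to normalization
    mean = (g1.var * g2.mean + g2.var * g1.mean) / (g1.var + g2.var)
    var = (g1.var * g2.var) / (g1.var + g2.var)
    return Gaussian(mean, var)

def gaussian_add(g1, g2):
    # sum of two independent Gaussian random variables
    return Gaussian(g1.mean + g2.mean, g1.var + g2.var)

print(gaussian_multiply(Gaussian(0.8, 0.1), Gaussian(1.3, 0.2)))
# mean ~0.967, var ~0.067, matching the peak of the plotted product
print(gaussian_add(Gaussian(0.8, 0.1), Gaussian(1.3, 0.2)))
```

However many values the underlying distribution spans, the update now costs a handful of arithmetic operations.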
Bayes TheoremIn the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information. We implemented the `update()` function with this probability calculation:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$where $\| \cdot\|$ expresses normalizing the term.We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes theorem is$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. So, let's plug that into the equation and solve it.$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:```pythondef update(likelihood, prior): posterior = prior * likelihood p(z|x) * p(x) return normalize(posterior)```The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. 
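To connect each term of the theorem to the code, here is the update from the previous section written out with the evidence made explicit. This is only a restatement of the earlier cell; `unnormalized` and `evidence` are my own variable names:

```python
import numpy as np

prior = np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2], dtype=float)
prior /= prior.sum()
likelihood = np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16], dtype=float)
likelihood /= likelihood.sum()

unnormalized = likelihood * prior   # p(z | x_i) * p(x_i) for every cell
evidence = unnormalized.sum()       # p(z), the denominator
posterior = unnormalized / evidence
print('p(z) = {:.4f}, posterior sums to {:.1f}'.format(evidence, posterior.sum()))
```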
We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece ](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step i, what is our probable state given a measurement. That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy. Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$That equation is called the *total probability theorem*. 
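Here is a tiny worked example of that sum with three positions and an explicit table of transition probabilities. It is my own illustration of the equation, not the book's `predict()` implementation, and the numbers are arbitrary:

```python
import numpy as np

# belief over three positions at time t-1
prior = np.array([0.1, 0.7, 0.2])

# transition[i, j] = P(x_i at time t | x_j at time t-1); each column sums to 1
transition = np.array([[0.8, 0.1, 0.0],
                       [0.2, 0.8, 0.2],
                       [0.0, 0.1, 0.8]])

# P(X_i^t) = sum_j P(X_j^{t-1}) P(x_i | x_j)
predicted = transition @ prior
print(predicted, predicted.sum())   # still a valid distribution: sums to 1
```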
Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation```pythonfor i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N result[i] += prob_dist[index] * kernel[k]``` Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [-0.08 2.024 1.4 3.024 5.799 0.989 2.083 0.978 7.542 -2.22 4.984 0.626 4.387 3.676 -0.12 ] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. 
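A quick simulation makes the theorem concrete. Each sample below is the sum of 50 draws from a uniform distribution, which individually looks nothing like a Gaussian, yet the histogram of the sums is unmistakably bell shaped. This sketch assumes only NumPy and Matplotlib, which are used throughout the chapter:

```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(3)
# 10,000 samples, each the sum of 50 uniform random draws
sums = np.random.uniform(0, 1, size=(10000, 50)).sum(axis=1)
plt.hist(sums, bins=50, density=True)
plt.title('sums of 50 uniform random variables');
```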
This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading.

This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability density to *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.

But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test score distributions.

###Code
xs = np.arange(10, 100, 0.05)
ys = [gaussian(x, 90, 30) for x in xs]
plt.plot(xs, ys, label='var=30')
plt.xlim(0, 120)
plt.ylim(-0.02, 0.09);
###Output
_____no_output_____
###Markdown
The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places.

Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution).

Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with:

###Code
from numpy.random import randn

def sense():
    return 10 + randn()*2
###Output
_____no_output_____
###Markdown
Let's plot that signal and see what it looks like.

###Code
zs = [sense() for i in range(5000)]
plt.plot(zs, lw=1);
###Output
_____no_output_____
###Markdown
That looks like what I would expect.
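As a quick numerical check, the sample statistics should land near the values we chose for the simulation. This reuses the `zs` list generated in the cell above:

```python
import numpy as np

data = np.asarray(zs)
print('mean = {:.3f}, std = {:.3f}'.format(data.mean(), data.std()))
# fraction of measurements within one standard deviation of the signal value
print('within +/- 2 of 10: {:.1%}'.format(np.mean(np.abs(data - 10) < 2)))
```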
The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with```pythonfrom filterpy.stats import rand_student_t```While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from an exponential distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from an exponential distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). the `scipy.stats` module contains the function `describe` which computes these statistics, among others. 
###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.8106190910322406, 1.7202801709655346), mean=0.03998695860303425, variance=1.2099810612140205, skewness=0.054824114606583485, kurtosis=-0.8322079773586668) DescribeResult(nobs=300000, minmax=(-5.136201903633123, 4.498934900223554), mean=0.0016752908705450242, variance=1.0019122279656631, skewness=0.002460339180965745, kurtosis=-0.0022807108788165387) ###Markdown [Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb) Gaussian Probabilities ###Code #format the book %matplotlib inline from __future__ import division, print_function from book_format import load_style load_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard Deviations Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get 1 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining things, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. 
Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|Some sources call this the *probability function*. Using ordinary function notation, we would write:$$P(X{=}4) = f(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$. Some texts use $Pr$ or $Prob$ instead of $P$.Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. 
Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code import numpy as np x = [1.85, 2.0, 1.7, 1.9, 1.6] print(np.mean(x)) ###Output 1.81 ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.85, 2.0, 1.7, 1.9, 1.6} is 1.85, because 1.85 is the third element of this set after being sorted. ###Code print(np.median(x)) ###Output 1.85 ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. 
This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \sum_{i=1}^n \frac{1}{n}x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter. Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X)) print(np.mean(Y)) print(np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = E[(X - \mu)^2]$$Ignoring the squared terms for a moment, you can see that the variance is the *expected value* for how much the sample space ($X$) varies from the mean. I will explain the purpose of the squared term later. We have the formula for the expected value $E[X] = \sum\limits_{i=1}^n p_ix_i$, and we will assume that any height is equally probable, so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(np.var(X), "meters squared") ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. 
In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.book_plots import set_figsize, figsize from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] plot_height_std(data, lw=2) print('mean = {:.3f}'.format(np.mean(data))) print('std = {:.3f}'.format(np.std(data))) ###Output mean = 1.809 std = 0.139 ###Markdown We can see by eye that roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8.We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.4f} m'.format(np.std(Y))) ###Output std of Y is 0.3899 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. 
In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! We will not normally be faced with these problems in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom='off') ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the formula using squares we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have $X = [1,-1,1,-2,3,2,100]$. ###Code X = [1, -1, 1, -2, 3, 2, 100] print('Variance of X = {:.2f}'.format(np.var(X))) ###Output Variance of X = 1210.69 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). 
Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plt.figure() ax = plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf') ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. In the chart above, a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code ax = plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)') ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis.You may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. This is true, but this is a common limitation of mathematical modeling. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. 
The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.```python%load -s gaussian stats.pydef gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. """ return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) / math.sqrt(2*math.pi*var))```We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ###Code from filterpy.stats import gaussian, norm_cdf ax = plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$') ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can "sort of" think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. 
So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22 is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code print('Probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of range 21.5 to 22.5 is 19.74% Probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements for over any range.> Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. 
On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically: ###Code import numpy as np import matplotlib.pyplot as plt xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.05), label='$\sigma^2$=0.05', c='b') plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':', c='b') plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--', c='b') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.05$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. Our believe that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.05$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=5$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.05$ represents a very accurate thermometer, and curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. 
Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from IPython.html.widgets import interact, interactive, fixed set_figsize(y=3) def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.1) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim((0, 1)) interact (plt_g, mu=(0., 10), variance = (.2, 1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansA remarkable property of Gaussians is that the product of two independent Gaussians is another Gaussian! The sum is not Gaussian, but proportional to a Gaussian.The discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. The Gaussian is a nonlinear function. Typically, if you multiply a nonlinear equation with itself you end up with a different type of equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the result of multiplying two Gaussians is yet another Gaussian. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$The remainder of this section is optional. I will derive the equations for the sum and product of two Gaussians. You will not need to understand this material to understand the rest of the book, so long as you accept the results. Product of GaussiansYou can find this result by multiplying the equation for two Gaussians together and combining terms. The algebra gets messy. I will derive it using Bayes theorem. We can state the problem as: let the prior be $N(\bar\mu, \bar\sigma^2)$, and measurement be $z \propto N(z, \sigma_z^2)$. What is the posterior x given the measurement z?Write the posterior as $P(x \mid z)$. 
Now we can use Bayes Theorem to state$$P(x \mid z) = \frac{P(z \mid x)P(x)}{P(z)}$$$P(z)$ is a normalizing constant, so we can create a proportionality$$P(x \mid z) \propto P(z|x)P(x)$$Now we substitute in the equations for the Gaussians, which are$$P(z \mid x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]$$$$P(x) = \frac{1}{\sqrt{2\pi\bar\sigma^2}}\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]$$We can drop the leading terms, as they are constants, giving us$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]\\&\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z-x)^2+\sigma_z^2(x-\bar\mu)^2]\Big]\end{aligned}$$Now we multiply out the squared terms and group in terms of the posterior $x$.$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z^2 -2xz + x^2) + \sigma_z^2(x^2 - 2x\bar\mu+\bar\mu^2)]\Big ] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z) + (\bar\sigma^2z^2+\sigma_z^2\bar\mu^2)]\Big ]\end{aligned}$$The last parentheses do not contain the posterior $x$, so it can be treated as a constant and discarded.$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z)}{\sigma_z^2\bar\sigma^2}\Big ]$$Divide numerator and denominator by $\bar\sigma^2+\sigma_z^2$ to get$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2-2x(\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$Proportionality allows us to create or delete constants at will, so we can factor this into$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{(x-\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})^2}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$A Gaussian is$$N(\mu,\, \sigma^2) \propto \exp\Big [-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\Big ]$$So we can see that $P(x \mid z)$ has a mean of$$\mu_\mathtt{posterior} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$and a variance of$$\sigma^2_\mathtt{posterior} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}$$I've dropped the constants, and so the result is not a normal, but proportional to one. Bayes theorem normalizes with the $P(z)$ divisor, ensuring that the result is normal. We normalize in the update step of our filters, ensuring the filter estimate is Gaussian.$$\mathcal N_1 = \| \mathcal N_2\cdot \mathcal N_3\|$$ Sum of GaussiansThe sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$There are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities. To find the density function of the sum of two Gaussian random variables we convolve the density functions of the two. They are continuous functions, so we compute the convolution with an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent we can compute this with$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$This is the equation for a convolution.
Now we just do some math:$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$$= \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_p}\exp\left[-\frac{(x - z - \mu_p)^2}{2\sigma^2_p}\right]\frac{1}{\sqrt{2\pi}\sigma_z}\exp\left[-\frac{(z - \mu_z)^2}{2\sigma^2_z}\right] \, dz$$= \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(z - \frac{\sigma_z^2(x-\mu_p) + \sigma_p^2\mu_z}{\sigma_p^2+\sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dz$$= \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right] \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(z - \frac{\sigma_z^2(x-\mu_p) + \sigma_p^2\mu_z}{\sigma_p^2+\sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dz$The expression inside the integral is a normal distribution of $z$. The integral of a normal distribution over its full range is one, hence the integral evaluates to one. This gives us$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]$$This is in the form of a normal, where$$\begin{gathered}\mu_x = \mu_p + \mu_z \\\sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square\end{gathered}$$ Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.131146572034 0.131146572034 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function.
###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 1.313 2.32 7.222 1.482 -2.586 6.08 -0.536 1.988 1.712 1.512 2.502 1.878 0.834 4.719 0.326] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Fat TailsEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. The resulting distributions are called *fat tailed*. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an infinitesimal chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution. ###Code xs = np.arange(10,100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim((0,120)) plt.ylim(0, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a [*fat tail distribution*](https://en.wikipedia.org/wiki/Fat-tailed_distribution). Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. 
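Returning to the test scores for a moment, we can quantify how much probability the normal model places on impossible results. This is a small check using `scipy.stats.norm` with the mean of 90 and standard deviation of 13 from the discussion above.

```python
from scipy.stats import norm

scores = norm(90, 13)   # scipy's norm takes the standard deviation, not the variance
print('P(score > 100) = {:.3f}'.format(scores.sf(100)))  # sf() is the survival function, 1 - cdf
print('P(score < 0)   = {:.2e}'.format(scores.cdf(0)))
```

Whatever mass the model puts outside the possible range of 0 to 100 has to end up somewhere inside the range, which is one way a real distribution departs from the Gaussian.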
It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a fat tailed distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown [Table of Contents](./table_of_contents.ipynb) Probabilities, Gaussians, and Bayes' Theorem ###Code %matplotlib inline from __future__ import division, print_function #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard DeviationsMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. 
Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:$$P(X{=}4) = p(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. 
The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observent. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. 
###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency then the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true. ###Code np.median(x) ###Output _____no_output_____ ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has a 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral and weight each value by its density, like so$$\mathbb E[X] = \int_{a}^b x\, f(x) \,dx$$where $f(x)$ is the probability density function of $x$ and $[a, b]$ is the range of values $x$ can take. We won't be using this equation yet, but we will be using it in the next chapter.We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically.
###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact values requires an infinite sample size. ExerciseWhat is the expected value of a die role? SolutionEach side is equally likely, so each has a probability of 1/6. Hence$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ ExerciseGiven the uniform continuous distribution$$f(x) = \frac{1}{b - a}$$compute the expected value for $a=0$ and $B=20$. Solution$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print("{:.2f} meters squared".format(np.var(X))) ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. 
Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print('mean = {:.3f}'.format(mean)) print('std = {:.3f}'.format(std)) ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. 
There is no variation in the values, so we would expect the standard deviation to be zero.$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! We will not to consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the formula using squares we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print('Variance of X with outlier = {:6.2f}'.format(np.var(X))) print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1]))) ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. 
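As an aside, the robust statistics mentioned below offer measures of spread that a single outlier cannot swamp. Here is a minimal sketch using the median absolute deviation; the helper name `mad` is mine, chosen for illustration.

```python
import numpy as np

def mad(data):
    """Median absolute deviation - a robust measure of spread."""
    data = np.asarray(data)
    return np.median(np.abs(data - np.median(data)))

X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100]
print('MAD with outlier    = {:.2f}'.format(mad(X)))
print('MAD without outlier = {:.2f}'.format(mad(X[:-1])))
```

Unlike the variance, which jumps from about 2 to over 600, this measure is unchanged by the outlier.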
Obviously it depends on your problem.I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. In this book we will always use variance and standard deviation as defined by Gauss.The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. We can tell from the chart student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 1.9 meaters. Finally, notice that the curve is centered over the mean of 1.8 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. 
We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. Shorn of the constants, you can see it is a simple exponential: $$f(x)\propto e^{-x^2}$$which has the familiar bell curve shape ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now. ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$. 
###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.$$M = \iiint_R p(x,y,z)\, dV$$We do the same with *probability density*. If you want to know the temperature being between 20°C and 21°C kph you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of a the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. Thinking back to the rock, what is the weight of an single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? 
You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$This is called the *cumulative probability distribution*, commonly abbreviated *cdf*.I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code from filterpy.stats import norm_cdf print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Cumulative probability of range 21.5 to 22.5 is 19.74% Cumulative probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements for over any range.Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values. ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument`normed` to control this. ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*. 
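To see exactly what the normalization does, we can normalize the unnormalized output ourselves; dividing the raw densities by their sum should reproduce the `normed=True` result shown above.

```python
import numpy as np
from filterpy.stats import gaussian

raw = np.asarray(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False))
print(raw / raw.sum())   # should match the normed=True output, [0.378 0.622]
```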
###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. 
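As a quick check of the numbers quoted above, we can verify the 68-95-99.7 rule with `norm_cdf`, using $\mu=22$ and $\sigma=0.2$:

```python
from filterpy.stats import norm_cdf

mu, sigma = 22., 0.2
for k in (1, 2, 3):
    p = norm_cdf((mu - k*sigma, mu + k*sigma), mu, sigma**2) * 100
    print('within {} standard deviation(s): {:.1f}%'.format(k, p))
```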
As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansThe discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. A remarkable property of Gaussian distributions is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian. There we can say that the result of multipying two Gaussian distributions is a Gaussian function (recall function in this context means that the property that the values sum to one is not guaranteed).Before we do the math, let's test this visually. ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.8, var=.1) g2 = gaussian(x, mean=1.3, var=.2) plt.plot(x, g1, x, g2) g = g1 * g2 # element-wise multiplication g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output _____no_output_____ ###Markdown Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution.Gaussians are nonlinear functions. Typically, if you multiply a nonlinear equations you end up with a different type of function. For example, the shape of multiplying two sins is very different from `sin(x)`. ###Code x = np.arange(0, 4*np.pi, 0.01) plt.plot(np.sin(1.2*x)) plt.plot(np.sin(1.2*x) * np.sin(2*x)); ###Output _____no_output_____ ###Markdown But the result of multiplying two Gaussians distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$At the end of the chapter I derive these equations. However, understanding the deriviation is not very important. 
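Before moving on, it is worth sanity-checking the product equations numerically against the element-wise multiplication we performed above. The following sketch discretizes the same $\mathcal N(0.8, 0.1)$ and $\mathcal N(1.3, 0.2)$, multiplies them, and compares the resulting mean and variance to the closed-form values.

```python
import numpy as np
from filterpy.stats import gaussian

m1, v1 = 0.8, 0.1
m2, v2 = 1.3, 0.2

# discrete product, normalized into a probability distribution
xs = np.arange(-1, 3, 0.001)
p = gaussian(xs, m1, v1, normed=False) * gaussian(xs, m2, v2, normed=False)
p = p / np.sum(p)
mean_num = np.sum(xs * p)
var_num = np.sum((xs - mean_num)**2 * p)

# closed-form product of two Gaussians
mean_cf = (v1*m2 + v2*m1) / (v1 + v2)
var_cf = v1*v2 / (v1 + v2)

print('numeric:     mean {:.4f}, var {:.4f}'.format(mean_num, var_num))
print('closed form: mean {:.4f}, var {:.4f}'.format(mean_cf, var_cf))
```

The two should agree to several decimal places, which is as much proof as we need for now.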
Putting it all Together

Now we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to use Gaussians.

In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so:

###Code
def normalize(p):
    return p / sum(p)

def update(likelihood, prior):
    return normalize(likelihood * prior)

prior =      normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2]))
likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16]))
posterior = update(likelihood, prior)
book_plots.bar_plot(posterior)
###Output
_____no_output_____
###Markdown
In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory.

But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart.

###Code
xs = np.arange(0, 10, .01)

def mean_var(p):
    x = np.arange(len(p))
    mean = np.sum(p * x, dtype=float)
    var = np.sum((x - mean)**2 * p)
    return mean, var

mean, var = mean_var(posterior)
book_plots.bar_plot(posterior)
plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r');
print('mean: %.2f' % mean, 'var: %.2f' % var)
###Output
mean: 5.88 var: 1.24
###Markdown
This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.

Next, recall that our filter implements the update function with

```python
def update(likelihood, prior):
    return normalize(likelihood * prior)
```

If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with

$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\
\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}
\end{aligned}$$

which is three multiplications and two divisions.

Bayes Theorem

In the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information.

We implemented the `update()` function with this probability calculation:

$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$

It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:

$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$

where $\| \cdot\|$ expresses normalizing the term.

We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems.
We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes theorem is$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches out update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. So, let's plug that into the equation and solve it.$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:```pythondef update(likelihood, prior): posterior = prior * likelihood p(z|x) * p(x) return normalize(posterior)```The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece ](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. 
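To make that concrete in code, here is the discrete update from earlier in this chapter with the evidence $p(z)$ written out explicitly. Nothing new is introduced — the arrays are the same ones used above, and dividing by the evidence is exactly what `normalize()` did.

```python
import numpy as np

def normalize(p):
    return p / sum(p)

prior      = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2]))
likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16]))

unnormalized = likelihood * prior    # p(z | x_i) * p(x_i) for every cell
evidence = sum(unnormalized)         # p(z), the denominator of Bayes' theorem
posterior = unnormalized / evidence  # Bayes' theorem applied cell by cell

print('p(z) = {:.4f}'.format(evidence))
print('posterior sums to {:.1f}'.format(sum(posterior)))
```

The posterior computed this way is identical to the output of `update()`; the "impossible" denominator is nothing more than the sum we divide by.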
We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step i, what is our probable state given a measurement. That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy. Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation```pythonfor i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N result[i] += prob_dist[index] * kernel[k]``` Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. 
So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 5.912 -2.009 -2.718 1.266 -1.085 3.941 3.499 5.626 -0.137 1.396 4.562 2.127 8.176 1.794 1.829] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. 
This is because the distribution assigns a nonzero probability to *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.

But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test score distributions.

###Code
xs = np.arange(10, 100, 0.05)
ys = [gaussian(x, 90, 30) for x in xs]
plt.plot(xs, ys, label='$\sigma^2=30$')
plt.xlim(0, 120)
plt.ylim(-0.02, 0.09);
###Output
_____no_output_____
###Markdown
The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places.

Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution).

Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with:

###Code
from numpy.random import randn

def sense():
    return 10 + randn()*2
###Output
_____no_output_____
###Markdown
Let's plot that signal and see what it looks like.

###Code
zs = [sense() for i in range(5000)]
plt.plot(zs, lw=1);
###Output
_____no_output_____
###Markdown
That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99.7% will be within $\pm$ 6 of 10, and that looks like what is happening.

Now let's look at a distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation.
""" x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with```pythonfrom filterpy.stats import rand_student_t```While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from an exponential distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from an exponential distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). the `scipy.stats` module contains the function `describe` which computes these statistics, among others. ###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.8942931152842175, 0.49750728125905835), mean=-0.10563915941786776, variance=0.4841165908890319, skewness=-1.8464582995970673, kurtosis=2.5452896197893757) DescribeResult(nobs=300000, minmax=(-4.772620736872989, 4.446895068081072), mean=-0.0006837046884366415, variance=0.9995353806594786, skewness=0.002331471754136653, kurtosis=0.007185223820032061) ###Markdown [Table of Contents](./table_of_contents.ipynb) Gaussian Probabilities ###Code #format the book %matplotlib inline from __future__ import division, print_function from book_format import load_style load_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). 
For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard DeviationsMost of you will have had exposure to at least basic statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. 
Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|Some sources call this the *probability function*. Using ordinary function notation, we would write:$$P(X{=}4) = f(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$. Some texts use $Pr$ or $Prob$ instead of $P$.Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code import numpy as np x = [1.8, 2.0, 1.7, 1.9, 1.6] print(np.mean(x)) ###Output 1.8 ###Markdown The *mode* of a set of numbers is the number that occurs most often. 
If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. ###Code print(np.median(x)) ###Output 1.8 ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter. Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. 
For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X)) print(np.mean(Y)) print(np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = E[(X - \mu)^2]$$Ignoring the squaring for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. We have the formula for the expected value $E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(np.var(X), "meters squared") ###Output 0.01999999999999999 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. 
We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.book_plots import set_figsize, figsize from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] plot_height_std(data, lw=2) print('mean = {:.3f}'.format(np.mean(data))) print('std = {:.3f}'.format(np.std(data))) ###Output _____no_output_____ ###Markdown We can see by eye that roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8.We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! We will not be faced with these kinds of problems in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. 
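The desk-ordering point is easy to see in a small simulation. The group means and standard deviations below are made up purely for illustration; the takeaway is that the combined mean describes neither cluster well.

```python
import numpy as np
from numpy.random import randn

np.random.seed(3)                   # make the illustration repeatable
group_a = 1.62 + 0.05 * randn(500)  # hypothetical heights of one group
group_b = 1.78 + 0.06 * randn(500)  # hypothetical heights of the other group
combined = np.concatenate((group_a, group_b))

print('group A mean {:.2f} m, group B mean {:.2f} m'.format(
    group_a.mean(), group_b.mean()))
print('combined mean {:.2f} m, combined std {:.2f} m'.format(
    combined.mean(), combined.std()))
```

The combined standard deviation is also larger than that of either group alone, because it has to account for the separation between the two clusters.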
Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the formula using squares we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, 3, 2, 100] print('Variance of X = {:.2f}'.format(np.var(X))) ###Output Variance of X = 1210.69 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. In this book we will always use variance and standard deviation as defined by Gauss. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plt.figure() plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. In the chart above, a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. 
This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. For example, you may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). 
It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.```python%load -s gaussian stats.pydef gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. """ return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) / math.sqrt(2*math.pi*var))```We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can "sort of" think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.$$M = \iiint_R p(x,y,z)\, dV$$We do the same with *probability density*. If you want to know the temperature being between 20°C and 21°C kph you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of a the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. 
In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code from filterpy.stats import norm_cdf print('Probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of range 21.5 to 22.5 is 19.74% Probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements for over any range.> Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. 
On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically: ###Code from filterpy.stats import gaussian xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.05), label='$\sigma^2$=0.05') plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':') plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.05$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. Our believe that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.05$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=5$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.05$ represents a very accurate thermometer, and curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. 
Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, interactive, fixed set_figsize(y=3) def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.1) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim((0, 1)) interact (plt_g, mu=(0., 10), variance = (.2, 1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansA remarkable property of Gaussians is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian.The discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. The Gaussian is a nonlinear function. Typically, if you multiply a nonlinear equation with itself you end up with a different type of equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the result of multiplying two Gaussians is yet another Gaussian. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$The remainder of this section is optional. I will derive the equations for the sum and product of two Gaussians. You will not need to understand this material to understand the rest of the book, so long as you accept the results. Product of GaussiansYou can find this result by multiplying the equation for two Gaussians together and combining terms. The algebra gets messy. I will derive it using Bayes theorem. We can state the problem as: let the prior be $N(\bar\mu, \bar\sigma^2)$, and measurement be $z \propto N(z, \sigma_z^2)$. What is the posterior x given the measurement z?Write the posterior as $P(x \mid z)$. 
Now we can use Bayes Theorem to state$$P(x \mid z) = \frac{P(z \mid x)P(x)}{P(z)}$$$P(z)$ is a normalizing constant, so we can create a proportionality$$P(x \mid z) \propto P(z|x)P(x)$$Now we substitute in the equations for the Gaussians, which are$$P(z \mid x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]$$$$P(x) = \frac{1}{\sqrt{2\pi\bar\sigma^2}}\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]$$We can drop the leading terms, as they are constants, giving us$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]\\&\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z-x)^2+\sigma_z^2(x-\bar\mu)^2]\Big]\end{aligned}$$Now we multiply out the squared terms and group in terms of the posterior $x$.$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z^2 -2xz + x^2) + \sigma_z^2(x^2 - 2x\bar\mu+\bar\mu^2)]\Big ] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z) + (\bar\sigma^2z^2+\sigma_z^2\bar\mu^2)]\Big ]\end{aligned}$$The term in the last parentheses does not contain the posterior $x$, so it can be treated as a constant and discarded.$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z)}{\sigma_z^2\bar\sigma^2}\Big ]$$Divide numerator and denominator by $\bar\sigma^2+\sigma_z^2$ to get$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2-2x(\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$Proportionality allows us to create or delete constants at will, so we can factor this into$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{(x-\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})^2}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$A Gaussian is$$N(\mu,\, \sigma^2) \propto \exp\Big [-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\Big ]$$So we can see that $P(x \mid z)$ has a mean of$$\mu_\mathtt{posterior} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$and a variance of$$\sigma^2_\mathtt{posterior} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}$$I've dropped the constants, and so the result is not a normal, but proportional to one. Bayes theorem normalizes with the $P(z)$ divisor, ensuring that the result is normal. We normalize in the update step of our filters, ensuring the filter estimate is Gaussian.$$\mathcal N_1 = \| \mathcal N_2\cdot \mathcal N_3\|$$ Sum of GaussiansThe sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$There are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities. To find the density function of the sum of two independent Gaussian random variables we convolve the density functions of the two. They are continuous functions, so we compute the convolution with an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent we can compute this with$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$This is the equation for a convolution. 
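Before we work through the integral, a quick numerical sanity check can make the claimed result plausible. This is only a sketch of my own; the means, variances, seed, and sample count below are arbitrary choices, not anything the derivation requires. We draw a large number of samples from two independent Gaussians with NumPy and confirm that the sample mean and variance of their sum land near $\mu_1+\mu_2$ and $\sigma_1^2+\sigma_2^2$. ###Code
import numpy as np

np.random.seed(13)  # fixed seed so the check is repeatable

# two independent Gaussians (example values chosen arbitrarily)
mu_1, var_1 = 10., 4.    # N(10, 4)
mu_2, var_2 = 3., 2.25   # N(3, 2.25)

N = 500000
x1 = np.random.normal(mu_1, np.sqrt(var_1), N)
x2 = np.random.normal(mu_2, np.sqrt(var_2), N)
s = x1 + x2   # the sum of the two random variables

# the sample statistics should be close to mu_1 + mu_2 = 13
# and to var_1 + var_2 = 6.25
assert abs(np.mean(s) - (mu_1 + mu_2)) < 0.05
assert abs(np.var(s) - (var_1 + var_2)) < 0.05
###Output
_____no_output_____
###Markdown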
Now we just do some math:$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$$= \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_p}\exp\left[-\frac{(x - z - \mu_p)^2}{2\sigma^2_p}\right]\frac{1}{\sqrt{2\pi}\sigma_z}\exp\left[-\frac{(z - \mu_z)^2}{2\sigma^2_z}\right] \, dz$$= \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{\big(x - (\mu_p + \mu_z)\big)^2}{2(\sigma_z^2+\sigma_p^2)}\right]\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(z - \frac{\sigma_z^2(x-\mu_p) + \sigma_p^2\mu_z}{\sigma_p^2+\sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dz$$= \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{\big(x - (\mu_p + \mu_z)\big)^2}{2(\sigma_z^2+\sigma_p^2)}\right] \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(z - \frac{\sigma_z^2(x-\mu_p) + \sigma_p^2\mu_z}{\sigma_p^2+\sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dz$The expression inside the integral is a normal distribution over $z$. The area under a normal distribution is one, hence the integral evaluates to one. This gives us$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{\big(x - (\mu_p + \mu_z)\big)^2}{2(\sigma_z^2+\sigma_p^2)}\right]$$This is in the form of a normal, where$$\begin{gathered}\mu_x = \mu_p + \mu_z \\\sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square\end{gathered}$$ Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the `norm` variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397997 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. 
###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 3.532 2.938 5.114 3.104 5.776 3.19 -1.491 4.938 1.16 4.501 1.997 2.38 4.186 6.011 1.8 ] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test scores distributions. ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. 
It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with```pythonfrom filterpy.stats import rand_student_t```While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from an exponential distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. 
The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from an exponential distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). the `scipy.stats` module contains the function `describe` which computes these statistics, among others. ###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-2.2922970571743235, 2.07937157289957), mean=-0.39398879831470823, variance=1.6367898886878445, skewness=0.4089745645544576, kurtosis=-0.44193847413985043) DescribeResult(nobs=300000, minmax=(-4.8659578698172155, 4.692624465524303), mean=0.00014668102226286805, variance=1.001370952607716, skewness=0.0011541152522754141, kurtosis=-0.0006976512418868097) ###Markdown [Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb) Gaussian Probabilities ###Code #format the book %matplotlib inline from __future__ import division, print_function from book_format import load_style load_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard Deviations Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get 1 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining things, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. 
The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|Some sources call this the *probability function*. Using ordinary function notation, we would write:$$P(X{=}4) = f(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Some texts use $Pr$ or $Prob$ instead of $P$.Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random variable* are known as the *discrete probability distribution*, and the probabilities for all values of a *continuous random variable* are known as the *continuous probability distribution*.To be a probability distribution the probability assigned to each value $x_i$ must satisfy $P(X{=}x_i) \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formalize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions. 
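As a concrete check of these two requirements, here is a small sketch of my own in plain Python (the variable names are arbitrary). It encodes the fair-die distribution from the table above and verifies that no probability is negative and that the probabilities sum to one. ###Code
# discrete probability distribution for a fair six-sided die
die = {1: 1/6, 2: 1/6, 3: 1/6, 4: 1/6, 5: 1/6, 6: 1/6}

# P(X=4) = 1/6, as in the table above
assert die[4] == 1/6

# no probability may be negative ...
assert all(p >= 0 for p in die.values())

# ... and the probabilities must sum to one (allowing floating point roundoff)
assert abs(sum(die.values()) - 1) < 1e-12
###Output
_____no_output_____
###Markdown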
The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code import numpy as np x = [1.85, 2.0, 1.7, 1.9, 1.6] print(np.mean(x)) ###Output 1.81 ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.85, 2.0, 1.7, 1.9, 1.6} is 1.85, because 1.85 is the third element of this set after being sorted. ###Code print(np.median(x)) ###Output 1.85 ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. 
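We can reproduce this computation with NumPy; `np.average` accepts a `weights` argument, which is exactly the probability weighting used above. This snippet is only an illustration of the arithmetic, not code we reuse later. ###Code
import numpy as np

# values of x and the probability of each value occurring
x = [1, 3, 5]
p = [0.8, 0.15, 0.05]

# the expected value is the probability-weighted average of the values
expected = np.average(x, weights=p)
assert abs(expected - 1.5) < 1e-9
###Output
_____no_output_____
###Markdown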
The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \sum_{i=1}^n \frac{1}{n}x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter. Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X)) print(np.mean(Y)) print(np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = E[(X - \mu)^2]$$Ignoring the squared terms for a moment, you can see that the variance is the *expected value* for how much the sample space ($X$) varies from the mean. I will explain the purpose of the squared term later. We have the formula for the expected value $E[X] = \sum\limits_{i=1}^n p_ix_i$, and we will assume that any height is equally probable, so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(np.var(X), "meters squared") ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. 
Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.book_plots import set_figsize, figsize from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] plot_height_std(data, lw=2) print('mean = {:.3f}'.format(np.mean(data))) print('std = {:.3f}'.format(np.std(data))) ###Output mean = 1.809 std = 0.139 ###Markdown We can see by eye that roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8.We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.4f} m'.format(np.std(Y))) ###Output std of Y is 0.3899 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. 
We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school! We will not normally be faced with these problems in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom='off') ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we also get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same value. If we use the formula with squares we get a standard deviation of 3.5 for $Y$ versus 3.0 for $X$, which reflects $Y$'s larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have $X = [1,-1,1,-2,3,2,100]$. ###Code X = [1, -1, 1, -2, 3, 2, 100] print('Variance of X = {:.2f}'.format(np.var(X))) ###Output Variance of X = 1210.69 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. 
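To make the comparison above concrete, here is a short sketch (the helper name is my own, nothing from FilterPy) that computes both spread measures for the two data sets discussed earlier. The absolute-value measure cannot tell $X$ and $Y$ apart, while the squared measure can. ###Code
import numpy as np

X = [3, -3, 3, -3]
Y = [6, -2, -3, 1]

def mean_abs_deviation(data):
    """average absolute distance of each value from the mean"""
    data = np.asarray(data)
    return np.mean(np.abs(data - data.mean()))

# the absolute-value measure gives 3.0 for both sets ...
assert mean_abs_deviation(X) == 3.0
assert mean_abs_deviation(Y) == 3.0

# ... but the squared measure distinguishes them:
# std(X) = 3.0 (variance 9), std(Y) = 3.5 (variance 12.25)
assert np.isclose(np.std(X), 3.0)
assert np.isclose(np.std(Y), 3.5)
###Output
_____no_output_____
###Markdown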
GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plt.figure() ax = plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf') ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. In the chart above, a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code ax = plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)') ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis.You may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. This is true, but this is a common limitation of mathematical modeling. 
“The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.```python%load -s gaussian stats.pydef gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. """ return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) / math.sqrt(2*math.pi*var))```We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ###Code from filterpy.stats import gaussian, norm_cdf ax = plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$') ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can "sort of" think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? 
It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22 is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code print('Probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of range 21.5 to 22.5 is 19.74% Probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements for over any range.> Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. 
On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically: ###Code import numpy as np import matplotlib.pyplot as plt xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.05), label='$\sigma^2$=0.05', c='b') plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':', c='b') plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--', c='b') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.05$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. Our believe that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.05$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=5$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.05$ represents a very accurate thermometer, and curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. 
Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from IPython.html.widgets import interact, interactive, fixed set_figsize(y=3) def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.1) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim((0, 1)) interact (plt_g, mu=(0., 10), variance = (.2, 1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansA remarkable property of Gaussians is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian.The discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. The Gaussian is a nonlinear function. Typically, if you multiply a nonlinear equation with itself you end up with a different type of equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the result of multiplying two Gaussians is yet another Gaussian. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$The remainder of this section is optional. I will derive the equations for the sum and product of two Gaussians. You will not need to understand this material to understand the rest of the book, so long as you accept the results. Product of GaussiansYou can find this result by multiplying the equation for two Gaussians together and combining terms. The algebra gets messy. I will derive it using Bayes theorem. We can state the problem as: let the prior be $N(\bar\mu, \bar\sigma^2)$, and measurement be $z \propto N(z, \sigma_z^2)$. What is the posterior x given the measurement z?Write the posterior as $P(x \mid z)$. 
Now we can use Bayes Theorem to state$$P(x \mid z) = \frac{P(z \mid x)P(x)}{P(z)}$$$P(z)$ is a normalizing constant, so we can create a proportinality$$P(x \mid z) \propto P(z|x)P(x)$$Now we subtitute in the equations for the Gaussians, which are$$P(z \mid x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]$$$$P(x) = \frac{1}{\sqrt{2\pi\bar\sigma^2}}\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]$$We can drop the leading terms, as they are constants, giving us$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]\\&\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z-x)^2-\sigma_z^2(x-\bar\mu)^2]\Big]\end{aligned}$$Now we multiply out the squared terms and group in terms of the posterior $x$.$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z^2 -2xz + x^2) + \sigma_z^2(x^2 - 2x\bar\mu+\bar\mu^2)]\Big ] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z) + (\bar\sigma^2z^2+\sigma_z^2\bar\mu^2)]\Big ]\end{aligned}$$The last parentheses do not contain the posterior $x$, so it can be treated as a constant and discarded.$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z)}{\sigma_z^2\bar\sigma^2}\Big ]$$Divide numerator and denominator by $\bar\sigma^2+\sigma_z^2$ to get$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2-2x(\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$Proportionality allows us create or delete constants at will, so we can factor this into$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{(x-\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})^2}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$A Gaussian is$$N(\mu,\, \sigma^2) \propto \exp\Big [-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\Big ]$$So we can see that $P(x \mid z)$ has a mean of$$\mu_\mathtt{posterior} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$and a variance of$$\sigma_\mathtt{posterior} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}$$I've dropped the constants, and so the result is not a normal, but proportional to one. Bayes theorem normalizes with the $P(z)$ divisor, ensuring that the result is normal. We normalize in the update step of our filters, ensuring the filter estimate is Gaussian.$$\mathcal N_1 = \| \mathcal N_2\cdot \mathcal N_3\|$$ Sum of GaussiansThe sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$There are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities. To find the density function of the sum of two Gaussian random variables we sum the density functions of each. They are nonlinear, continuous functions, so we need to compute the sum with an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent we can compute this with$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dx$This is the equation for a convolution. 
Now we just do some math:$p(x) = \int\limits_{-\infty}^\infty f_2(x-x_1)f_1(x_1)\, dx$$= \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_z}\exp\left[-\frac{(x - z - \mu_z)^2}{2\sigma^2_z}\right]\frac{1}{\sqrt{2\pi}\sigma_p}\exp\left[-\frac{(x - \mu_p)^2}{2\sigma^2_p}\right] \, dx$$= \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z)))^2}{2(\sigma_z^2+\sigma_p^2)}\right]\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{(x - \frac{\sigma_p^2(x-\mu_z) + \sigma_z^2\mu_p}{}))^2}{2\left(\frac{\sigma_p\sigma_x}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dx$$= \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z)))^2}{2(\sigma_z^2+\sigma_p^2)}\right] \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{(x - \frac{\sigma_p^2(x-\mu_z) + \sigma_z^2\mu_p}{}))^2}{2\left(\frac{\sigma_p\sigma_x}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dx$The expression inside the integral is a normal distribution. The sum of a normal distribution is one, hence the integral is one. This gives us$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z)))^2}{2(\sigma_z^2+\sigma_p^2)}\right]$$This is in the form of a normal, where$$\begin{gathered}\mu_x = \mu_p + \mu_z \\\sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square\end{gathered}$$ Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.131146572034 0.131146572034 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. 
###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 1.313 2.32 7.222 1.482 -2.586 6.08 -0.536 1.988 1.712 1.512 2.502 1.878 0.834 4.719 0.326] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Fat TailsEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. The resulting distributions are called *fat tailed*. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an infinitesimal chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution. ###Code xs = np.arange(10,100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim((0,120)) plt.ylim(0, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a [*fat tail distribution*](https://en.wikipedia.org/wiki/Fat-tailed_distribution). Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. 
It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a fat tailed distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown [Table of Contents](./table_of_contents.ipynb) Probabilities, Gaussians, and Bayes' Theorem ###Code %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard DeviationsMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. 
It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:$$P(X{=}4) = p(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. 
This can lead you astray if you are not observant. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random variable* are known as the *discrete probability distribution* and the probabilities for all values of a *continuous random variable* are known as the *continuous probability distribution*.To be a probability distribution each probability $p(x_i)$ must satisfy $p(x_i) \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formalize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u p(u) \,du= 1$$for continuous distributions, where $p(u)$ is the probability density.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters are $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. 
If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency then the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.NumPy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true. ###Code np.median(x) ###Output _____no_output_____ ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has a 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$where $f(x)$ is the probability density function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically. 
###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact values requires an infinite sample size. ExerciseWhat is the expected value of a die roll? SolutionEach side is equally likely, so each has a probability of 1/6. Hence$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ ExerciseGiven the uniform continuous distribution$$f(x) = \frac{1}{b - a}$$compute the expected value for $a=0$ and $b=20$. Solution$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(f"{np.var(X):.2f} meters squared") ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. 
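To make the formula concrete, here is the same computation done directly from the definition; this is a small sketch that simply repeats the hand calculation above in code and confirms `np.var`.

```python
import numpy as np

X = [1.8, 2.0, 1.7, 1.9, 1.6]
mu = np.mean(X)                              # 1.8
var = sum((x - mu)**2 for x in X) / len(X)   # average of the squared deviations
print(var, np.var(X))                        # both are 0.02, up to floating point
```

The answer is 0.02, and it carries units of meters squared.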
Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print(f"std {np.std(X):.4f}") print(f"var {np.std(X)**2:.4f}") ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print(f'mean = {mean:.3f}') print(f'std = {std:.3f}') ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print(f'std of Y is {np.std(Y):.2f} m') ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. 
There is no variation in the values, so we would expect the standard deviation to be zero.$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well-nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school! We will not consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the formula using squares we get a variance of 12.25 for $Y$ versus 9 for $X$, which reflects $Y$'s larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print(f'Variance of X with outlier = {np.var(X):6.2f}') print(f'Variance of X without outlier = {np.var(X[:-1]):6.2f}') ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. 
Obviously it depends on your problem.I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [4]. In this book we will always use variance and standard deviation as defined by Gauss.The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. We can tell from the chart student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. 
We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. Shorn of the constants, you can see it is a simple exponential: $$f(x)\propto e^{-x^2}$$which has the familiar bell curve shape ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now. ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$. 
###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.$$M = \iiint_R p(x,y,z)\, dV$$We do the same with *probability density*. If you want to know the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. Thinking back to the rock, what is the weight of an single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? 
You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$This is called the *cumulative distribution function*, commonly abbreviated *cdf*.I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code from filterpy.stats import norm_cdf print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Cumulative probability of range 21.5 to 22.5 is 19.74% Cumulative probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible values, weighted by their probability. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density function it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must integrate to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values. ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument `normed` to control this. ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*. 
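To make the distinction concrete, here is a small numerical check; it is only a sketch, and the grid with its 0.05 spacing is an arbitrary choice. The values of a Gaussian *function* are densities, so their raw sum depends on the sample spacing, while the sum times the spacing approximates the integral of the pdf, which is one. With `normed=True` the sampled values are rescaled to sum to one, like the discrete distributions of the previous chapter.

```python
import numpy as np
from filterpy.stats import gaussian

dx = 0.05
xs = np.arange(15, 30, dx)

g = gaussian(xs, 23, 1, normed=False)   # Gaussian function: raw density values
print(np.sum(g))                        # about 20, depends on dx, not a probability
print(np.sum(g) * dx)                   # about 1.0, approximates the integral
print(np.sum(gaussian(xs, 23, 1)))      # normed=True: rescaled to sum to 1.0
```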
###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. 
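We can check these percentages with the `norm_cdf` function used earlier. This is a quick sketch, and the choice of $\mathcal{N}(22, 4)$ is arbitrary, since the percentages are the same for any mean and variance.

```python
from filterpy.stats import norm_cdf

mu, sigma = 22., 2.   # the N(22, 4) thermometer example
for k in (1, 2, 3):
    p = norm_cdf((mu - k*sigma, mu + k*sigma), mu, sigma**2)
    print(f'within {k} standard deviation(s): {p*100:.1f}%')
```

This prints approximately 68.3%, 95.4%, and 99.7%.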
As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansThe discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. A remarkable property of Gaussians is that the sum of two independent normal variables (https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables) is also normally distributed! The product is not Gaussian, but it is proportional to a Gaussian. Therefore we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall that *function* in this context means the values are not guaranteed to sum to one).Before we do the math, let's test this visually. ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.8, var=.1) g2 = gaussian(x, mean=1.3, var=.2) plt.plot(x, g1, x, g2) g = g1 * g2 # element-wise multiplication g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output _____no_output_____ ###Markdown Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution.Gaussians are nonlinear functions. Typically, if you multiply nonlinear functions you end up with a different type of function. For example, the shape of multiplying two sines is very different from `sin(x)`. ###Code x = np.arange(0, 4*np.pi, 0.01) plt.plot(np.sin(1.2*x)) plt.plot(np.sin(1.2*x) * np.sin(2*x)); ###Output _____no_output_____ ###Markdown But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. 
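As a quick numerical check, and only a sketch that reuses the two Gaussians from the plot above, we can estimate the mean and variance of the normalized product and compare them with the closed-form expressions given next.

```python
import numpy as np
from filterpy.stats import gaussian

x = np.arange(-1, 3, 0.01)
g = gaussian(x, mean=0.8, var=0.1) * gaussian(x, mean=1.3, var=0.2)
g = g / np.sum(g)                        # normalize, as in the plot above

mean = np.sum(x * g)
var = np.sum((x - mean)**2 * g)
print(f'numerical:   mean={mean:.4f}, var={var:.4f}')

mu = (0.1*1.3 + 0.2*0.8) / (0.1 + 0.2)   # closed-form mean for the product
s2 = (0.1*0.2) / (0.1 + 0.2)             # closed-form variance for the product
print(f'closed form: mean={mu:.4f}, var={s2:.4f}')
```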
The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$At the end of the chapter I derive these equations. However, understanding the derivation is not very important. Putting it all TogetherNow we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to use Gaussians.In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so: ###Code def normalize(p): return p / sum(p) def update(likelihood, prior): return normalize(likelihood * prior) prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2])) likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16])) posterior = update(likelihood, prior) book_plots.bar_plot(posterior) ###Output _____no_output_____ ###Markdown In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory. But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart. ###Code xs = np.arange(0, 10, .01) def mean_var(p): x = np.arange(len(p)) mean = np.sum(p * x,dtype=float) var = np.sum((x - mean)**2 * p) return mean, var mean, var = mean_var(posterior) book_plots.bar_plot(posterior) plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r'); print('mean: %.2f' % mean, 'var: %.2f' % var) ###Output mean: 5.88 var: 1.24 ###Markdown This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.Next, recall that our filter implements the update function with```pythondef update(likelihood, prior): return normalize(likelihood * prior)```If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$which is three multiplications and two divisions. Bayes TheoremIn the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information. We implemented the `update()` function with this probability calculation:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ It turns out that this is Bayes' theorem. 
In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$where $\| \cdot\|$ expresses normalizing the term.We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes theorem is$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. So, let's plug that into the equation and solve it.$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:```pythondef update(likelihood, prior): posterior = prior * likelihood p(z|x) * p(x) return normalize(posterior)```The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. 
So, you might see Bayes' theorem written as$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece ](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step i, what is our probable state given a measurement. That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy. Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. 
As a reminder, here is the code that computes this equation```pythonfor i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N result[i] += prob_dist[index] * kernel[k]``` Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 3.026 7.51 -2.588 6.081 -3.413 -1.11 6.484 5.935 2.313 1.912 1.895 7.964 4.876 -0.841 -0.174] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. 
For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test scores distributions. ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like what I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at distribution generated with the Student's $t$-distribution. 
I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with```pythonfrom filterpy.stats import rand_student_t```While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from an exponential distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from an exponential distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). the `scipy.stats` module contains the function `describe` which computes these statistics, among others. 
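If you want the individual statistics rather than the whole summary, `scipy.stats` also exposes `skew()` and `kurtosis()` functions directly. Here is a small sketch; the samples are synthetic and only meant to show the sign of the skew for a symmetric versus a lopsided distribution, while `describe()` below reports these values along with the rest of the summary:

```python
import numpy as np
from scipy.stats import skew, kurtosis

np.random.seed(13)
symmetric = np.random.randn(10000)           # roughly zero skew
lopsided = np.random.exponential(1., 10000)  # strongly right-skewed

print('skew:     {:.2f} vs {:.2f}'.format(skew(symmetric), skew(lopsided)))
print('kurtosis: {:.2f} vs {:.2f}'.format(kurtosis(symmetric), kurtosis(lopsided)))
```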
###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.1170056026895896, 1.0389413316321683), mean=-0.14317693801053674, variance=0.63137454114842, skewness=0.23966670696111078, kurtosis=-1.3702119526058378) DescribeResult(nobs=300000, minmax=(-4.262303362805642, 4.603650808299195), mean=-0.00040808441709135014, variance=0.998953112076118, skewness=-0.0010979017079029859, kurtosis=-0.00030025117171517124) ###Markdown [Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb) Gaussian Probabilities ###Code #format the book %matplotlib notebook from __future__ import division, print_function from book_format import load_style load_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. As you might guess from the chapter name, Gaussian distributions provide all of these features. Mean, Variance, and Standard Deviations Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get 1 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). *Random* does not mean the process is nondeterministic, only that we lack information. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining things, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. 
This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. In later chapters we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|Some sources call this the *probability function*. Using ordinary function notation, we would write:$$P(X{=}4) = f(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$. Some texts use $Pr$ or $Prob$ instead of $P$.Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int P(X{=}u) \,du= 1$$for continuous distributions. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we will want to know the *average* height of the students. 
We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code import numpy as np x = [1.85, 2.0, 1.7, 1.9, 1.6] print(np.mean(x)) ###Output 1.81 ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than te set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.85, 2.0, 1.7, 1.9, 1.6} is 1.85, because 1.85 is the third element of this set after being sorted. ###Code print(np.median(x)) ###Output 1.85 ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. 
This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \sum_{i=1}^n \frac{1}{n}x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter. Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X)) print(np.mean(Y)) print(np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = E[(X - \mu)^2]$$Ignoring the squared terms for a moment, you can see that the variance is the *expected value* for how much the sample space ($X$) varies from the mean (squared, of course). We have the formula for the expected value $E[X] = \sum\limits_{i=1}^n p_ix_i$, and we will assume that any height is equally probable, so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(np.var(X), "meters squared") ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. 
In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from book_format import set_figsize, figsize from code.book_plots import interactive_plot from code.gaussian_internal import plot_height_std import matplotlib.pyplot as plt with interactive_plot(): plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] with interactive_plot(): plot_height_std(data, lw=2) print('mean = {:.3f}'.format(np.mean(data))) print('std = {:.3f}'.format(np.std(data))) ###Output _____no_output_____ ###Markdown We can see by eye that roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8.We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.4f} m'.format(np.std(Y))) ###Output std of Y is 0.3899 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. 
We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$
###Code
print(np.std(Z))
###Output
0.0
###Markdown
Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or only women will be smaller than a class with both sexes. This is true for other factors as well. Well-nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean height of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school! It's too early to understand why, but we will not normally be faced with these problems in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$
###Code
with interactive_plot():
    X = [3, -3, 3, -3]
    mean = np.average(X)
    for i in range(len(X)):
        plt.plot([i, i], [mean, X[i]], color='k')
    plt.axhline(mean)
    plt.xlim(-1, len(X))
    plt.tick_params(axis='x', labelbottom=False)  # hide the meaningless x tick labels
###Output
_____no_output_____
###Markdown
If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case the absolute values also give us $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same result. If we use the squared differences we get a variance of 12.25 for $Y$ versus 9 for $X$ (standard deviations of 3.5 versus 3), which reflects $Y$'s larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have $X = [1,-1,1,-2,3,2,100]$.
###Code
X = [1, -1, 1, -2, 3, 2, 100]
print('Variance of X = {:.2f}'.format(np.var(X)))
###Output
Variance of X = 1210.69
###Markdown
Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the computation. I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). 
Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plt.figure() ax = plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf') ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. In the chart above, a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.1 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. In fact, this is the curve for the student heights given earlier. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter. They were not perfect Gaussian curves, but they were similar, as in the plot below. We will be using Gaussians to replace the discrete probabilities used in that chapter! ###Code import code.book_plots as book_plots belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] with interactive_plot(): book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code with interactive_plot(): ax = plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)') ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis.You may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. This is true, but this is a common limitation of mathematical modeling. 
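For instance, under the speed model just plotted the probability of a negative speed is not zero, merely vanishingly small. Here is a quick check with `scipy.stats.norm`, using the same mean of 120 kph and standard deviation of 17 kph as the plot above:

```python
from scipy.stats import norm

speed = norm(120, 17)   # the speed model: mean 120 kph, std 17 kph

# probability mass the model assigns to impossible (negative) speeds
print(speed.cdf(0))     # roughly 8e-13: absurdly small, but not zero
```

The model happily assigns probability to speeds that cannot occur; the error is simply too small to matter for most purposes.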
“The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will see these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.```python%load -s gaussian stats.pydef gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. """ return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) / math.sqrt(2*math.pi*var))```We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ###Code from filterpy.stats import gaussian, norm_cdf with interactive_plot(): ax = plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$') ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can "sort of" think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? 
It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22 is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code print('Probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of range 21.5 to 22.5 is 19.74% Probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements for over any range.> Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. 
On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically: ###Code import numpy as np import matplotlib.pyplot as plt xs = np.arange(15, 30, 0.05) with interactive_plot(): plt.plot(xs, gaussian(xs, 23, 0.05), label='$\sigma^2$=0.05', c='b') plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':', c='b') plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--', c='b') plt.legend() ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.05$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. Our believe that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.05$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=5$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.05$ represents a very accurate thermometer, and curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. 
Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$.The following graph depicts the relationship between the standard deviation and the normal distribution.
###Code
from code.gaussian_internal import display_stddev_plot
with interactive_plot():
    display_stddev_plot()
###Output
_____no_output_____
###Markdown
Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner.
###Code
import math
from ipywidgets import interact, interactive, fixed  # IPython.html.widgets was moved into the ipywidgets package
set_figsize(y=3)

def plt_g(mu, variance):
    # plot a Gaussian pdf for the slider-selected mean and variance
    plt.figure()
    xs = np.arange(2, 8, 0.1)
    ys = gaussian(xs, mu, variance)
    plt.plot(xs, ys)
    plt.ylim((0, 1))

interact(plt_g, mu=(0., 10), variance=(.2, 1.));
###Output
_____no_output_____
###Markdown
Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansA remarkable property of Gaussian random variables is that the sum of two independent Gaussian random variables is also normally distributed! The product of two Gaussian functions is not itself a Gaussian, but it is proportional to one.The discrete Bayes filter works by multiplying and adding probabilities. I'm getting ahead of myself, but the Kalman filter uses Gaussians instead of arrays of probabilities, and the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. The Gaussian is a nonlinear function, and typically if you multiply a nonlinear function with itself you end up with a differently shaped function. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the product of two Gaussian curves is still Gaussian in shape (it is merely unnormalized), and the sum of two Gaussian random variables is exactly Gaussian. This is a fundamental property, and a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The remainder of this section is optional. I will derive the equations for the sum and product of two Gaussians. You will not need to understand this material to understand the rest of the book, so long as you accept the results. Product of GaussiansThe product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$You can find this result by multiplying the equations for two Gaussians together and combining terms. The algebra gets messy. I will derive it using Bayes' theorem instead. We can state the problem as: let the prior be $N(\bar\mu, \bar\sigma^2)$, and let the measurement $z$ have the likelihood $N(z, \sigma_z^2)$. What is the posterior $x$ given the measurement $z$?Write the posterior as $P(x \mid z)$. 
Now we can use Bayes' theorem to state$$P(x \mid z) = \frac{P(z \mid x)P(x)}{P(z)}$$$P(z)$ is a normalizing constant, so we can create a proportionality$$P(x \mid z) \propto P(z\mid x)P(x)$$Now we substitute in the equations for the Gaussians, which are$$P(z \mid x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]$$$$P(x) = \frac{1}{\sqrt{2\pi\bar\sigma^2}}\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]$$We can drop the leading terms, as they are constants, giving us$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]\\&\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z-x)^2+\sigma_z^2(x-\bar\mu)^2]\Big]\end{aligned}$$Now we multiply out the squared terms and group in terms of the posterior $x$.$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z^2 -2xz + x^2) + \sigma_z^2(x^2 - 2x\bar\mu+\bar\mu^2)]\Big ] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z) + (\bar\sigma^2z^2+\sigma_z^2\bar\mu^2)]\Big ]\end{aligned}$$The last parenthesized term does not contain the posterior variable $x$, so it can be treated as a constant and discarded.$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z)}{\sigma_z^2\bar\sigma^2}\Big ]$$Divide numerator and denominator by $\bar\sigma^2+\sigma_z^2$ to get$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2-2x(\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$Proportionality lets us create or delete constants at will, so we can complete the square and factor this into$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{(x-\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})^2}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$A Gaussian is$$N(\mu,\, \sigma^2) \propto \exp\Big [-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\Big ]$$So we can see that $P(x \mid z)$ has a mean of$$\mu_\mathtt{posterior} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$and a variance of$$\sigma^2_\mathtt{posterior} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}$$I've dropped the constants, and so the result is not a normal, but proportional to one. Bayes' theorem normalizes with the $P(z)$ divisor, ensuring that the result is normal. We normalize in the update step of our filters, ensuring the filter estimate is Gaussian.$$\mathcal N_1 = \| \mathcal N_2\cdot \mathcal N_3\|$$ Sum of GaussiansThe sum of two independent Gaussian random variables is also Gaussian, with$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$There are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities. To find the density function of the sum of two independent Gaussian random variables we convolve their density functions. The densities are continuous functions, so the convolution is an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent we can compute this with$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$This is the equation for a convolution. 
Now we just do some math:$$\begin{aligned}p(x) &= \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz \\&= \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_p}\exp\left[-\frac{(x - z - \mu_p)^2}{2\sigma^2_p}\right]\frac{1}{\sqrt{2\pi}\sigma_z}\exp\left[-\frac{(z - \mu_z)^2}{2\sigma^2_z}\right] \, dz\end{aligned}$$Completing the square in $z$ splits the integrand into a factor that does not depend on $z$ times a normal distribution in $z$:$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right] \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(z - \frac{\sigma_z^2(x-\mu_p) + \sigma_p^2\mu_z}{\sigma_p^2 + \sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dz$$The expression inside the integral is a normal distribution in $z$. The integral of a normal distribution over its full range is one, hence the integral evaluates to one. This gives us$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]$$This is in the form of a normal, where$$\begin{gathered}\mu_x = \mu_p + \mu_z \\\sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square\end{gathered}$$ Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy.
###Code
from scipy.stats import norm
import filterpy.stats
print(norm(2, 3).pdf(1.5))
print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3))
###Output
0.131146572034
0.131146572034
###Markdown
The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so:
###Code
n23 = norm(2, 3)
print('pdf of 1.5 is %.4f' % n23.pdf(1.5))
print('pdf of 2.5 is also %.4f' % n23.pdf(2.5))
print('pdf of 2 is %.4f' % n23.pdf(2))
###Output
pdf of 1.5 is 0.1311
pdf of 2.5 is also 0.1311
pdf of 2 is 0.1330
###Markdown
The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. 
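One practical note: `rvs()` draws different samples every time it is called. If you need repeatable samples, for unit tests for example, you can pass a seed through the `random_state` argument of the frozen distribution's `rvs()` method:

```python
from scipy.stats import norm

n23 = norm(2, 3)

# the same seed always produces the same draws
print(n23.rvs(size=3, random_state=42))
print(n23.rvs(size=3, random_state=42))  # identical to the line above
```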
###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 6.7 5.323 3.043 3.361 4.981 3.122 2.841 0.552 6.937 5.474 0.829 1.398 0.555 -3.212 1.555] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Fat TailsEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. The resulting distributions are called *fat tailed*. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an infinitesimal chance of getting a score of $-10^{300}$ or $10^{32986}$. The *tails* of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution. ###Code xs = np.arange(10,100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] with interactive_plot(): plt.plot(xs, ys, label='var=0.2') plt.xlim((0,120)) plt.ylim(0, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish incredibly minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a [*fat tail distribution*](https://en.wikipedia.org/wiki/Fat-tailed_distribution). Kalman filters use sensors to measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. 
It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] with interactive_plot(): plt.plot(zs, lw=1) ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a fat tailed distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] with interactive_plot(): plt.plot(zs, lw=1) ###Output _____no_output_____ ###Markdown [Table of Contents](./table_of_contents.ipynb) Probabilities, Gaussians, and Bayes' Theorem ###Code from __future__ import division, print_function %matplotlib inline #format the book import book_format book_format.set_style() ###Output /Users/giacomo/Documents/Projects/Kalman-and-Bayesian-Filters-in-Python/venv/bin/../lib/python3.6/_collections_abc.py:841: MatplotlibDeprecationWarning: The examples.directory rcparam was deprecated in Matplotlib 3.0 and will be removed in 3.2. In the future, examples will be found relative to the 'datapath' directory. self[key] = other[key] ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. 
Mean, Variance, and Standard DeviationsMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|We denote this distribution with a lower case p: $p(x)$. 
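As a quick aside, the same table can be written as a NumPy array whose entries sum to one. This is just an illustrative sketch, not code from the book:

```python
import numpy as np

# A sketch (not the book's code): the fair-die table above as an array.
faces = np.arange(1, 7)    # the sample space {1, 2, 3, 4, 5, 6}
p = np.full(6, 1/6)        # p(x) = 1/6 for every face

print(p[faces == 4])       # the probability of rolling a 4
print(p.sum())             # the probabilities sum to 1 (up to floating point)
```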
Using ordinary function notation, we would write:$$P(X{=}4) = p(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observent. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution or a Probability Mass Function (PMF) as opposed to the Probability Density Function (PDF) for the continuous case. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or *average* value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. A more technically correct word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. 
If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown There is a subtle difference in the meaning of the words *mean* and *average*. The *mean* is a specific term used to indicate the operation described above, i.e. summing all the values and dividing by their number. The *average*, on the other hand, is a more colloquial term used to refer to any measure of central tendency, such as the mode and the median, which are described below.The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true. ###Code np.median(x) ###Output _____no_output_____ ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. 
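As a quick check, here is the same computation in NumPy, using only the values and probabilities from the example above (a sketch, not code from the book):

```python
import numpy as np

# Expected value of the example above: values 1, 3, 5 with
# probabilities 0.80, 0.15, 0.05.
x = np.array([1, 3, 5])
p = np.array([0.80, 0.15, 0.05])

print(np.sum(p * x))   # approximately 1.5
```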
The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically. ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact values requires an infinite sample size. ExerciseWhat is the expected value of a die role? SolutionEach side is equally likely, so each has a probability of 1/6. Hence$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ ExerciseGiven the uniform continuous distribution$$f(x) = \frac{1}{b - a}$$compute the expected value for $a=0$ and $b=20$. Solution$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. 
The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print("{:.2f} meters squared".format(np.var(X))) ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print('mean = {:.3f}'.format(mean)) print('std = {:.3f}'.format(std)) ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. 
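Before moving on to the other classes, here is the variance formula applied directly to $X$, confirming that it agrees with `numpy.var()`. This is just a restatement of the computation above as a sketch, not code from the book:

```python
import numpy as np

# Variance computed straight from its definition, VAR(X) = mean of (x - mu)^2,
# using the class X from the example above.
X = np.array([1.8, 2.0, 1.7, 1.9, 1.6])
mu = X.mean()
var_by_hand = np.sum((X - mu)**2) / len(X)

print(var_by_hand, np.var(X))   # both approximately 0.02
```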
For now let's compute the standard deviation for

$$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$

The mean of $Y$ is $\mu=1.8$ m, so

$$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$

We will verify that with NumPy with

###Code
print('std of Y is {:.2f} m'.format(np.std(Y)))
###Output
std of Y is 0.39 m
###Markdown
This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.

Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.

$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$

###Code
print(np.std(Z))
###Output
0.0
###Markdown
Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or only women will be smaller than that of a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account.

I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second clustered around the mean height of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we would likely end up with desks that fit neither the males nor the females in the school! We will not consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with them.

Why the Square of the Differences

Why are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$

###Code
X = [3, -3, 3, -3]
mean = np.average(X)
for i in range(len(X)):
    plt.plot([i, i], [mean, X[i]], color='k')
plt.axhline(mean)
plt.xlim(-1, len(X))
plt.tick_params(axis='x', labelbottom=False)
###Output
_____no_output_____
###Markdown
If we didn't take the square of the differences the signs would cancel everything out:

$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$

This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value instead? We can see by inspection that the result is $12/4=3$, which is certainly correct: each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case the absolute values also give us $12/4=3$. $Y$ is clearly more spread out than $X$, yet the computation yields the same value. If we use the formula with squared differences we get a variance of 12.25 for $Y$ versus 9 for $X$ (standard deviations of 3.5 and 3), which reflects $Y$'s larger variation.

This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term.
For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print('Variance of X with outlier = {:6.2f}'.format(np.var(X))) print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1]))) ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem.I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. In this book we will always use variance and standard deviation as defined by Gauss.The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. We can tell from the chart student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. 
Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. 
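Written out directly in Python the equation is only a couple of lines. This is an illustrative transcription only, not the implementation used in this book; the `gaussian()` function mentioned in the next paragraph plays the same role, taking the variance rather than $\sigma$:

```python
import math

# A direct transcription of the Gaussian pdf above (for illustration only).
def gaussian_pdf(x, mu, sigma):
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

print(gaussian_pdf(22, mu=22, sigma=2))   # the height of the peak of N(22, 4)
```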
Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. Shorn of the constants, you can see it is a simple exponential: $$f(x)\propto e^{-x^2}$$which has the familiar bell curve shape ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now. ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.$$M = \iiint_R p(x,y,z)\, dV$$We do the same with *probability density*. If you want to know the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of a the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. Thinking back to the rock, what is the weight of an single point on the rock? An infinitesimal point must have no weight. 
It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$This is called the *cumulative probability distribution* or *cumulative distribution function*, commonly abbreviated *cdf*.I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code from filterpy.stats import norm_cdf print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Cumulative probability of range 21.5 to 22.5 is 19.74% Cumulative probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve, the mean is also the median. Additionally, it also is the mode, since it is the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean, the median and the mode. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements for over any range.Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the entire area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. 
this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values. ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument`normed` to control this. ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*. ###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. 
For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansThe discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. A remarkable property of Gaussian distributions is that the product of two Gaussian **distributions** is a Gaussian distribution, if normalized. There we can say that the result of multipying two Gaussian distributions is a Gaussian function (recall function in this context means that the property that the values sum to one is not guaranteed).Before we do the math, let's test product of two Gaussian distributions visually. ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.8, var=.1) g2 = gaussian(x, mean=1.3, var=.2) plt.plot(x, g1, x, g2) g = g1 * g2 # element-wise multiplication g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output _____no_output_____ ###Markdown Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution.Gaussians are nonlinear functions. Typically, if you multiply a nonlinear equations you end up with a different type of function. 
For example, the shape of multiplying two sins is very different from `sin(x)`. ###Code x = np.arange(0, 4*np.pi, 0.01) plt.plot(np.sin(1.2*x)) plt.plot(np.sin(1.2*x) * np.sin(2*x)); ###Output _____no_output_____ ###Markdown But the result of multiplying two gaussians distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use gaussians *because* they are computationally nice. The product of two independent gaussian distributions is a gaussian function that, when normalized, becomes a gaussian distribution with a mean and a variance that can be computed analytically as follows: $$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$As showed in the code below, the distribution computed by an element-wise multiplication of two gaussian distributions followed by a normalization of the result is the same as the distribution created with the analytic computation of the $\mu$ and $\sigma^2$ parameters. You should, in fact, get a number very close to zero when computing the difference between the two. ###Code mu1 = 0.8 var1 = 0.1 mu2 = 1.3 var2 = 0.2 # Element-wise multiplication x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=mu1, var=var1) g2 = gaussian(x, mean=mu2, var=var2) g_a = g1 * g2 g_a = g_a / sum(g_a) # Analytic computation. Note: this *includes* normalization mu3 = (var1 * mu2 + var2 * mu1) / (var1 + var2) var3 = var1 * var2 / (var1 + var2) g_b = gaussian(x, mean=mu3, var=var3) # Show that g_a and g_b are the same np.sum(g_a - g_b) # This is basically 0 ###Output _____no_output_____ ###Markdown The sum of two independent Gaussian random **variables** is another Gaussian variable and is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$At the end of the chapter I derive these equations. However, understanding the deriviation is not very important. An important clarification must be done here. Above it is first shown how to compute the normalized product of two gaussian distributions, not variables. Then it is shown how to compute the distribution of the sum of two independent gaussian random variables, not ditributions.These operations are the only ones required when studying the math behind the Kalman filter, but one must pay attention not to confuse operations on random variables with operations on distributions that represent a random variable.For the sake of completeness: - The product of two independent Gaussian random variables is not always (or, is almost never) a Gaussian random variable but instead the difference of two chi-square random variables. - The sum of two Gaussian probability density functions is not always (or, is almost never) proportional to a Gaussian probability density function. The latter can be easily shown graphically: ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.2, var=.1) g2 = gaussian(x, mean=2.3, var=.03) g = g1 + g2 g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output _____no_output_____ ###Markdown Which clearly isn't a gaussian distribution Putting it all TogetherNow we are ready to talk about Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to use Gaussians.In the previous chapter we represented probability distributions with an array. 
We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so: ###Code def normalize(p): return p / sum(p) def update(likelihood, prior): return normalize(likelihood * prior) prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2])) likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16])) posterior = update(likelihood, prior) book_plots.bar_plot(posterior) ###Output _____no_output_____ ###Markdown In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory. But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart. ###Code xs = np.arange(0, 10, .01) def mean_var(p): x = np.arange(len(p)) mean = np.sum(p * x,dtype=float) var = np.sum((x - mean)**2 * p) return mean, var mean, var = mean_var(posterior) book_plots.bar_plot(posterior) plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r'); print('mean: %.2f' % mean, 'var: %.2f' % var) ###Output mean: 5.88 var: 1.24 ###Markdown This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.Next, recall that our filter implements the update function with```pythondef update(likelihood, prior): return normalize(likelihood * prior)```If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$which is three multiplications and two divisions. Bayes TheoremIn the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information. We implemented the `update()` function with this probability calculation:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$where $\| \cdot\|$ expresses normalizing the term.We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. 
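Because the normalized product of two Gaussians has the closed form given above, the whole update collapses to a little arithmetic on two pairs of numbers. The following is a sketch that assumes each distribution is represented as a `(mean, variance)` tuple; it is not the implementation used later in the book:

```python
def gaussian_update(prior, likelihood):
    """Bayes update for Gaussians given as (mean, variance) tuples (a sketch)."""
    (mu1, var1), (mu2, var2) = prior, likelihood
    mean = (var1 * mu2 + var2 * mu1) / (var1 + var2)
    var = (var1 * var2) / (var1 + var2)
    return mean, var

# The posterior mean is pulled toward whichever Gaussian has the smaller variance.
print(gaussian_update(prior=(10., 0.2**2), likelihood=(11., 0.1**2)))
```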
We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes theorem is$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. So, let's plug that into the equation and solve it.$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:```pythondef update(likelihood, prior): posterior = prior * likelihood p(x) * p(z|x) return normalize(posterior)```The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece ](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. 
We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step i, what is our probable state given a measurement. That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy. Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation```pythonfor i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N result[i] += prob_dist[index] * kernel[k]``` Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. 
So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 6.507 2.2 1.447 0.167 1.132 5.392 -6.473 -1.424 4.103 -3.043 0.083 -1.452 9.799 4.942 -3.933] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. 
This is because the distribution assigns a nonzero probability density to *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long. But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test score distributions.
###Code
xs = np.arange(10, 100, 0.05)
ys = [gaussian(x, 90, 13**2) for x in xs]
plt.plot(xs, ys, label='$\sigma=13$')
plt.xlim(0, 120)
plt.ylim(-0.02, 0.09);
###Output
_____no_output_____
###Markdown
The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with:
###Code
from numpy.random import randn
def sense():
    return 10 + randn()*2
###Output
_____no_output_____
###Markdown
Let's plot that signal and see what it looks like.
###Code
zs = [sense() for i in range(5000)]
plt.plot(zs, lw=1);
###Output
_____no_output_____
###Markdown
That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99.7% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it.
###Code
import random
import math

def rand_student_t(df, mu=0, std=1):
    """return random number distributed by Student's t 
    distribution with `df` degrees of freedom with the specified 
    mean and standard deviation. 
""" x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with```pythonfrom filterpy.stats import rand_student_t```While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from an exponential distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from an exponential distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). the `scipy.stats` module contains the function `describe` which computes these statistics, among others. ###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.0624865951487876, 1.225024725399776), mean=0.2833249469135627, variance=0.5464786163826505, skewness=-0.2254753115307131, kurtosis=-0.9115659404538272) DescribeResult(nobs=300000, minmax=(-4.6646680625809305, 4.842965825063828), mean=0.0006714261947325837, variance=1.0002659850520095, skewness=0.0036545182234206473, kurtosis=0.004036285807858864) ###Markdown [Table of Contents](./table_of_contents.ipynb) Probabilities, Gaussians, and Bayes' Theorem ###Code %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). 
For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard DeviationsMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. 
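Before moving on, we can make the claim that each die outcome has probability 1/6 concrete with a quick simulation (a sketch added for illustration; it is not part of the original text). With enough rolls the relative frequency of each outcome settles near 1/6:

```python
import numpy as np

rolls = np.random.randint(1, 7, size=100_000)   # simulate a fair six sided die
for face in range(1, 7):
    print(face, np.count_nonzero(rolls == face) / len(rolls))
```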
Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:$$P(X{=}4) = p(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observent. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. 
There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true. ###Code np.median(x) ###Output _____no_output_____ ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. 
For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically. ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact values requires an infinite sample size. ExerciseWhat is the expected value of a die roll? SolutionEach side is equally likely, so each has a probability of 1/6. Hence$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ ExerciseGiven the uniform continuous distribution$$f(x) = \frac{1}{b - a}$$compute the expected value for $a=0$ and $b=20$. Solution$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). 
The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(f"{np.var(X):.2f} meters squared") ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print(f"std {np.std(X):.4f}") print(f"var {np.std(X)**2:.4f}") ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. 
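Before looking at the larger class, here is a short sketch (added for illustration) that computes the variance directly from the formula above rather than calling `np.var()`, using the same five heights; it is a useful sanity check that the formula and the library agree:

```python
import numpy as np

X = [1.8, 2.0, 1.7, 1.9, 1.6]
mu = sum(X) / len(X)
var = sum((x - mu)**2 for x in X) / len(X)   # mean of the squared deviations

print(f'mean {mu:.2f}, var {var:.4f}, std {var**0.5:.4f}')
print(np.var(X), np.std(X))                  # NumPy computes the same values
```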
###Code
from numpy.random import randn
data = 1.8 + randn(100)*.1414
mean, std = data.mean(), data.std()

plot_height_std(data, lw=2)
print(f'mean = {mean:.3f}')
print(f'std = {std:.3f}')
###Output
_____no_output_____
###Markdown
By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code.
###Code
np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100.
###Output
_____no_output_____
###Markdown
We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with
###Code
print(f'std of Y is {np.std(Y):.2f} m')
###Output
std of Y is 0.39 m
###Markdown
This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger. Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$
###Code
print(np.std(Z))
###Output
0.0
###Markdown
Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school! We will not consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues.

Why the Square of the Differences

Why are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$
###Code
X = [3, -3, 3, -3]
mean = np.average(X)
for i in range(len(X)):
    plt.plot([i ,i], [mean, X[i]], color='k')
plt.axhline(mean)
plt.xlim(-1, len(X))
plt.tick_params(axis='x', labelbottom=False)
###Output
_____no_output_____
###Markdown
If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we also get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same value. 
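A quick numerical check (a sketch added for illustration) makes the difference visible: the mean absolute deviation is 3 for both sets, while the squared measure separates them:

```python
import numpy as np

X = np.array([3, -3, 3, -3])
Y = np.array([6, -2, -3, 1])

for name, d in (('X', X), ('Y', Y)):
    mad = np.mean(np.abs(d - d.mean()))   # mean absolute deviation
    print(f'{name}: abs deviation {mad:.2f}, '
          f'variance {np.var(d):.2f}, std {np.std(d):.2f}')
```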
If we use the formula with squares we get a variance of 12.25 for $Y$ (a standard deviation of 3.5, versus 3 for $X$), which reflects its larger variation. This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have:
###Code
X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100]
print(f'Variance of X with outlier = {np.var(X):6.2f}')
print(f'Variance of X without outlier = {np.var(X[:-1]):6.2f}')
###Output
Variance of X with outlier = 621.45
Variance of X without outlier = 2.03
###Markdown
Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem. I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [4]. In this book we will always use variance and standard deviation as defined by Gauss. The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way.

Gaussians

We are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.

> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.

Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about.
###Code
from filterpy.stats import plot_gaussian_pdf
plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf');
###Output
_____no_output_____
###Markdown
This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. We can tell from the chart that a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.

> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].

This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. 
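One intuition for why this shape shows up so often, a consequence of the central limit theorem discussed elsewhere in this chapter, is that sums of many independent random effects tend toward a Gaussian. A quick simulation (a sketch added for illustration, not code from the text):

```python
import numpy as np
import matplotlib.pyplot as plt

# sum 12 independent uniform random numbers, repeated many times
sums = np.random.uniform(0, 1, size=(100_000, 12)).sum(axis=1)
plt.hist(sums, bins=100, density=True);   # the histogram is close to a bell curve
```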
I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). 
It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. Shorn of the constants, you can see it is a simple exponential: $$f(x)\propto e^{-x^2}$$which has the familiar bell curve shape ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now. ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.$$M = \iiint_R p(x,y,z)\, dV$$We do the same with *probability density*. If you want to know the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. 
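We can check this numerically with a quick sketch (added for illustration) that integrates the `gaussian()` function with `scipy.integrate.quad`, using the same mean of 22 and variance of 4:

```python
from scipy.integrate import quad
from filterpy.stats import gaussian

pdf = lambda x: gaussian(x, 22, 4)   # mean 22, variance 4

print(quad(pdf, 22, 22)[0])          # a zero-width interval has probability 0
print(quad(pdf, 21.9, 22.1)[0])      # a small range has a small, nonzero probability
```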
Thinking back to the rock, what is the weight of a single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero. In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1. We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$This is called the *cumulative probability distribution*, commonly abbreviated *cdf*. I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute
###Code
from filterpy.stats import norm_cdf
print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format(
      norm_cdf((21.5, 22.5), 22,4)*100))
print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format(
      norm_cdf((23.5, 24.5), 22,4)*100))
###Output
Cumulative probability of range 21.5 to 22.5 is 19.74%
Cumulative probability of range 23.5 to 24.5 is 12.10%
###Markdown
The mean ($\mu$) is what it sounds like — the average of all possible values. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range. Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example.

The Variance and Belief

Since this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$)
###Code
print(norm_cdf((-1e8, 1e8), mu=0, var=4))
###Output
1.0
###Markdown
This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. 
To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values. ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument`normed` to control this. ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*. ###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. 
For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data. The following graph depicts the relationship between the standard deviation and the normal distribution.
###Code
from kf_book.gaussian_internal import display_stddev_plot
display_stddev_plot()
###Output
_____no_output_____
###Markdown
Interactive Gaussians

For those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner.
###Code
import math
from ipywidgets import interact, FloatSlider

def plt_g(mu,variance):
    plt.figure()
    xs = np.arange(2, 8, 0.01)
    ys = gaussian(xs, mu, variance)
    plt.plot(xs, ys)
    plt.ylim(0, 0.04)
    plt.show()

interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), 
         variance=FloatSlider(value = .03, min=.01, max=1.));
###Output
_____no_output_____
###Markdown
Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified.

Computational Properties of Normally Distributed Random Variables

The discrete Bayes filter works by multiplying and adding arbitrary probability random variables. The Kalman filter uses Gaussians instead of arbitrary random variables, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussian random variables (Gaussian random variable is just another way to say normally distributed random variable). A remarkable property of Gaussian random variables is that the sum of two independent Gaussian random variables is also normally distributed! The product is not Gaussian, but proportional to a Gaussian. Thus we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall that a function, in this context, is not guaranteed to have values that sum to one). Wikipedia has a good article on this property, and I also prove it at the end of this chapter. https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables

Before we do the math, let's test this visually. 
###Code
x = np.arange(-1, 3, 0.01)
g1 = gaussian(x, mean=0.8, var=.1)
g2 = gaussian(x, mean=1.3, var=.2)
plt.plot(x, g1, x, g2)

g = g1 * g2  # element-wise multiplication
g = g / sum(g)  # normalize
plt.plot(x, g, ls='-.');
###Output
_____no_output_____
###Markdown
Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution. Gaussians are nonlinear functions. Typically, if you multiply nonlinear equations you end up with a different type of function. For example, the shape of multiplying two sines is very different from `sin(x)`.
###Code
x = np.arange(0, 4*np.pi, 0.01)
plt.plot(np.sin(1.2*x))
plt.plot(np.sin(1.2*x) * np.sin(2*x));
###Output
_____no_output_____
###Markdown
But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two Gaussian random variables is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$At the end of the chapter I derive these equations. However, understanding the derivation is not very important.

Putting it all Together

Now we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to use Gaussians. In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so:
###Code
def normalize(p):
    return p / sum(p)

def update(likelihood, prior):
    return normalize(likelihood * prior)

prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2]))
likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16]))
posterior = update(likelihood, prior)
book_plots.bar_plot(posterior)
###Output
_____no_output_____
###Markdown
In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory. But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart.
###Code
xs = np.arange(0, 10, .01)

def mean_var(p):
    x = np.arange(len(p))
    mean = np.sum(p * x,dtype=float)
    var = np.sum((x - mean)**2 * p)
    return mean, var

mean, var = mean_var(posterior)
book_plots.bar_plot(posterior)
plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r');
print('mean: %.2f' % mean, 'var: %.2f' % var)
###Output
mean: 5.88 var: 1.24
###Markdown
This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. 
But a real problem could have millions of numbers, yet still only require two numbers to describe it.Next, recall that our filter implements the update function with```pythondef update(likelihood, prior): return normalize(likelihood * prior)```If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$which is three multiplications and two divisions. Bayes TheoremIn the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information. We implemented the `update()` function with this probability calculation:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$where $\| \cdot\|$ expresses normalizing the term.We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes theorem is$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. So, let's plug that into the equation and solve it.$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$That looks ugly, but it is actually quite simple. 
Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:```pythondef update(likelihood, prior): posterior = prior * likelihood p(z|x) * p(x) return normalize(posterior)```The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece ](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step i, what is our probable state given a measurement. That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy. Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? 
`predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation```pythonfor i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N result[i] += prob_dist[index] * kernel[k]``` Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 2.269 2.485 -0.469 1.991 2.468 1.667 1.231 4.577 2.56 0.224 -3.851 2.604 3.583 -1.623 2.348] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. 
###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test scores distributions. ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). 
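Before getting to it, we can put a number on the test score discussion above. This is a quick sketch using `scipy.stats.norm`, which was introduced earlier in the chapter; the mean of 90 and standard deviation of 13 are the values from that example, and nothing here is part of FilterPy.

```python
from scipy.stats import norm

# Gaussian model of the test scores discussed above
scores = norm(90, 13)   # mean 90, standard deviation 13

p_below_0 = scores.cdf(0)          # mass the model places below the minimum possible score
p_above_100 = 1 - scores.cdf(100)  # mass the model places above the maximum possible score

print('P(score < 0)   = {:.2e}'.format(p_below_0))
print('P(score > 100) = {:.1%}'.format(p_above_100))
```

The model puts roughly a fifth of its probability on scores above 100, which no student can receive; in the real data that mass has to show up somewhere below 100, which is consistent with the "fat tail" effect described above.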
Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like what I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with```pythonfrom filterpy.stats import rand_student_t```While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from an exponential distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from an exponential distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). the `scipy.stats` module contains the function `describe` which computes these statistics, among others. 
###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.2037202953168122, 0.894366248899801), mean=-0.18007426896541692, variance=0.43204122396032946, skewness=-0.08295117992362264, kurtosis=-0.9197042651911951) DescribeResult(nobs=300000, minmax=(-4.75206637131407, 4.178833173976851), mean=-0.00047436810878343096, variance=0.9973760542573228, skewness=-0.0038868831924059035, kurtosis=-0.005797621180390955) ###Markdown [Table of Contents](./table_of_contents.ipynb) Probabilities, Gaussians, and Bayes' Theorem ###Code from __future__ import division, print_function %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard DeviationsMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. 
The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:$$P(X{=}4) = p(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observent. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. 
Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. 
In this case the median equals the mean, but that is not generally true.
###Code
np.median(x)
###Output
_____no_output_____
###Markdown
Expected Value of a Random Variable

The [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?

It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.

Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has a 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute

$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$

Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.

We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us

$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$

A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:

$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$

If $x$ is continuous we replace the sum with an integral, like so

$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$

where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.

We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically.
###Code
total = 0
N = 1000000
for r in np.random.rand(N):
    if r <= .80:
        total += 1
    elif r < .95:
        total += 3
    else:
        total += 5

total / N
###Output
_____no_output_____
###Markdown
You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact value requires an infinite sample size.

Exercise

What is the expected value of a die roll?

Solution

Each side is equally likely, so each has a probability of 1/6. Hence

$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$

Exercise

Given the uniform continuous distribution

$$f(x) = \frac{1}{b - a}$$

compute the expected value for $a=0$ and $b=20$.

Solution

$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$

Variance of a Random Variable

The computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights:
###Code
X = [1.8, 2.0, 1.7, 1.9, 1.6]
Y = [2.2, 1.5, 2.3, 1.7, 1.3]
Z = [1.8, 1.8, 1.8, 1.8, 1.8]
###Output
_____no_output_____
###Markdown
Using NumPy we see that the mean height of each class is the same.
###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print("{:.2f} meters squared".format(np.var(X))) ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. 
In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print('mean = {:.3f}'.format(mean)) print('std = {:.3f}'.format(std)) ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! We will not to consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. 
Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the formula using squares we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print('Variance of X with outlier = {:6.2f}'.format(np.var(X))) print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1]))) ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem.I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. In this book we will always use variance and standard deviation as defined by Gauss.The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. 
It shows the relative likelihood for the random variable to take on a value. We can tell from the chart student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. 
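We can make that imperfection concrete with a couple of lines of code. This is only an illustrative sketch using `scipy.stats.norm`; it asks the speed model plotted above how much probability it assigns to a physically impossible negative speed.

```python
from scipy.stats import norm

# the speed model plotted above: mean 120 kph, standard deviation 17 kph
speed = norm(120, 17)

# probability the model assigns to a negative speed
print('P(speed < 0) = {:.2e}'.format(speed.cdf(0)))
```

The answer is vanishingly small, but it is not zero: the model insists there is some chance of a car traveling backwards at any speed you care to name.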
Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. Shorn of the constants, you can see it is a simple exponential: $$f(x)\propto e^{-x^2}$$which has the familiar bell curve shape ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now. ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? 
It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.

$$M = \iiint_R p(x,y,z)\, dV$$

We do the same with *probability density*. If you want to know the probability of the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability.

What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C are infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero.

Thinking back to the rock, what is the weight of a single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.

In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.

We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve.

How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian

$$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$

This is called the *cumulative probability distribution*, commonly abbreviated *cdf*.

I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute
###Code
from filterpy.stats import norm_cdf
print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format(
      norm_cdf((21.5, 22.5), 22, 4)*100))
print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format(
      norm_cdf((23.5, 24.5), 22, 4)*100))
###Output
Cumulative probability of range 21.5 to 22.5 is 19.74%
Cumulative probability of range 23.5 to 24.5 is 12.10%
###Markdown
The mean ($\mu$) is what it sounds like — the average of all possible values, weighted by how probable they are. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean.

The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as

$$\text{temp} \sim \mathcal{N}(22,4)$$

This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers!
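For example, the 22 $\pm$ 0.1°C range mentioned above takes a single call to `norm_cdf`. This is just a quick illustration using the same function and the same $\mathcal{N}(22, 4)$ model as the cell above.

```python
from filterpy.stats import norm_cdf

# probability that the temperature lies within 22 +/- 0.1 degrees C,
# with the reading modeled as a Gaussian with mean 22 and variance 4
print('Cumulative probability of range 21.9 to 22.1 is {:.2f}%'.format(
      norm_cdf((21.9, 22.1), 22, 4)*100))
```

Such a narrow range captures only a small slice of the total probability, but unlike the probability of a single exact value it is not zero.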
With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements for over any range.Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values. ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument`normed` to control this. ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*. ###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. 
$1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansThe discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. 
A remarkable property of Gaussian distributions is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian. Thus we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall that *function* in this context means that the values are not guaranteed to sum to one).

Before we do the math, let's test this visually.
###Code
x = np.arange(-1, 3, 0.01)
g1 = gaussian(x, mean=0.8, var=.1)
g2 = gaussian(x, mean=1.3, var=.2)
plt.plot(x, g1, x, g2)

g = g1 * g2  # element-wise multiplication
g = g / sum(g)  # normalize
plt.plot(x, g, ls='-.');
###Output
_____no_output_____
###Markdown
Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution.

Gaussians are nonlinear functions. Typically, if you multiply nonlinear functions you end up with a different type of function. For example, the shape of the product of two sines is very different from `sin(x)`.
###Code
x = np.arange(0, 4*np.pi, 0.01)
plt.plot(np.sin(1.2*x))
plt.plot(np.sin(1.2*x) * np.sin(2*x));
###Output
_____no_output_____
###Markdown
But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice.

The product of two independent Gaussians is given by:

$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$

The sum of two independent Gaussian random variables is given by

$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$

At the end of the chapter I derive these equations. However, understanding the derivation is not very important.

Putting it all Together

Now we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to use Gaussians.

In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so:
###Code
def normalize(p):
    return p / sum(p)

def update(likelihood, prior):
    return normalize(likelihood * prior)

prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2]))
likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16]))
posterior = update(likelihood, prior)
book_plots.bar_plot(posterior)
###Output
_____no_output_____
###Markdown
In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory.

But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart.
###Code xs = np.arange(0, 10, .01) def mean_var(p): x = np.arange(len(p)) mean = np.sum(p * x,dtype=float) var = np.sum((x - mean)**2 * p) return mean, var mean, var = mean_var(posterior) book_plots.bar_plot(posterior) plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r'); print('mean: %.2f' % mean, 'var: %.2f' % var) ###Output mean: 5.88 var: 1.24 ###Markdown This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.Next, recall that our filter implements the update function with```pythondef update(likelihood, prior): return normalize(likelihood * prior)```If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$which is three multiplications and two divisions. Bayes TheoremIn the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information. We implemented the `update()` function with this probability calculation:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$where $\| \cdot\|$ expresses normalizing the term.We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes theorem is$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. 
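Before we make that switch, here is the scalar form worked through once with concrete numbers. The numbers are purely illustrative (they are not taken from the Discrete Bayes chapter): suppose a door sensor fires 75% of the time when the dog is in front of a door, 20% of the time when it is not, and our prior belief of being at a door is 0.3.

```python
prior = 0.3                 # P(door) before the measurement
p_z_door = 0.75             # P(z | door)      -- illustrative number
p_z_no_door = 0.20          # P(z | no door)   -- illustrative number

evidence = p_z_door*prior + p_z_no_door*(1 - prior)   # P(z), the normalization
posterior = p_z_door * prior / evidence               # Bayes theorem
print('P(door | z) = {:.3f}'.format(posterior))       # 0.616
```

The same computation, applied to every position at once, is exactly what `update()` does with arrays.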
We use a lower case $p$ for probability distributions$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. So, let's plug that into the equation and solve it.$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:```pythondef update(likelihood, prior): posterior = prior * likelihood # p(z|x) * p(x) return normalize(posterior)```The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up) for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral-laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step $i$, what is our probable state given a measurement? That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings.
Stated like that the problems seem unsolvable.But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy. Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation```pythonfor i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N result[i] += prob_dist[index] * kernel[k]``` Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. 
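One caution worth adding here (my note, not from the original text): `scipy.stats.norm` is parameterized by the standard deviation (the `scale` argument), while FilterPy's `gaussian()` takes the variance, which is why the call above used `var=3*3`. The two calls below construct the same frozen distribution.

```python
from scipy.stats import norm

print(norm(2, 3).pdf(1.5))             # positional: mean, standard deviation
print(norm(loc=2, scale=3).pdf(1.5))   # identical, using keyword arguments
```

Either way you get back the same kind of frozen object.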
You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 5.912 -2.009 -2.718 1.266 -1.085 3.941 3.499 5.626 -0.137 1.396 4.562 2.127 8.176 1.794 1.829] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test scores distributions. 
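First, to put rough numbers on those tails (a quick check of my own, using `scipy.stats` and the 90/13 figures above):

```python
from scipy.stats import norm

scores = norm(90, 13)          # the hypothetical test score model
print('P(score > 100) = {:.3f}'.format(scores.sf(100)))   # sf(x) is 1 - cdf(x)
print('P(score < 0)   = {:.1e}'.format(scores.cdf(0)))
```

The model happily assigns probability to scores that are impossible on this test. Now for the plot itself: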
###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=30') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99.7% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean, that is, outside the range 4 to 16. It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true.
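As a rough way to quantify that difference (my own check, reusing `sense()` and `sense_t()` from above), we can count how often each simulated sensor strays more than 3 standard deviations — that is, more than 6 — from the true value of 10.

```python
normal_zs = np.array([sense() for i in range(10000)])
t_zs = np.array([sense_t() for i in range(10000)])

# fraction of readings more than 3 standard deviations from the signal
print('normal noise     : {:.4f}'.format(np.mean(np.abs(normal_zs - 10) > 6)))
print("Student's t noise: {:.4f}".format(np.mean(np.abs(t_zs - 10) > 6)))
```

The heavy-tailed simulation typically produces many times more of these extreme readings, which is exactly the kind of behavior that degrades a filter tuned for Gaussian noise.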
Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory says they should use 3 standard deviations to distinguish noise from valid measurements, in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with```pythonfrom filterpy.stats import rand_student_t```While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from a normal distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from the tails of a normal distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). The `scipy.stats` module contains the function `describe` which computes these statistics, among others. ###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.8942931152842175, 0.49750728125905835), mean=-0.10563915941786776, variance=0.4841165908890319, skewness=-1.8464582995970673, kurtosis=2.5452896197893757) DescribeResult(nobs=300000, minmax=(-4.772620736872989, 4.446895068081072), mean=-0.0006837046884366415, variance=0.9995353806594786, skewness=0.002331471754136653, kurtosis=0.007185223820032061) ###Markdown
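If you only want those two shape statistics, `scipy.stats` also exposes them directly as `skew()` and `kurtosis()` (a small aside of mine; `kurtosis()` reports excess kurtosis by default, which is 0 for a normal distribution):

```python
import scipy.stats as stats

print('skew    : {:.3f}'.format(stats.skew(zs)))
print('kurtosis: {:.3f}'.format(stats.kurtosis(zs)))
```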
Earlier I promised to derive the equations for the product and sum of two Gaussians at the end of the chapter. The derivations below are optional; you will not need to understand this material to understand the rest of the book, so long as you accept the results. Product of GaussiansYou can find this result by multiplying the equation for two Gaussians together and combining terms. The algebra gets messy. I will derive it using Bayes theorem. We can state the problem as: let the prior be $N(\bar\mu, \bar\sigma^2)$, and the measurement be $z \sim N(z, \sigma_z^2)$. What is the posterior $x$ given the measurement $z$?Write the posterior as $P(x \mid z)$. Now we can use Bayes Theorem to state$$P(x \mid z) = \frac{P(z \mid x)P(x)}{P(z)}$$$P(z)$ is a normalizing constant, so we can create a proportionality$$P(x \mid z) \propto P(z|x)P(x)$$Now we substitute in the equations for the Gaussians, which are$$P(z \mid x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]$$$$P(x) = \frac{1}{\sqrt{2\pi\bar\sigma^2}}\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]$$We can drop the leading terms, as they are constants, giving us$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]\\&\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z-x)^2+\sigma_z^2(x-\bar\mu)^2]\Big]\end{aligned}$$Now we multiply out the squared terms and group in terms of the posterior $x$.$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z^2 -2xz + x^2) + \sigma_z^2(x^2 - 2x\bar\mu+\bar\mu^2)]\Big ] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z) + (\bar\sigma^2z^2+\sigma_z^2\bar\mu^2)]\Big ]\end{aligned}$$The term in the last parentheses does not contain the posterior variable $x$, so it can be treated as a constant and discarded.$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z)}{\sigma_z^2\bar\sigma^2}\Big ]$$Divide numerator and denominator by $\bar\sigma^2+\sigma_z^2$ to get$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2-2x(\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$Proportionality allows us to create or delete constants at will, so we can factor this into$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{(x-\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})^2}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$A Gaussian is$$N(\mu,\, \sigma^2) \propto \exp\Big [-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\Big ]$$So we can see that $P(x \mid z)$ has a mean of$$\mu_\mathtt{posterior} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$and a variance of$$\sigma^2_\mathtt{posterior} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}$$I've dropped the constants, and so the result is not a normal, but proportional to one. Bayes theorem normalizes with the $P(z)$ divisor, ensuring that the result is normal.
We normalize in the update step of our filters, ensuring the filter estimate is Gaussian.$$\mathcal N_1 = \| \mathcal N_2\cdot \mathcal N_3\|$$ Sum of GaussiansThe sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$There are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities. The density function of the sum of two independent Gaussian random variables is the convolution of their density functions. They are continuous functions, so we compute the convolution with an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent we can compute this with$$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$$This is the equation for a convolution. Now we just do some math, writing $x_1$ for the variable of integration:$$\begin{aligned}p(x) &= \int\limits_{-\infty}^\infty f_z(x-x_1)f_p(x_1)\, dx_1\\&= \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_z}\exp\left[-\frac{(x - x_1 - \mu_z)^2}{2\sigma^2_z}\right]\frac{1}{\sqrt{2\pi}\sigma_p}\exp\left[-\frac{(x_1 - \mu_p)^2}{2\sigma^2_p}\right] \, dx_1\\&= \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(x_1 - \frac{\sigma_p^2(x-\mu_z) + \sigma_z^2\mu_p}{\sigma_p^2 + \sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dx_1\\&= \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right] \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(x_1 - \frac{\sigma_p^2(x-\mu_z) + \sigma_z^2\mu_p}{\sigma_p^2 + \sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dx_1\end{aligned}$$The expression inside the remaining integral is a normal distribution over $x_1$. The integral of a normal distribution is one, hence the integral evaluates to one. This gives us$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]$$This is in the form of a normal, where$$\begin{gathered}\mu_x = \mu_p + \mu_z \\\sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square\end{gathered}$$ Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. 
###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.131146572034 0.131146572034 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 1.313 2.32 7.222 1.482 -2.586 6.08 -0.536 1.988 1.712 1.512 2.502 1.878 0.834 4.719 0.326] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Fat TailsEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. The resulting distributions are called *fat tailed*. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an infinitesimal chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution. 
###Code xs = np.arange(10,100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim((0,120)) plt.ylim(0, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a [*fat tail distribution*](https://en.wikipedia.org/wiki/Fat-tailed_distribution). Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a fat tailed distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown [Table of Contents](./table_of_contents.ipynb) Вероятности, теорема Гаусса и Байеса ###Code %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown ВступлениеПоследняя глава завершилась обсуждением некоторых недостатков дискретного байесовского фильтра. Для многих задач отслеживания и фильтрации мы хотим иметь фильтр, который является *унимодальным* и *непрерывным*. То есть мы хотим смоделировать нашу систему с использованием математики с плавающей запятой (непрерывной) и представить только одно убеждение (унимодальное). Например, мы хотим сказать, что самолет находится в (12.34, -95.54, 2389.5), где это широта, долгота и высота. 
Мы не хотим, чтобы наш фильтр сообщал нам: "это может быть (1.65, -78.01, 2100.45) или это может быть (34.36, -98.23, 2543.79)". Это не соответствует нашей физической интуиции о том, как устроен мир, и, как мы уже обсуждали, вычисление мультимодального случая может быть непомерно дорогостоящим. И, конечно же, множественные оценки местоположения делают навигацию невозможной.Нам нужен унимодальный, непрерывный способ представления вероятностей, который моделирует, как работает реальный мир, и который эффективен с точки зрения вычислений. Гауссовы распределения обеспечивают все эти функции. Среднее значение, Дисперсия и стандартные отклоненияБольшинство из вас, вероятно, имели дело со статистикой, но позвольте мне в любом случае осветить этот материал. Я прошу вас прочитать материал, даже если вы уверены, что хорошо его знаете. Я спрашиваю по двум причинам. Во-первых, я хочу быть уверен, что мы используем термины одинаково. Во-вторых, я стремлюсь сформировать интуитивное понимание статистики, которое сослужит вам хорошую службу в последующих главах. Легко пройти курс статистики и запомнить только формулы и вычисления и, возможно, быть нечетким в отношении последствий того, что вы узнали. Случайные ВеличиныКаждый раз, когда вы бросаете кубик, *результат* будет составлять от 1 до 6. Если бы мы бросили честный кубик миллион раз, мы бы ожидали получить один в 1/6 случаев. Таким образом, мы говорим, что *вероятность* или *шансы* исхода 1 равны 1/6. Аналогично, если бы я спросил вас, какова вероятность того, что 1 будет результатом следующего броска, вы бы ответили 1/6.Эта комбинация значений и связанных с ними вероятностей называется [*случайной величиной*](https://en.wikipedia.org/wiki/Random_variable ). Здесь *случайный* не означает, что процесс недетерминирован, только то, что нам не хватает информации о результате. Результат броска кубика детерминирован, но нам не хватает информации для вычисления результата. Мы не знаем, что произойдет, кроме как вероятностно.Пока мы определяем термины, диапазон значений называется [*пространство выборки*](https://en.wikipedia.org/wiki/Sample_space ). Для матрицы пространство выборки равно {1, 2, 3, 4, 5, 6}. Для монеты пространство выборки равно {H, T}. *Пространство* - это математический термин, который означает множество со структурой. Пространство выборки для кубика представляет собой подмножество натуральных чисел в диапазоне от 1 до 6.Другим примером случайной величины является рост студентов в университете. Здесь пространство выборки представляет собой диапазон значений в действительных числах между двумя пределами, определенными биологией.Случайные величины, такие как броски монет и броски кубиков, являются *дискретными случайными величинами*. Это означает, что их выборочное пространство представлено либо конечным числом значений, либо счетно бесконечным числом значений, таких как натуральные числа. Высоты людей называются *непрерывными случайными величинами*, поскольку они могут принимать любое реальное значение между двумя пределами.Не путайте *измерение* случайной величины с фактическим значением. Если бы мы могли измерить рост человека только с точностью до 0,1 метра, мы бы записали только значения от 0,1, 0,2, 0,3 ... 2,7, что дало бы 27 дискретных вариантов. Тем не менее, рост человека может варьироваться в пределах любого произвольного реального значения между этими диапазонами, и поэтому рост является непрерывной случайной величиной. 
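To make the link between outcomes and probabilities concrete, here is a small simulation sketch. It is illustrative only: it assumes NumPy, and the number of rolls is an arbitrary choice.

```python
import numpy as np

# simulate a fair die; each face should come up about 1/6 ≈ 0.167 of the time
rolls = np.random.randint(1, 7, size=600_000)
for face in range(1, 7):
    print(face, np.mean(rolls == face))
```

With enough rolls every estimated frequency settles near 1/6, which is exactly what assigning the outcome a probability of 1/6 means.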
В статистике заглавные буквы используются для обозначения случайных величин, обычно из второй половины алфавита. Итак, мы могли бы сказать, что $X$ - это случайная величина, представляющая бросок кубика, или $Y$ - это рост студентов в классе поэзии первокурсников. В последующих главах для решения этих задач используется линейная алгебра, и поэтому здесь мы будем следовать соглашению об использовании нижнего регистра для векторов и верхнего регистра для матриц. К сожалению, эти соглашения вступают в противоречие, и вам придется определить, что автор использует из контекста. Я всегда использую жирные символы для векторов и матриц, что помогает различать их. Распределение вероятностей[*Распределение вероятностей*](https://en.wikipedia.org/wiki/Probability_distribution) дает вероятность того, что случайная величина примет любое значение в пространстве выборки. Например, для простого шестигранного кубика мы могли бы сказать:|Значение|Вероятность||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|Мы обозначим это распределение строчной буквой p: $p(x)$. Используя обычную нотацию функций, мы бы написали:$$P(X{=}4) = p(4) = \frac{1}{6}$$Это означает, что вероятность того, что кубик упадет на 4, равна $\frac {1} {6} $. $P (X {=} x_k) $ - это обозначение для "вероятности того, что $ X$ будет равен $x_k $". Обратите внимание на тонкую разницу в обозначениях. Заглавная буква $P$ обозначает вероятность одного события, а строчная буква $p$ - это функция распределения вероятностей. Это может ввести вас в заблуждение, если вы не будете наблюдательны. В некоторых текстах используется $ Pr $ вместо $ P $, чтобы улучшить это.Другой пример - честная монета. Он имеет пространство выборки {H, T}. Монета честная, поэтому вероятность выпадения орла (H) составляет 50%, а вероятность выпадения решки (T) - 50%. Мы записываем это как$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Пробные пространства не являются уникальными. Одно пространство для выборки для матрицы составляет {1, 2, 3, 4, 5, 6}. Другим допустимым пространством выборки было бы {четное, нечетное}. Другим может быть {точки во всех углах, а не точки во всех углах}. Пространство выборки допустимо до тех пор, пока оно охватывает все возможности, и любое отдельное событие описывается только одним элементом. {четный, 1, 3, 4, 5} не является допустимым пространством выборки для кубика, поскольку значение 4 соответствует как "четному", так и "4".Вероятности для всех значений *дискретной случайной величины* известны как *дискретное распределение вероятностей*, а вероятности для всех значений *непрерывной случайной величины* известны как *непрерывное распределение вероятностей*.Чтобы быть распределением вероятностей, вероятность каждого значения $x_i$ должна быть $x_i \ge 0$, поскольку никакая вероятность не может быть меньше нуля. Во-вторых, сумма вероятностей для всех значений должна быть равна единице. Это должно быть интуитивно понятно для подбрасывания монеты: если шансы получить орел составляют 70%, то шансы получить решку должны составлять 30%. Мы формализуем это требование следующим образом$$\sum\limits_u P(X{=}u)= 1$$для дискретных распределений $$\int\limits_u P(X{=}u) \,du= 1$$для непрерывных распределений.В предыдущей главе мы использовали распределения вероятностей для оценки положения собаки в коридоре. 
Например: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Каждая позиция имеет вероятность от 0 до 1, а сумма всех равна единице, так что это делает ее распределением вероятностей. Каждая вероятность дискретна, поэтому мы можем точнее назвать это дискретным распределением вероятностей. На практике мы опускаем термины дискретный и непрерывный, если только у нас нет особой причины проводить это различие. Среднее значение, Медиана и Режим случайной величиныУчитывая набор данных, мы часто хотим знать репрезентативное или среднее значение для этого набора. Для этого существует множество мер, и эта концепция называется [*мерой центральной тенденции*](https://en.wikipedia.org/wiki/Central_tendency ). Например, мы могли бы захотеть узнать *средний* рост учащихся в классе. Мы все знаем, как найти среднее значение набора данных, но позвольте мне пояснить суть, чтобы я мог ввести более формальные обозначения и терминологию. Другое слово для обозначения среднего - "среднее". Мы вычисляем среднее значение путем суммирования значений и деления на количество значений. Если рост учащихся в метрах равен $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$мы вычисляем среднее значение как$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$Традиционно для обозначения среднего значения используется символ $\mu$ (мю).Мы можем формализовать это вычисление с помощью уравнения$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy предоставляет `numpy.mean()` для вычисления среднего значения. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown Для удобства массивы NumPy предоставляют метод `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true. ###Code np.median(x) ###Output _____no_output_____ ###Markdown Ожидаемое значение случайной величины[*Ожидаемое значение*](https://en.wikipedia.org/wiki/Expected_value) случайной величины - это среднее значение, которое она имела бы, если бы мы взяли бесконечное число ее выборок, а затем усреднили эти выборки вместе. 
Допустим, у нас есть $ x = [1,3,5] $, и каждое значение одинаково вероятно. Какое значение мы *ожидаем* от $x$ в среднем?Конечно, это было бы среднее значение 1, 3 и 5, что равно 3. Это должно иметь смысл; мы ожидаем, что будут встречаться равные числа 1, 3 и 5, так что $(1+3+5)/3=3$ очевидно, что это среднее значение этого бесконечная серия выборок. Другими словами, здесь ожидаемое значение - это *среднее значение* выборочного пространства.Теперь предположим, что каждое значение имеет разную вероятность возникновения. Скажем, 1 имеет 80%-ную вероятность возникновения, 3 имеет 15%-ную вероятность, а 5 имеет только 5%-ную вероятность. В этом случае мы вычисляем ожидаемое значение путем умножения каждого значения $x$ на процентную вероятность его возникновения и суммирования результата. Для этого случая мы могли бы вычислить$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Здесь я ввел обозначение $\mathbb E[X]$ для ожидаемого значения $x$. В некоторых текстах используется $E(x)$. Значение 1.5 для $x$ имеет интуитивный смысл, потому что $x$ с гораздо большей вероятностью будет равно 1, чем 3 или 5, а также 3 с большей вероятностью, чем 5.Мы можем формализовать это, позволив $x_i$ быть $i ^ {th}$ значением $X$, а $p_i$ - вероятностью его возникновения. Это дает нам$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$Тривиальный кусочек алгебры показывает, что если все вероятности равны, то ожидаемое значение совпадает со средним:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$Если $x$ непрерывен, мы подставляем сумму вместо интеграла, например$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$где $f(x)$ - функция распределения вероятностей для $x$. Мы пока не будем использовать это уравнение, но будем использовать его в следующей главе.Мы можем написать немного Python, чтобы имитировать это. Здесь я беру 1 000 000 выборок и вычисляю ожидаемое значение распределения, которое мы только что вычислили аналитически. ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown Вы можете видеть, что вычисленное значение близко к аналитически полученному значению. Это не точно, потому что для получения точных значений требуется бесконечный размер выборки. УпражнениеКаково ожидаемое значение броска кубика? РешениеКаждая сторона одинаково вероятна, поэтому вероятность каждой из них равна 1/6. Следовательно$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ УпражнениеУчитывая равномерное непрерывное распределение$$f(x) = \frac{1}{b - a}$$вычислите ожидаемое значение для $a=0$ и $b=20$. Решение$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Дисперсия случайной величиныПриведенное выше вычисление показывает нам средний рост студентов, но оно не говорит нам всего, что мы, возможно, хотели бы знать. Например, предположим, что у нас есть три класса учащихся, которые мы обозначаем $X$, $Y$ и $Z$ следующими высотами: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Используя NumPy, мы видим, что средняя высота каждого класса одинакова. 
###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown Среднее значение для каждого класса составляет 1,8 метра, но обратите внимание, что во втором классе разница в высоте гораздо больше, чем в первом классе, а в третьем классе вообще нет различий.Среднее значение говорит нам кое-что о данных, но не всю историю. Мы хотим иметь возможность указать, насколько велика *разница* между ростом учащихся. Вы можете представить себе целый ряд причин для этого. Возможно, школьному округу необходимо заказать 5000 парт, и они хотят быть уверены, что покупают размеры, соответствующие диапазону роста учащихся. Статистика формализовала эту концепцию измерения вариаций в понятие [*стандартное отклонение*](https://en.wikipedia.org/wiki/Standard_deviation) и [*дисперсии*](https://en.wikipedia.org/wiki/Variance). Уравнение для вычисления дисперсии имеет вид$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Игнорируя квадрат на мгновение, вы можете видеть, что дисперсия - это *ожидаемое значение* для того, насколько пространство выборки $ X $ отличается от среднего $ \ mu: $ ($X-\ mu) $. Я объясню назначение квадратного члена позже. Формула для ожидаемого значения равна $ \mathbb E[X] = \sum \limits_{i= 1} ^ n p_ix_i $, поэтому мы можем подставить это в приведенное выше уравнение, чтобы получить$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Давайте вычислим дисперсию трех классов, чтобы увидеть, какие значения мы получаем, и ознакомиться с этой концепцией.Среднее значение $X$ равно 1,8 ($\mu_x = 1,8$), поэтому мы вычисляем$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy предоставляет функцию `var()` для вычисления дисперсии: ###Code print(f"{np.var(X):.2f} метров квадратных") ###Output 0.02 метров квадратных ###Markdown Возможно, это немного сложно интерпретировать. Высоты указаны в метрах, но разница составляет метры в квадрате. Таким образом, у нас есть более часто используемая мера, *стандартное отклонение*, которое определяется как квадратный корень из дисперсии:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$Обычно используется $\sigma$ для *стандартного отклонения* и $\sigma ^ 2 $ для *дисперсии*. В большей части этой книги я буду использовать $\sigma ^ 2$ вместо $\mathit{VAR}(X)$ для дисперсии; они символизируют одно и то же.Для первого класса мы вычисляем стандартное отклонение с помощью$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$Мы можем проверить это вычисление с помощью метода NumPy `numpy.std()`, который вычисляет стандартное отклонение. "std" - это распространенная аббревиатура стандартного отклонения. ###Code print(f"std {np.std(X):.4f}") print(f"var {np.std(X)**2:.4f}") ###Output std 0.1414 var 0.0200 ###Markdown И, конечно же, $0,1414 ^ 2 = 0,02$, что согласуется с нашим более ранним вычислением дисперсии.Что означает стандартное отклонение? Это говорит нам о том, насколько сильно различаются высоты между собой. "Сколько" - это не математический термин. Мы сможем определить его гораздо точнее, как только введем понятие гауссова в следующем разделе. А пока я скажу, что для многих вещей 68% всех значений лежат в пределах одного стандартного отклонения от среднего. 
Другими словами, мы можем сделать вывод, что для случайного класса 68% учащихся будут иметь высоту от 1,66 (1,8-0,1414) метра до 1,94 (1,8+ 0,1414) метра. Мы можем просмотреть это на графике: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown Очевидно, что только для 5 студентов мы не получим ровно 68% в пределах одного стандартного отклонения. Мы видим, что 3 из 5 студентов находятся в пределах $\pm1\sigma$, или 60%, что настолько близко, насколько вы можете приблизиться к 68% только с 5 выборками. Давайте посмотрим на результаты для класса со 100 учениками.> Мы записываем одно стандартное отклонение как $1\sigma$, что произносится как "одно стандартное отклонение", а не "одна сигма". Два стандартных отклонения составляют $2\sigma$ и так далее. ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print(f'mean = {mean:.3f}') print(f'std = {std:.3f}') ###Output _____no_output_____ ###Markdown На глаз примерно 68% высот лежат в пределах $\pm1\sigma$ от среднего значения 1,8, но мы можем проверить это с помощью кода. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown Скоро мы обсудим это более подробно. А пока давайте вычислим стандартное отклонение для$$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$Среднее значение $Y$ равно $\mu=1.8$ m, так что$$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$Мы проверим это с помощью NumPy ###Code print(f'Стандартное отклоенение для Y это {np.std(Y):.2f} метров') ###Output Стандартное отклоенение для Y это 0.39 метров ###Markdown Это соответствует тому, что мы ожидали бы. Существует больше различий в высотах для $ Y$, и стандартное отклонение больше.Наконец, давайте вычислим стандартное отклонение для $Z$. Отклонений в значениях нет, поэтому мы ожидаем, что стандартное отклонение будет равно нулю.$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Прежде чем мы продолжим, я должен отметить, что я игнорирую тот факт, что в среднем мужчины выше женщин. В целом разница в росте в классе, состоящем только из мужчин или женщин, будет меньше, чем в классе с обоими полами. Это справедливо и для других факторов. Хорошо питающиеся дети выше ростом, чем дети, страдающие от недоедания. Скандинавы выше итальянцев. При разработке экспериментов статистики должны учитывать эти факторы. Я предположил, что мы могли бы провести этот анализ, чтобы заказать столы для школьного округа. Для каждой возрастной группы, вероятно, будут два разных средних значения - одно сгруппировано вокруг среднего роста женщин, а второе среднее значение сгруппировано вокруг среднего роста мужчин. Среднее значение для всего класса будет где-то между этими двумя значениями. Если мы купим парты для среднего числа всех учеников, то, скорее всего, в итоге получим парты, которые не подходят ни для мужчин, ни для женщин в школе! Мы не будем рассматривать эти вопросы в этой книге. Почитатйте литературу о вероятности, если вам нужно изучить методы решения этих проблем. Почему квадрат различий?Почему мы принимаем *квадрат* различий за дисперсию? 
Я мог бы подробно заняться математикой, но давайте посмотрим на это простым способом. Вот диаграмма значений $X$, построенная по отношению к среднему значению для $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown Если бы мы не взяли квадрат различий, знаки бы все перечеркнули:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$Это явно неверно, так как в данных имеется отклонение более 0. Может быть, мы можем использовать абсолютное значение? При осмотре мы можем видеть, что результат равен $ 12/4 = 3 $, что, безусловно, правильно — каждое значение отличается на 3 от среднего. Но что, если у нас есть $Y=[6, -2, -3, 1]$? В этом случае мы получаем $12/4=3$. $ Y$ явно более разбросан, чем $X$, но вычисление дает ту же дисперсию. Если мы используем формулу с использованием квадратов, мы получим дисперсию 3,5 для $Y$, что отражает ее большую вариацию.Это не является доказательством правильности. Действительно, Карл Фридрих Гаусс, изобретатель этой техники, признал, что она несколько произвольна. Если есть выбросы, то возведение разницы в квадрат придает этому термину непропорциональный вес. Например, давайте посмотрим, что произойдет, если у нас есть: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print(f'Дисперсия X с выбросом = {np.var(X):6.2f}') print(f'Дисперсия X без выбросов = {np.var(X[:-1]):6.2f}') ###Output Дисперсия X с выбросом = 621.45 Дисперсия X без выбросов = 2.03 ###Markdown Является ли это "правильным"? Ты мне скажи. Без выброса 100 мы получаем $\sigma^2 = 2,03$, что точно отражает, как изменяется $X$ при отсутствии выброса. Один выброс заглушает вычисление дисперсии. Хотим ли мы затопить вычисления, чтобы мы знали, что есть выброс, или надежно включить выброс и по-прежнему предоставлять оценку, близкую к значению, отсутствующему в выбросах? Опять же, ты мне скажи. Очевидно, это зависит от вашей проблемы.Я не буду продолжать идти по этому пути; если вам интересно, вы, возможно, захотите взглянуть на работу, проделанную Джеймсом Бергером по этой проблеме, в области, называемой "Байесовская надежность", или на отличные публикации по "надежной статистике" Питера Дж. Хубера [4]. В этой книге мы всегда будем использовать дисперсию и стандартное отклонение, определенные Гауссом.Из этого следует сделать вывод, что эти *сводные* статистические данные всегда рассказывают неполную историю о наших данных. В этом примере дисперсия, определенная Гауссом, не говорит нам, что у нас есть один большой выброс. Тем не менее, это мощный инструмент, поскольку мы можем кратко описать большой набор данных с помощью нескольких чисел. Если бы у нас был 1 миллиард точек данных, мы бы не хотели просматривать графики на глаз или просматривать списки чисел; сводная статистика дает нам способ описать форму данных полезным способом. ГауссианыТеперь мы готовы узнать о [гауссианах](https://en.wikipedia.org/wiki/Gaussian_function ). Давайте напомним себе о мотивации этой главы.> Нам нужен унимодальный, непрерывный способ представления вероятностей, который моделирует, как работает реальный мир, и который эффективен с точки зрения вычислений.Давайте посмотрим на график распределения Гаусса, чтобы получить представление о том, о чем мы говорим. 
###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown Эта кривая представляет собой [*функцию плотности вероятности*](https://en.wikipedia.org/wiki/Probability_density_function) или сокращенно *ФПВ*. Он показывает относительную вероятность того, что случайная величина примет определенное значение. Из диаграммы мы можем сказать, что у студента несколько больше шансов иметь рост около 1,8 м, чем 1,7 м, и гораздо больше шансов иметь рост 1,9 м против 1,4 м. Другими словами, у многих студентов рост будет около 1,8 м, и очень немногие студенты будут иметь рост около 1,4 м или 2,2 метра. Наконец, обратите внимание, что кривая центрирована по среднему значению 1,8 м.> Я объясняю, как строить гауссианы и многое другое, в записной книжке *Computing_and_Plotting_PDFs* в Папка Supporting_Notebooks. Вы можете прочитать его онлайн [здесь](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].Это может быть узнаваемо для вас как "колоколообразная кривая". Эта кривая повсеместна, потому что в реальных условиях многие наблюдения распределяются таким образом. Я не буду использовать термин "колоколообразная кривая" для обозначения гауссовой кривой, потому что многие распределения вероятностей имеют аналогичную форму колоколообразной кривой. Нематематические источники могут быть не столь точными, поэтому будьте благоразумны в своих выводах, когда увидите, что термин используется без определения.Эта кривая не уникальна для высот — огромное количество природных явлений демонстрирует такое распределение, включая датчики, которые мы используем в задачах фильтрации. Как мы увидим, он также обладает всеми атрибутами, которые мы ищем — он представляет собой унимодальное убеждение или значение в виде вероятности, он непрерывен и эффективен с точки зрения вычислений. Вскоре мы обнаружим, что у него есть и другие желательные качества, о которых мы, возможно, не подозреваем, что желаем.Чтобы еще больше мотивировать вас, вспомните формы распределений вероятностей в главе "Дискретный Байесовский фильтр": ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown Это были не идеальные гауссовы кривые, но они были похожи. Мы будем использовать гауссианы для замены дискретных вероятностей, используемых в этой главе! НоменклатураНемного терминологии, прежде чем мы продолжим - на этой диаграмме показана *плотность вероятности случайной величины*, имеющей любое значение между ($-\infty..\infty)$. Что это значит? Представьте, что мы проводим бесконечное количество бесконечно точных измерений скорости автомобилей на участке шоссе. Затем мы могли бы построить график результатов, показав относительное количество автомобилей, проезжающих мимо с любой заданной скоростью. Если бы средняя скорость составляла 120 км/ ч, это могло бы выглядеть так: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown Ось y показывает *плотность вероятности* — относительное количество автомобилей, которые развивают скорость на соответствующей оси x. Я объясню это подробнее в следующем разделе.Гауссова модель несовершенна. Хотя эти диаграммы этого не показывают, "хвосты" распределения простираются до бесконечности. 
*Хвосты* - это дальние концы кривой, где значения являются самыми низкими. Конечно, высота человека или скорость автомобиля не могут быть меньше нуля, не говоря уже о $-\infty$ или $\infty$. “Карта - это не территория” - распространенное выражение, и оно справедливо для байесовской фильтрации и статистики. Приведенное выше распределение Гаусса моделирует распределение измеренных автомобильных скоростей, но, будучи моделью, оно обязательно несовершенно. Разница между моделью и реальностью будет всплывать снова и снова в этих фильтрах. Гауссианы используются во многих областях математики не потому, что они идеально моделируют реальность, а потому, что их проще использовать, чем любой другой относительно точный выбор. Однако даже в этой книге гауссианам не удастся смоделировать реальность, что вынудит нас использовать дорогостоящие в вычислительном отношении альтернативы. Вы услышите, что эти распределения называются *гауссовскими распределениями* или *нормальными распределениями*. *Гауссовский* и *нормальный* оба означают одно и то же в этом контексте и используются взаимозаменяемо. Я буду использовать оба термина на протяжении всей этой книги, поскольку в разных источниках будут использоваться оба термина, и я хочу, чтобы вы привыкли видеть оба. Наконец, как и в этом параграфе, обычно сокращают название и говорят о *гауссовском* или *нормальном* — оба они являются типичными сокращенными именами для *распределения Гаусса*. Гауссовы распределенияДавайте рассмотрим, как работают гауссианы. Гауссово - это *непрерывное распределение вероятностей*, которое полностью описывается двумя параметрами: средним значением ($\mu$) и дисперсией ($\sigma^2$). Он определяется как:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ - это обозначение для $e^x$.Не отговаривайтесь уравнением, если вы не видели его раньше; вам не нужно будет запоминать или манипулировать им. Вычисление этой функции хранится в `stats.py` с функцией `gaussian(x, mean, var, normed=True)`. Лишенный констант, вы можете видеть, что это простая экспоненциальная: $$f(x)\propto e^{-x^2}$$который имеет знакомую форму колоколообразной кривой ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Давайте напомним себе, как смотреть на код функции. В ячейке введите название функции, за которым следуют два вопросительных знака, и нажмите CTRL+ENTER. Это откроет всплывающее окно с отображением источника. Раскомментируйте следующую ячейку и попробуйте сделать это сейчас. ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Давайте построим график Гаусса со средним значением 22 $(\mu = 22) $ с дисперсией 4 $ (\sigma^2=4)$. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown Что означает эта кривая *означает*? Предположим, у нас есть термометр, который показывает 22 °C. Ни один термометр не является абсолютно точным, и поэтому мы ожидаем, что каждое показание будет немного отличаться от фактического значения. Однако теорема, называемая [*Центральная предельная теорема*](https://en.wikipedia.org/wiki/Central_limit_theorem) утверждает, что если мы сделаем много измерений, то измерения будут нормально распределены. 
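A two-line experiment illustrates the central limit theorem mentioned above. It is a sketch only; it assumes NumPy and Matplotlib, and the uniform distribution and sample sizes are arbitrary choices.

```python
import numpy as np
import matplotlib.pyplot as plt

# average 50 draws from a decidedly non-Gaussian (uniform) distribution,
# repeated 100,000 times; the averages pile up in a bell-shaped histogram
averages = np.random.uniform(0, 1, size=(100_000, 50)).mean(axis=1)
plt.hist(averages, bins=100, density=True);
```

Each individual draw is uniform, yet the distribution of the averages is very close to Gaussian, which is why repeated measurements tend to look normally distributed.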
Когда мы смотрим на эту диаграмму, мы видим, что она пропорциональна вероятности того, что термометр покажет определенное значение, учитывая фактическую температуру 22 ° C.Напомним, что гауссово распределение является *непрерывным*. Подумайте о бесконечно длинной прямой линии - какова вероятность того, что точка, которую вы выбираете случайным образом, равна 2. Очевидно, что 0%, поскольку существует бесконечное количество вариантов на выбор. То же самое верно и для нормальных распределений; на приведенном выше графике вероятность быть *ровно* 2° C равна 0%, потому что существует бесконечное число значений, которые могут принимать показания.Что это за кривая? Это то, что мы называем *функцией плотности вероятности*. бласть под кривой в любой области дает вам вероятность этих значений. Так, например, если вы вычислите площадь под кривой между 20 и 22, результирующая площадь будет представлять собой вероятность того, что показания температуры находятся между этими двумя температурами. Вот еще один способ понять это. Какова *плотность* камня или губки? Это мера того, сколько массы уплотняется в данном пространстве. Скалы плотные, губки менее плотные. Итак, если вы хотели узнать, сколько весит камень, но у вас не было весов, вы могли бы взять его объем и умножить на его плотность. Это дало бы вам его массу. На практике плотность варьируется в большинстве объектов, поэтому вы должны интегрировать локальную плотность по объему породы.$$M = \iiint_R p(x,y,z)\, dV$$Мы делаем то же самое с *плотностью вероятности*. Если вы хотите знать температуру от 20 ° C до 21 ° C, вам следует интегрировать приведенную выше кривую от 20 до 21. Как вы знаете, интеграл кривой дает вам площадь под кривой. Поскольку это кривая плотности вероятности, интегралом от плотности является вероятность. Какова вероятность того, что температура будет ровно 22°C? Интуитивно понятно, что 0. Это действительные числа, и вероятность 22°Cvs, скажем, 22.00000000000017°C бесконечно мала. Математически, что бы мы получили, если бы интегрировали от 22 до 22? Ноль.Возвращаясь к камню, каков вес одной точки на камне? Бесконечно малая точка не должна иметь веса. Нет смысла спрашивать вес одной точки, и нет смысла спрашивать о вероятности непрерывного распределения, имеющего одно значение. Ответ для обоих, очевидно, равен нулю.На практике наши датчики не обладают бесконечной точностью, поэтому показания 22 ° C подразумевают диапазон, например 22 $\pm$ 0,1 ° C, и мы можем вычислить вероятность этого диапазона путем интегрирования от 21,9 до 22,1.Мы можем думать об этом в байесовских терминах или частотных терминах. Как байесовский, если термометр показывает ровно 22 ° C, то наше убеждение описывается кривой - наше убеждение в том, что фактическая (системная) температура близка к 22 ° C, очень высокое, а наше убеждение в том, что фактическая температура близка к 18, очень низкое. Как специалист по частоте, мы бы сказали, что если бы мы провели 1 миллиард измерений температуры системы ровно при 22 ° C, то гистограмма измерений выглядела бы как эта кривая. Как вы вычисляете вероятность или площадь под кривой? Вы интегрируете уравнение для Гауссианы $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$Это называется *кумулятивным распределением вероятностей*, обычно сокращенным *КДВ*.Я написал `filterpy.stats.norm_cdf`, который вычисляет интеграл для вас. 
Например, мы можем вычислить ###Code from filterpy.stats import norm_cdf print('Совокупная вероятность в диапазоне от 21,5 до 22,5 составляет {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Совокупная вероятность в диапазоне от 23,5 до 24,5 составляет {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Совокупная вероятность в диапазоне от 21,5 до 22,5 составляет 19.74% Совокупная вероятность в диапазоне от 23,5 до 24,5 составляет 12.10% ###Markdown Среднее значение ($\mu$) — это то, на что оно похоже, - среднее значение всех возможных вероятностей. Из-за симметричной формы кривой это также самая высокая часть кривой. Термометр показывает 22 ° C, так что это то, что мы использовали для среднего значения. Обозначение нормального распределения для случайной величины $X$ равно $X \sim\ \mathcal{N}(\mu,\sigma^2)$, где $\sim$ означает *распределенное в соответствии с*. Это означает, что я могу выразить показания температуры нашего термометра как$$\text{temp} \sim \mathcal{N}(22,4)$$Это чрезвычайно важный результат. Гауссианы позволяют мне фиксировать бесконечное число возможных значений только с помощью двух чисел! Со значениями $ \mu = 22$ и $\sigma^2 = 4$ я могу вычислить распределение измерений в любом диапазоне.Некоторые источники используют $\mathcal N (\mu, \sigma)$ вместо $\mathcal N (\mu, \sigma^2)$. И то, и другое прекрасно, они оба являются условностями. Вам нужно иметь в виду, какая форма используется, если вы видите такой термин, как $\mathcal {N}(22,4) $. В этой книге я всегда использую $\mathcal N (\mu, \sigma ^ 2)$, так что $\sigma =2$, $\sigma ^ 2 = 4$ для этого примера. The Variance and BeliefПоскольку это распределение плотности вероятности, требуется, чтобы площадь под кривой всегда была равна единице. Это должно быть интуитивно понятно — область под кривой представляет все возможные исходы, *что-то* произошло, а вероятность того, что * что-то произойдет*, равна единице, поэтому плотность должна быть равна единице. Мы можем доказать это сами с помощью небольшого кода. (Если вы склонны к математике, интегрируйте уравнение Гаусса из $-\infty$ в $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown Это приводит к важному пониманию. Если отклонение невелико, кривая будет узкой. это связано с тем, что дисперсия является мерой того, *насколько* выборки отличаются от среднего значения. Чтобы сохранить площадь равной 1, кривая также должна быть высокой. С другой стороны, если дисперсия велика, кривая будет широкой, и, следовательно, она также должна быть короткой, чтобы площадь была равна 1.Давайте посмотрим на это графически. Мы будем использовать вышеупомянутый `filterpy.stats.gaussian`, который может принимать либо одно значение, либо массив значений. ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown По умолчанию `gaussian` нормализует выходные данные, что превращает выходные данные обратно в распределение вероятностей. Используйте аргумент `normed`, чтобы управлять этим. ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown Если гауссово распределение не нормализовано, оно называется *функцией Гаусса* вместо *распределения Гаусса*. 
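A short cross-check makes the distinction visible. It is a sketch only and assumes both FilterPy and SciPy are installed; the values 3.0 and 2.0 are simply the same example numbers used above.

```python
from filterpy.stats import gaussian
from scipy.stats import norm

vals = [3.0, 2.0]
print(gaussian(vals, mean=2.0, var=1., normed=False))  # raw pdf values
print(norm(2.0, 1.0).pdf(vals))                        # same values from scipy.stats
print(sum(gaussian(vals, mean=2.0, var=1.)))           # normalized output sums to 1
```

The unnormalized output is the Gaussian function evaluated at each point, while the normalized output has been rescaled so the values form a small discrete probability distribution.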
###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown О чем это нам говорит? Гауссов с $\sigma^2 = 0.2^2$ очень узок. Это говорит о том, что мы считаем, что $x = 23$, и что мы очень уверены в этом: в пределах $\pm 0.2$ std. Напротив, гауссиан с $\sigma^2 = 1^2$ также считает, что $x = 23$, но мы гораздо менее уверены в этом. Наше убеждение в том, что $x = 23$ ниже, и поэтому наше убеждение о вероятных возможных значениях для $x$ распространяется — мы думаем, что вполне вероятно, что $x = 20$ или $x = 26$, например. $\sigma^2 = 0.2^2$ почти полностью исключил $22$ или $24$ в качестве возможных значений, тогда как $\sigma^2 = 1 ^ 2$ считает их почти такими же вероятными, как $23$.Если мы вспомним о термометре, мы можем рассматривать эти три кривые как представляющие показания трех разных термометров. Кривая для $\sigma^2 = 0,2^2$ представляет собой очень точный термометр, а кривая для $\sigma^2 = 1^2$ представляет собой довольно неточный. Обратите внимание на очень мощное свойство, которое дает нам распределение Гаусса — мы можем полностью представить как показания, так и погрешность термометра только с помощью двух чисел — среднего и дисперсии.Эквивалентной формацией для гауссова является $\mathcal {N} (\mu,1/\tau) $, где $\mu $ - *среднее значение*, а $\tau $ - *точность*. $1/\tau = \sigma^2$; это обратная величина дисперсии. Хотя мы не используем эту формулировку в этой книге, она подчеркивает, что дисперсия является мерой точности наших данных. Небольшое отклонение дает большую точность — наши измерения очень точны. И наоборот, большая дисперсия приводит к низкой точности — наше убеждение распространяется на большую площадь. Вы должны привыкнуть думать о гауссианах в этих эквивалентных формах. В байесовских терминах гауссианы отражают наше *убеждение* в измерении, они выражают *точность* измерения и выражают, насколько велика *дисперсия* в измерениях. Все это разные способы констатации одного и того же факта.Я забегаю вперед, но в следующих главах мы будем использовать гауссианы, чтобы выразить нашу веру в такие вещи, как предполагаемое положение объекта, который мы отслеживаем, или точность датчиков, которые мы используем. Правило 68-95-99.7Сейчас стоит сказать несколько слов о стандартном отклонении. Стандартное отклонение - это показатель того, насколько данные отклоняются от среднего значения. Для гауссовских распределений 68% всех данных находятся в пределах одного стандартного отклонения ($\pm1\sigma$) от среднего значения, 95% находятся в пределах двух стандартных отклонений ($\pm2\sigma $) и 99,7% в пределах трех ($\pm3\sigma $). Это часто называют правилом [68-95-99.7](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). Если бы вам сказали, что средний балл теста в классе составил 71 при стандартном отклонении 9,4, вы могли бы сделать вывод, что 95% учащихся получили балл от 52,2 до 89,8, если распределение нормальное (то есть рассчитывается с помощью $71 \pm (2 * 9,4) $). Наконец, это не произвольные числа. Если гауссово значение для нашей позиции равно $\mu=22$ метров, то стандартное отклонение также имеет единицы измерения. Таким образом, $\sigma=0.2$ означает, что 68% измерений находятся в диапазоне от 21,8 до 22,2 метров. Дисперсия - это стандартное отклонение в квадрате, таким образом, $\sigma ^ 2 = .04$ метров $^2$. 
Как вы видели в предыдущем разделе, запись $\sigma 2 = 0.2^2$ может сделать это несколько более значимым, поскольку 0.2 находится в тех же единицах измерения, что и данные.На следующем графике показана взаимосвязь между стандартным отклонением и нормальным распределением. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Интерактивные гауссианыДля тех, кто читает это в записной книжке Jupyter, вот интерактивная версия гауссовых графиков. Используйте ползунки для изменения $\mu$ и $\sigma^2$. Настройка $\mu $ переместит график влево и вправо, потому что вы корректируете среднее значение, а настройка $\sigma^2$ сделает колоколообразную кривую толще и тоньше. ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown Наконец, если вы читаете это онлайн, вот анимация гауссова. Во-первых, среднее значение смещается вправо. Затем среднее значение центрируется на уровне $\mu = 5$ и изменяется дисперсия. Вычислительные свойства гауссианДискретный байесовский фильтр работает путем умножения и сложения произвольных распределений вероятностей. Фильтр Калмана использует гауссианы вместо произвольных распределений, но остальная часть алгоритма остается прежней. Это означает, что нам нужно будет умножить и добавить гауссово.Замечательным свойством гауссианов является то, что сумма двух независимых независимых нормальных переменных (https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables ) также нормально распределяется! Произведение не является гауссовым, но пропорционально гауссову. Там мы можем сказать, что результатом умножения двух гауссовых распределений является гауссова функция (функция отзыва в данном контексте означает, что свойство суммирования значений до единицы не гарантируется).Прежде чем мы займемся математикой, давайте проверим это визуально. ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.8, var=.1) g2 = gaussian(x, mean=1.3, var=.2) plt.plot(x, g1, x, g2) g = g1 * g2 # element-wise multiplication g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output _____no_output_____ ###Markdown Здесь я создал два гауссиана, g1 = $\mathcal N (0.8, 0.1)$ и g2 = $\mathcal N (1.3, 0.2)$ и нанес их на график. Затем я умножил их вместе и нормализовал результат. Как вы можете видеть, результат *выглядит* как распределение по Гауссу.Гауссианы - это нелинейные функции. Как правило, если вы умножаете нелинейные уравнения, вы получаете функцию другого типа. Например, форма умножения двух грехов сильно отличается от `sin(x)`. ###Code x = np.arange(0, 4*np.pi, 0.01) plt.plot(np.sin(1.2*x)) plt.plot(np.sin(1.2*x) * np.sin(2*x)); ###Output _____no_output_____ ###Markdown Но результатом умножения двух гауссовых распределений является гауссова функция. Это ключевая причина, по которой фильтры Калмана выполнимы с вычислительной точки зрения. 
Другими словами, фильтры Калмана используют гауссианы *, потому что * они удобны в вычислительном отношении.Произведение двух независимых гауссианов задается формулой:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$Сумма двух гауссин задается формулой$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$В конце главы я вывожу эти уравнения. Однако понимание происхождения не очень важно. Собирая все это воединоТеперь мы готовы поговорить о том, как гауссианы можно использовать при фильтрации. В следующей главе мы реализуем фильтр с использованием гауссова фильтра. Здесь я объясню, почему мы хотели бы использовать гауссово.В предыдущей главе мы представили распределения вероятностей с помощью массива. Мы выполнили вычисление обновления, вычислив поэлементное произведение этого распределения с другим распределением, представляющим вероятность измерения в каждой точке, примерно так: ###Code def normalize(p): return p / sum(p) def update(likelihood, prior): return normalize(likelihood * prior) prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2])) likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16])) posterior = update(likelihood, prior) book_plots.bar_plot(posterior) ###Output _____no_output_____ ###Markdown Другими словами, мы должны вычислить 10 умножений, чтобы получить этот результат. Для реального фильтра с большими массивами в нескольких измерениях нам потребовались бы миллиарды умножений и огромные объемы памяти. Но это распределение выглядит как гауссово. Что, если мы используем гауссов вместо массива? Я вычислю среднее значение и дисперсию заднего значения и сопоставлю их с гистограммой. ###Code xs = np.arange(0, 10, .01) def mean_var(p): x = np.arange(len(p)) mean = np.sum(p * x,dtype=float) var = np.sum((x - mean)**2 * p) return mean, var mean, var = mean_var(posterior) book_plots.bar_plot(posterior) plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r'); print('mean: %.2f' % mean, 'var: %.2f' % var) ###Output mean: 5.88 var: 1.24 ###Markdown Это впечатляет. Мы можем описать все распределение чисел только с помощью двух чисел. Возможно, этот пример неубедителен, учитывая, что в распределении всего 10 чисел. Но реальная проблема может содержать миллионы чисел, и все же для ее описания требуется всего два числа.Далее, напомним, что наш фильтр реализует функцию обновления с помощью```pythondef update(likelihood, prior): return normalize(likelihood * prior)```Если массивы содержат миллион элементов, то это один миллион умножений. Однако, если мы заменим массивы гауссовым, то мы выполним это вычисление с помощью$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$что составляет три умножения и два деления. Теорема БайесаВ предыдущей главе мы разработали алгоритм, рассуждая об информации, которой мы располагаем в каждый момент, которую мы выразили в виде дискретных распределений вероятностей. В процессе мы обнаружили [*Теорему Байеса*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Теорема Байеса говорит нам, как вычислить вероятность события с учетом предварительной информации.Мы реализовали функцию `update()` с помощью этого вычисления вероятности:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ Оказывается, это теорема Байеса. 
In a moment I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$where $\| \cdot\|$ expresses normalizing the term.We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*), and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes' theorem is$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday, because rain systems usually last more than one day. We would write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but with an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes' theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. So, let's plug that into the equation and solve it.$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood - the probability of the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:
```python
def update(likelihood, prior):
    posterior = prior * likelihood   # p(z|x) * p(x)
    return normalize(posterior)
```
The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum over $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives these equations in terms of integrals. After all, an integral is just a sum over a continuous function.
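Before looking at the integral form, a small numeric sketch of my own (with made-up numbers, not part of the original text) may help ground the evidence term: it is nothing more than the sum of likelihood times prior, and dividing by it is exactly the normalization step.

```python
import numpy as np

prior = np.array([0.1, 0.2, 0.4, 0.2, 0.1])       # p(x_i), sums to 1
likelihood = np.array([0.2, 0.5, 1.0, 0.5, 0.2])  # p(z | x_i), need not sum to 1

evidence = np.sum(likelihood * prior)             # p(z), a single number
posterior = likelihood * prior / evidence         # Bayes' theorem, cell by cell

print('evidence p(z) =', evidence)
print('posterior =', posterior, ' sum =', posterior.sum())
```

The posterior sums to one, exactly as a probability distribution must.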
So you will often see Bayes' theorem written as$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up) for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral equations that have no analytic solution. Do not be put off by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulty will fade away. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step i, what is our probable state given a measurement. That is an extraordinarily difficult problem in general. Bayes' theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.But Bayes' theorem lets us compute this by using the inverse $p(Z \mid x_i)$, which is often straightforward to compute$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$That is, to compute how likely it is to rain given a specific sensor reading, we only have to compute the likelihood of the sensor reading given that it is raining! That is a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable.Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading was, given that Simon was at position `x`. A hard problem becomes easy. Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$, $P(X_j^{t-1})$, multiplied by the probability of moving from cell $x_j$ to $x_i$. That is$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$That equation is called the *total probability theorem*. Quoting from Wikipedia [6], "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would have been slim.
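Before the general code, a concrete instance of that sum may help. This is a small sketch of my own with made-up numbers: three cells, and a move of -1, 0, or +1 cells with probabilities 0.1, 0.8, 0.1.

```python
import numpy as np

prior = np.array([0.2, 0.6, 0.2])     # P(X_j) at time t-1
offsets = (-1, 0, 1)                  # possible moves
p_move = (0.1, 0.8, 0.1)              # P(move = offset)

predicted = np.zeros(3)
for i in range(3):                    # for every destination cell x_i
    for offset, pk in zip(offsets, p_move):
        # sum_j P(X_j) * P(x_i | x_j), where j = i - offset (wrapping around)
        predicted[i] += prior[(i - offset) % 3] * pk

print(predicted, predicted.sum())     # [0.24 0.52 0.24] 1.0
```

The belief spreads out, but its total mass stays at one, which is exactly what the theorem promises.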
As a reminder, here is the code that computes this equation
```python
for i in range(N):
    for k in range(kN):
        index = (i + (width-k) - offset) % N
        result[i] += prob_dist[index] * kernel[k]
```
 Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included", as the saying goes, and it provides a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 4.348 -5.495 1.46 0.521 -0.415 1.098 -0.106 3.19 4.375 6.114 0.14 2.281 3.347 3.662 -0.808] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean of 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of independent random variables will be normally distributed, regardless of how those random variables are distributed.
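To make the theorem concrete, here is a small simulation of my own (a sketch, not part of the original text): each draw below comes from a uniform distribution, which looks nothing like a bell curve, yet the sum of a dozen draws behaves very much like a Gaussian.

```python
import numpy as np

np.random.seed(3)
# 100,000 experiments, each summing 12 independent uniform(0, 1) draws
sums = np.random.uniform(0, 1, size=(100000, 12)).sum(axis=1)

print('mean %.3f   (theory: 6.000)' % sums.mean())
print('std  %.3f   (theory: 1.000)' % sums.std())
# for a Gaussian roughly 68% of samples fall within one standard deviation
print('within 1 std: %.3f' % np.mean(np.abs(sums - sums.mean()) < sums.std()))
```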
This matters to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions.However, a key part of the proof is "under certain conditions". These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we model the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a small chance of a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor "grade on a curve" you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small probability to getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly it represents real test score distributions. ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes "fat". Also, the test is probably not able to perfectly distinguish minute differences in skill among the students, so the distribution to the left of the mean is also probably a bit bunched up in places.Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions that simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution).Let's say I want to model a sensor that has some white noise in its output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1.
I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like what I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it, and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution, there are outliers that go well beyond 3 standard deviations from the mean (7 to 13).It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler unit) performs, and this is not a book about how to model physical systems. However, it does produce reasonable data to test your filter's performance in the presence of real world noise. We will be using distributions like this throughout the rest of the book in our simulations and tests.This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers of mission critical filters, such as the filters on spacecraft, need to master a great deal of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory says they should use 3 standard deviations to distinguish noise from valid measurements, in practice they had to use 5 to 6 standard deviations. This was something they determined by experiment.The code for rand_student_t is included in `filterpy.stats`. You may use it with
```python
from filterpy.stats import rand_student_t
```
While I will not describe it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from an exponential distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from an exponential distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). The `scipy.stats` module contains the function `describe`, which computes these statistics, among others.
###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-2.0844545447677825, 2.5173517045322016), mean=-0.32591600274118304, variance=2.260221886988111, skewness=0.3664404234912478, kurtosis=-0.6719721214837135) DescribeResult(nobs=300000, minmax=(-4.946035173902078, 4.542396974939784), mean=0.0015529518548840398, variance=1.0056292973403207, skewness=-0.006174152145108188, kurtosis=-0.01380398360520374) ###Markdown [Table of Contents](./table_of_contents.ipynb) Probabilities, Gaussians, and Bayes' Theorem ###Code from __future__ import division, print_function %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard DeviationsMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure.
The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:

|Value|Probability|
|-----|-----------|
|1|1/6|
|2|1/6|
|3|1/6|
|4|1/6|
|5|1/6|
|6|1/6|

We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:$$P(X{=}4) = p(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observant. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random variable* are known as the *discrete probability distribution*, and the probabilities for all values of a *continuous random variable* are known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must satisfy $p(x_i) \ge 0$, since no probability can be less than zero.
Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. 
In this case the median equals the mean, but that is not generally true. ###Code np.median(x) ###Output _____no_output_____ ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically. ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact values requires an infinite sample size. ExerciseWhat is the expected value of a die roll? SolutionEach side is equally likely, so each has a probability of 1/6. Hence$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ ExerciseGiven the uniform continuous distribution$$f(x) = \frac{1}{b - a}$$compute the expected value for $a=0$ and $b=20$. Solution$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. 
###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print("{:.2f} meters squared".format(np.var(X))) ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. 
In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print('mean = {:.3f}'.format(mean)) print('std = {:.3f}'.format(std)) ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! We will not to consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. 
Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we also get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same value, hiding that difference. If we use the formula with squares we get a standard deviation of 3.5 for $Y$ versus 3 for $X$, which reflects $Y$'s larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print('Variance of X with outlier = {:6.2f}'.format(np.var(X))) print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1]))) ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem.I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [4]. In this book we will always use variance and standard deviation as defined by Gauss.The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short.
It shows the relative likelihood for the random variable to take on a value. We can tell from the chart student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. 
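To put a number on that imperfection, here is a quick sketch of my own (not from the original text) using the $\mathcal{N}(120, 17^2)$ speed model plotted above: the model assigns a tiny, but nonzero, probability to speeds that are physically impossible.

```python
from scipy.stats import norm

speed = norm(120, 17)   # the N(120, 17^2) model of highway speeds from above
print('P(speed < 0 kph)  = %.1e' % speed.cdf(0))    # impossible, yet nonzero
print('P(speed < 60 kph) = %.5f' % speed.cdf(60))
```

The mass below zero is vanishingly small, which is why the model remains useful, but it is not zero, and real data with fatter tails can make this mismatch matter.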
Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. Shorn of the constants, you can see it is a simple exponential: $$f(x)\propto e^{-x^2}$$which has the familiar bell curve shape ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now. ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? 
It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.$$M = \iiint_R p(x,y,z)\, dV$$We do the same with *probability density*. If you want to know the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. Thinking back to the rock, what is the weight of an single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$This is called the *cumulative probability distribution*, commonly abbreviated *cdf*.I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code from filterpy.stats import norm_cdf print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Cumulative probability of range 21.5 to 22.5 is 19.74% Cumulative probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. 
Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or an array of values. ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument `normed` to control this. ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*. ###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and the curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is.
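The idea of precision can be made concrete with `norm_cdf`, used the same way as above. This is a quick sketch of my own (not from the original text) showing how much probability each of the three plotted curves places within 0.5 of the mean:

```python
from filterpy.stats import norm_cdf

# probability mass within +/- 0.5 of the mean 23 for the three curves above
for var in (0.2**2, 0.5**2, 1.**2):
    p = norm_cdf((22.5, 23.5), mu=23, var=var)
    print('var = %.2f: P(22.5 < x < 23.5) = %.3f' % (var, p))
```

The narrow curve concentrates nearly all of its mass close to the mean, while the wide one spreads it out.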
A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansThe discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. A remarkable property of Gaussian distributions is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian. 
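The claim about the sum is easy to check empirically. Here is a short sketch of my own (not part of the original text) that draws samples from two independent Gaussians and looks at the statistics of their sum:

```python
import numpy as np

np.random.seed(7)
a = np.random.normal(1, 2, 100000)   # samples from N(1, 2^2)
b = np.random.normal(5, 3, 100000)   # samples from N(5, 3^2)
s = a + b                            # their sum, element by element

print('mean %.2f  (expect 1 + 5 = 6)' % s.mean())
print('var  %.2f  (expect 4 + 9 = 13)' % s.var())
```

The means add and the variances add, and a histogram of `s` would again be bell shaped.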
For the product, we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall that *function* in this context means the values are not guaranteed to sum to one).Before we do the math, let's test this visually. ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.8, var=.1) g2 = gaussian(x, mean=1.3, var=.2) plt.plot(x, g1, x, g2) g = g1 * g2 # element-wise multiplication g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output ###Markdown Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution.Gaussians are nonlinear functions. Typically, if you multiply nonlinear functions you end up with a different type of function. For example, the shape of the product of two sines is very different from `sin(x)`. ###Code x = np.arange(0, 4*np.pi, 0.01) plt.plot(np.sin(1.2*x)) plt.plot(np.sin(1.2*x) * np.sin(2*x)); ###Output ###Markdown But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$At the end of the chapter I derive these equations. However, understanding the derivation is not very important. Putting it all TogetherNow we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to use Gaussians.In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so: ###Code def normalize(p): return p / sum(p) def update(likelihood, prior): return normalize(likelihood * prior) prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2])) likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16])) posterior = update(likelihood, prior) book_plots.bar_plot(posterior) ###Output ###Markdown
Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.Next, recall that our filter implements the update function with```pythondef update(likelihood, prior): return normalize(likelihood * prior)```If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$which is three multiplications and two divisions. Bayes TheoremIn the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information. We implemented the `update()` function with this probability calculation:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$where $\| \cdot\|$ expresses normalizing the term.We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes theorem is$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. 
So, let's plug that into the equation and solve it.$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:```pythondef update(likelihood, prior): posterior = prior * likelihood p(z|x) * p(x) return normalize(posterior)```The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece ](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step i, what is our probable state given a measurement. That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy. 
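To connect this back to Gaussians, here is a minimal sketch of the same multiply-and-normalize update carried out with the product equations given earlier. This is my own illustration rather than the filter we will build in the next chapter; note that the evidence term never has to be computed explicitly, because the posterior's mean and variance do not depend on it.

```python
def gaussian_multiply(mean1, var1, mean2, var2):
    # product of two Gaussians, using the equations given earlier;
    # returns the mean and variance of the resulting (normalized) Gaussian
    mean = (var1*mean2 + var2*mean1) / (var1 + var2)
    var = (var1 * var2) / (var1 + var2)
    return mean, var

prior = (10., 0.2**2)        # belief before the measurement: N(10, 0.2**2)
measurement = (11., 0.1**2)  # likelihood of the measurement: N(11, 0.1**2)

posterior = gaussian_multiply(*prior, *measurement)
print(posterior)  # mean 10.8, var 0.008 - pulled toward the more certain measurement
```

That is the "three multiplications and two divisions" version of `update()`; no array of points and no explicit normalization loop are needed.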
Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation```pythonfor i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N result[i] += prob_dist[index] * kernel[k]``` Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. 
###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [-0.08 2.024 1.4 3.024 5.799 0.989 2.083 0.978 7.542 -2.22 4.984 0.626 4.387 3.676 -0.12 ] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test scores distributions. ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. 
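To put a number on how badly the normal model leaks outside the valid range, we can integrate its tails. This is a small check I am adding for illustration, using the test-score example above (mean 90, standard deviation 13) and `scipy.stats`:

```python
from scipy.stats import norm

scores = norm(90, 13)   # the N(90, 13**2) model for test scores used above

# sf() is the survival function, 1 - cdf()
print('P(score > 100) = {:.3f}'.format(scores.sf(100)))
print('P(score < 0)   = {:.2e}'.format(scores.cdf(0)))
```

Roughly 22% of the model's probability mass sits above the highest achievable score, which is why the real distribution bunches up and grows a "fat" tail just below 100, while essentially no mass lies below zero.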
It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like what I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with```pythonfrom filterpy.stats import rand_student_t```While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from an exponential distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. 
The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from an exponential distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). the `scipy.stats` module contains the function `describe` which computes these statistics, among others. ###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.8106190910322406, 1.7202801709655346), mean=0.03998695860303425, variance=1.2099810612140205, skewness=0.054824114606583485, kurtosis=-0.8322079773586668) DescribeResult(nobs=300000, minmax=(-5.136201903633123, 4.498934900223554), mean=0.0016752908705450242, variance=1.0019122279656631, skewness=0.002460339180965745, kurtosis=-0.0022807108788165387) ###Markdown [Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb) Gaussian Probabilities ###Code #format the book %matplotlib inline from __future__ import division, print_function from book_format import load_style load_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. As you might guess from the chapter name, Gaussian distributions provide all of these features. Mean, Variance, and Standard Deviations Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get 1 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). *Random* does not mean the process is nondeterministic, only that we lack information. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining things, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. 
The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. In later chapters we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|Some sources call this the *probability function*. Using ordinary function notation, we would write:$$P(X{=}4) = f(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$. Some texts use $Pr$ or $Prob$ instead of $P$.Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int P(X{=}u) \,du= 1$$for continuous distributions. 
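As a quick numerical illustration of these two requirements — this is my addition, and assumes NumPy and SciPy are available — we can check that the fair-die probabilities sum to one and that a Gaussian density integrates to one:

```python
import numpy as np
from scipy.stats import norm

# discrete case: the probabilities of a fair die sum to one
die_probs = np.array([1/6] * 6)
print(die_probs.sum())                      # 1.0 (up to floating point rounding)

# continuous case: a crude Riemann sum of a normal pdf over a wide range
xs = np.arange(-50, 50, 0.001)
print((norm(0, 3).pdf(xs) * 0.001).sum())   # very close to 1.0
```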
The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we will want to know the *average* height of the students. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters are $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code import numpy as np x = [1.85, 2.0, 1.7, 1.9, 1.6] print(np.mean(x)) ###Output 1.81 ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency then the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.NumPy provides `numpy.median()` to compute the median. As you can see the median of {1.85, 2.0, 1.7, 1.9, 1.6} is 1.85, because 1.85 is the third element of this set after being sorted. ###Code print(np.median(x)) ###Output 1.85 ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has a 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$.
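We can check this arithmetic with NumPy. `np.average` accepts a `weights` argument, and when the weights are probabilities that sum to one the weighted average is exactly this expected value (a quick illustration of my own):

```python
import numpy as np

values = [1, 3, 5]
probabilities = [0.8, 0.15, 0.05]

# np.average computes sum(w*x) / sum(w); since the probabilities sum to 1
# this is the expected value E[X]
print(np.average(values, weights=probabilities))   # ≈ 1.5
```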
The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \sum_{i=1}^n \frac{1}{n}x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter. Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X)) print(np.mean(Y)) print(np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = E[(X - \mu)^2]$$Ignoring the squared terms for a moment, you can see that the variance is the *expected value* for how much the sample space ($X$) varies from the mean (squared, of course). We have the formula for the expected value $E[X] = \sum\limits_{i=1}^n p_ix_i$, and we will assume that any height is equally probable, so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(np.var(X), "meters squared") ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. 
In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from book_format import set_figsize, figsize from gaussian_internal import plot_height_std import matplotlib.pyplot as plt with figsize(y=2): plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] with figsize(y=3.): plot_height_std(data, lw=2) print('mean = {:.3f}'.format(np.mean(data))) print('std = {:.3f}'.format(np.std(data))) ###Output _____no_output_____ ###Markdown We can see by eye that roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8.We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.4f} m'.format(np.std(Y))) ###Output std of Y is 0.3899 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. 
In general the height variance of a class that contains only men or women will be smaller than that of a class with both sexes. This is true for other factors as well. Well-nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school! It's too early to understand why, but we will not normally be faced with these problems in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code with figsize(y=2.5): X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom='off') ###Output ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we also get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same value. If we use the correct formula we get a variance of 12.25 for $Y$ versus 9 for $X$, which reflects $Y$'s larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have $X = [1,-1,1,-2,3,2,100]$. ###Code X = [1, -1, 1, -2, 3, 2, 100] print('Variance of X = {:.2f}'.format(np.var(X))) ###Output Variance of X = 1210.69 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the computation. I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about.
###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. In the chart above, a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.1 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. In fact, this is the curve for the student heights given earlier. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter. They were not perfect Gaussian curves, but they were similar, as in the plot below. We will be using Gaussians to replace the discrete probabilities used in that chapter! ###Code import book_plots belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] with figsize(y=1.5): book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code with figsize(y=3.): plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)') ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis.You may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $-\infty$. This is true, but this is a common limitation of mathematical modeling. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. 
Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternative. You will see these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.```python%load -s gaussian stats.pydef gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. """ return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) / math.sqrt(2*math.pi*var))```We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ###Code from filterpy.stats import gaussian, norm_cdf plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can "sort of" think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. We can think of this in Bayesian terms or frequentist terms. 
As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22 is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code print('Probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of range 21.5 to 22.5 is 19.74% Probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements for over any range.> Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically: ###Code import numpy as np import matplotlib.pyplot as plt xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.05), label='$\sigma^2$=0.05', c='b') plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':', c='b') plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--', c='b') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.05$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. 
In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. Our believe that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.05$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=5$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.05$ represents a very accurate thermometer, and curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from gaussian_internal import display_stddev_plot with figsize(y=3): display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. 
###Code import math from ipywidgets import interact, interactive, fixed set_figsize(y=3) def plt_g(mu,variance): xs = np.arange(2, 8, 0.1) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim((0, 1)) interact(plt_g, mu=(0., 10), variance = (.2, 1.)); ###Output ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansA remarkable property of Gaussians is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian.The discrete Bayes filter works by multiplying and adding probabilities. I'm getting ahead of myself, but the Kalman filter uses Gaussians instead of arbitrary probability distributions; the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. The Gaussian is a nonlinear function, and typically if you multiply a nonlinear equation with itself you end up with a different equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the result of multiplying two Gaussians is proportional to yet another Gaussian. This is a fundamental property, and a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The remainder of this section is optional. I will derive the equations for the sum and product of two Gaussians. You will not need to understand this material to understand the rest of the book, so long as you accept the results. Product of GaussiansThe product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$You can find this result by multiplying the equation for two Gaussians together and combining terms. The algebra gets messy. I will derive it using Bayes theorem. We can state the problem as: let the prior be $N(\bar\mu, \bar\sigma^2)$, and the measurement $z$ have the likelihood $N(z, \sigma_z^2)$. What is the posterior $x$ given the measurement $z$?Write the posterior as $P(x \mid z)$.
Now we can use Bayes Theorem to state

$$P(x \mid z) = \frac{P(z \mid x)P(x)}{P(z)}$$

$P(z)$ is a normalizing constant, so we can create a proportionality

$$P(x \mid z) \propto P(z \mid x)P(x)$$

Now we substitute in the equations for the Gaussians, which are

$$P(z \mid x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]$$

$$P(x) = \frac{1}{\sqrt{2\pi\bar\sigma^2}}\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]$$

We can drop the leading terms, as they are constants, giving us

$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]\\
&\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big] \\
&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z-x)^2+\sigma_z^2(x-\bar\mu)^2]\Big]\end{aligned}$$

Now we multiply out the squared terms and group in terms of the posterior $x$.

$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z^2 -2xz + x^2) + \sigma_z^2(x^2 - 2x\bar\mu+\bar\mu^2)]\Big ] \\
&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z) + (\bar\sigma^2z^2+\sigma_z^2\bar\mu^2)]\Big ]\end{aligned}$$

The last parenthesized term does not contain the posterior variable $x$, so it can be treated as a constant and discarded.

$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z)}{\sigma_z^2\bar\sigma^2}\Big ]$$

Divide numerator and denominator by $\bar\sigma^2+\sigma_z^2$ to get

$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2-2x(\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$

Proportionality lets us create or delete constants at will, so we can factor this into

$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{(x-\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})^2}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$

A Gaussian is

$$N(\mu,\, \sigma^2) \propto \exp\Big [-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\Big ]$$

So we can see that $P(x \mid z)$ has a mean of

$$\mu_\mathtt{posterior} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$

and a variance of

$$\sigma^2_\mathtt{posterior} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}$$

I've dropped the constants, and so the result is not a normal, but proportional to one. Bayes theorem normalizes with the $P(z)$ divisor, ensuring that the result is normal. We normalize in the update step of our filters, ensuring the filter estimate is Gaussian.

$$\mathcal N_1 = \| \mathcal N_2\cdot \mathcal N_3\|$$

Sum of Gaussians

The sum of two independent Gaussians is given by

$$\begin{gathered}\mu = \mu_1 + \mu_2 \\
\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$

There are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities. To find the density function of the sum of two Gaussian random variables we convolve their individual density functions. They are continuous functions, so we compute the convolution with an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent we can compute this with

$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$

This is the equation for a convolution.
Now we just do some math:

$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$

$= \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_p}\exp\left[-\frac{(x - z - \mu_p)^2}{2\sigma^2_p}\right] \frac{1}{\sqrt{2\pi}\sigma_z}\exp\left[-\frac{(z - \mu_z)^2}{2\sigma^2_z}\right] \, dz$

$= \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right] \frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(z - \frac{\sigma_z^2(x-\mu_p) + \sigma_p^2\mu_z}{\sigma_p^2 + \sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dz$

$= \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right] \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(z - \frac{\sigma_z^2(x-\mu_p) + \sigma_p^2\mu_z}{\sigma_p^2 + \sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dz$

The expression inside the integral is a normal distribution in $z$. A normal distribution integrates to one, hence the integral is one. This gives us

$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]$$

This is in the form of a normal, where

$$\begin{gathered}\mu_x = \mu_p + \mu_z \\
\sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square\end{gathered}$$

Computing Probabilities with scipy.stats

In this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.

The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the `norm` variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy.
###Code
from scipy.stats import norm
import filterpy.stats
print(norm(2, 3).pdf(1.5))
print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3))
###Output
0.131146572034
0.131146572034
###Markdown
The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so:
###Code
n23 = norm(2, 3)
print('pdf of 1.5 is %.4f' % n23.pdf(1.5))
print('pdf of 2.5 is also %.4f' % n23.pdf(2.5))
print('pdf of 2 is %.4f' % n23.pdf(2))
###Output
pdf of 1.5 is 0.1311
pdf of 2.5 is also 0.1311
pdf of 2 is 0.1330
###Markdown
The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function.
###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 2.319 -2.683 4.506 5.454 6.858 3.501 2.62 1.606 5.649 -3.655 -4.25 2.354 -2.9 1.926 6.512] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Fat TailsEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. The resulting distributions are called *fat tailed*. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an infinitesimal chance of getting a score of $-10^{300}$ or $10^{32986}$. The *tails* of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution. ###Code xs = np.arange(10,100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim((0,120)) plt.ylim(0, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish incredibly minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a [*fat tail distribution*](https://en.wikipedia.org/wiki/Fat-tailed_distribution). Kalman filters use sensors to measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. 
It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a fat tailed distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown [Table of Contents](./table_of_contents.ipynb) 概率、高斯和贝叶斯定理 ###Code from __future__ import division, print_function %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown 简介上一章讨论了离散贝叶斯滤波器的一些缺点。对于许多跟踪和过滤问题,我们希望有一支持个*单模态(unimodal)*和*连续(continuous)*的滤波器。也就是说,我们希望使用浮点数学(continuous)对系统建模,并且只给出一个确认的结果(unimodal)。例如,我们说一架飞机在(12.34,-95.54,2389.5),这里分别表示纬度、经度和高度。我们不希望滤波器告诉我们“它可能在(1.65,-78.01,2100.45)或(34.36,-98.23,2543.79)”。这与我们对世界如何运作的物理直觉不符,正如我们所讨论的,计算多模态情况的代价是非常巨大的。当然,多个位置估计使得导航无法应用。我们希望用一种单模态、连续的方式来模拟现实世界各种问题的概率,并且这种方法计算起来十分高效。高斯分布提供了这些特征。 均值、方差和标准差你们中的大多数人可能都会接触过数据统计,但无论如何,请允许我介绍这些基础内容。我要求你阅读这些内容,即使你认为你已经很清楚。我这样要求是有两个原因。首先,我想确保我们使用术语的表达是相同的。第二,我努力形成对统计的直观理解,这将在后面的章节中为您提供很好的帮助。学习统计课程很容易,只记住公式和计算,但可能对所学内容的含意很模糊。 随机变量我们每次在掷骰子时,得到的结果都在1-6之中。如果我们掷骰子掷一百万次,其中1/6次会得到1。因此,我们说结果为1的*概率(probability)*或*几率(odds)*是1/6。同样地,如果我问你下一次掷骰子得到1的结果可能性,你会回答1/6。这种值和相关概率的组合称为[*随机变量(random variable)*](https://en.wikipedia.org/wiki/random_variable)。这里*随机(random)*并不意味着过程是不确定的,只是我们缺少关于结果的信息。掷骰子的结果是确定的,但我们缺乏足够的信息来计算结果。除了可能性,我们不知道会发生什么。当我们定义术语时,值的范围称为[*样本空间(sample space)*](https://en.wikipedia.org/wiki/sample_space)。对于骰子,它的样本空间为1、2、3、4、5、6。对于硬币来说,样本空间是{正面,反面}。*空间(space)*是一个数学术语,意思是一个集合。骰子的样本空间是1到6范围内自然数的子集。另一个随机变量的例子是大学学生的身高。这里的样本空间是一个实数范围,介于生物学定义的两个极限值之间。抛硬币和掷骰子等随机变量是*离散随机变量(discrete random variables)*。这意味着它们的样本空间由有限个值或可数的无限个值(如自然数)组成。人类的身高则是*连续随机变量(continuous random 
variables)*,因为它们可以在两个极限值之间取任何实际值。不要将随机变量的*测量值(measurement)*与实际值混淆。如果我们测量一个人的身高的精度只能到0.1米,我们只能测量得到0.1米、0.2米、0.3米、2.7米的数值,从而产生27个离散的值。尽管如此,一个人身高的实际值可能是这两个范围内的任意值,所以身高是一个连续的随机变量。在统计学中,大写字母用于表示随机变量,通常用字母表的后半部分的字母。所以,我们可以说$X$是代表掷骰子的随机变量,或者$Y$是诗歌班新生的身高。后面的章节会使用线性代数来解决这些问题,因此我们将遵循向量使用小写,矩阵使用大写的惯例。所以会有一些冲突,您必须从上下文中确定使用的是哪一个意思。我总是用粗体符号来表示向量和矩阵,这有助于区分两者。 概率分布[*概率分布*](https://en.wikipedia.org/wiki/probability_distribution)给出了随机变量在样本空间中获取某一值的概率。例如,对于一个公平的六面骰子,我们可以说:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|我们用小写的p:$p(x)$表示这个分布。使用一般函数表示法,骰子掷到4的概率为:$$P(X{=}4) = p(4) = \frac{1}{6}$$这表明,骰子在4朝上着陆的概率是$\frac 1 6$。$P(x=x_k)$是“随机变量X$中x_k$的概率”的符号表示。请注意细微的符号差异。大写$P$表示单个事件的概率,小写$p$表示概率分布函数。如果你不注意的话,这会把你引入歧途。有些文献中使用$pr$而不是$p$来区分概率分布。另一个例子是抛硬币。它的样本空间是{H, T}。硬币是均匀的,所以头部(H)朝上的概率是50%,图案(T)朝上的概率是50%。我们把这个写成$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$样本空间不是唯一的。骰子的一个样本空间是1、2、3、4、5、6。另一个有效的样本空间是{偶数,奇数}。另一种可能是{所有角落的点,所有不在角落的点}。只要样本空间包含所有可能的情况,并且任何单个事件仅由样本空间中的一个元素描述,那么样本空间就是有效的。{偶数,1,3,4,5}就不是骰子的有效样本空间,因为结果若是4,则同时与样本空间中的“偶数”和“4”匹配。*离散随机值*的所有值的概率被称为*离散概率分布*,并且*连续随机值*的所有值的概率被称为*连续概率分布*。要成为概率分布,每个值的概率$x_i$必须是$x_i\ge 0$,因为任何概率都不能小于零。其次,所有值的概率之和必须等于1。这对于掷硬币来说应该是显而易见的:如果获得头部的几率是70%,那么获得尾部的几率必须是30%。我们将此要求公式化,对于离散的分布,需满足:$$\sum\limits_u P(X{=}u)= 1$$对于连续的分布需满足:$$\int\limits_u P(X{=}u) \,du= 1$$在前一章中,我们使用概率分布来估计狗在走廊中的位置。例如: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown 每个位置的概率在0到1之间,并且所有位置的概率和等于1,所以这是一个概率分布。而每个概率都是离散的值,所以我们可以更精确地称之为离散概率分布。在实践中,除非我们有特殊情况需要来区分它们,通常会省略离散或连续。 随机变量的均值,中位数,众数对于给定一组数据,我们通常希望知道这组数据的具有代表性的值或平均值。有很多方法可以解决这个问题,这个概念被称为[*中央趋势的测量方法(measure of central tendency)*](https://en.wikipedia.org/wiki/central_trends)。例如,我们可能想知道一个班上学生的平均身高。我们都知道如何找到一组数据的平均值,但是让我详细介绍一下这一点,这样我可以引入更正式的符号和术语。“平均(average)”的同义词是“*平均(mean)*”。我们通过求和值并除以值的个数来计算平均值。如果学生的身高以米为单位$$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$我们计算均值为$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$通常上使用符号$\mu$(mu)来表示平均值。我们可以用方程把这个计算过程形式化。$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy 提供了`numpy.mean()`方法来计算均值。 ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown 为了方便起见,numpy数组提供了方法`mean()`。 ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown 一组数字的*众数(mode)*是指其中最常出现的数字。如果只有一个数字最常出现,我们就说它是一个*单模态(unimodal)*集合,如果有两个或两个以上的数字出现的频率一样多则这个集合是*多模态(multimodal)*。例如,集合1、2、2、3、4、4、4具有众数2和4,这是多模的,集合5、7、7、13具有众数7,因此是单模的。在这本书中,我们不会以这种方式计算众数,但我们确实在广义的角度上使用单模和多模的概念。例如,在**离散贝叶斯**一章中,我们讨论了我们对狗的位置的确定性是*多模态分布(multimodal distribution)*,因为我们为不同的位置分配了不同的概率。最后,一组数字的*中位数(median)*是集合的中点,这样一半的值低于中位数,一半的值高于中位数。这里,高于和低于与集合的排列顺序有关。如果集合中元素的个数是偶数,则中位数是两个中间数的平均值。Numpy提供了`numpy.median()`来计算中位数。如您所见,1.8、2.0、1.7、1.9、1.6的中位数为1.8,因为1.8是排序后该集合的第三个元素。在这例子中,中位数等于平均值,但这通常情况下并不会这样。 ###Code np.median(x) ###Output _____no_output_____ ###Markdown 随机变量的期望值一个随机变量的[*期望值(expected value)*](https://en.wikipedia.org/wiki/expected_value)是它的平均值,如果我们取无限多的样本,然后将这些样本平均在一起。假设我们有$X=[1,3,5]$并且每个值出现的可能性都相同。我们*期望(expect)*X$的平均值是多少?当然,这是1、3和5的平均值,也就是3。这是有依据的;我们希望1、3和5的数目相等,所以$(1+3+5)/3=3$显然是无限系列样本的平均值。换句话说,这里的期望值就是样本空间的*平均值*。现在假设每个值都发生的概率都不同。假设1有80%的几率发生,3有15%的几率发生,5只有5%的几率发生。在这种情况下,我们通过将$x$的每个值乘以它发生的概率百分比,并求和结果来计算预期值。对于这种情况,我们计算得出$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$这里我使用了符号$\mathbb 
e[x]$来表示$x$的预期值。有些文献中会使用$E(X)$。$x$期望值是1.5是可以理解的,因为$x$相较于3或5更有可能是1,3也比5更有可能。我们来形式化表述下,令$x_i$表示$X$中第$i$个元素的值,$p_i$表示它发生的概率。于是我们可以得到公式:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$如果每个值的概率都相等,则期望值与平均值相同:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$如果$x$是连续值,我们用积分来代替求和,如下:$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$其中$f(x)$是$x$的概率分布函数。我们还没有介绍这个方程,但是我们将在下一章中介绍它。我们可以用Python编写一些代码来模拟。在这里,我取了1000000个样本,并计算了我们刚刚分析计算过的分布的期望值。 ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown 您可以看到计算的值十分接近分析得出的值。它不够精确,那是因为想要获取精确值需要有无限大小的样本。 练习扔骰子的期望是多少? 答案每一面出现的概率都是相等的,都是1/6。所以:$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ 练习给定一个连续分布$$f(x) = \frac{1}{b - a}$$计算$a=0$和$b=20$时的期望 答案$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ 随机变量的方差前面的计算可以告诉我们学生的平均身高,但它并没有告诉我们我们可能想知道的一切。例如,假设我们有三个班的学生,我们用$X$、$Y$和$Z$表示每个班学生的身高: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown 使用NumPy,我们可以看到每个班的平均高度是相同的。 ###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown 每个班学生身高的平均值都是1.8米,但请注意,第二个班的身高的变化量比第一个班大得多,第三个班的身高则完全没有变化。平均值告诉我们了一些关于数据的事情,但不是所有。我们希望能够知道每个学生身高之间有多少*差异(variation)*。你可以想象这其中的一些原因。也许一个学校需要订购5000张课桌,而且他们想确保他们购买的课桌尺寸能够满足学生的身高范围。统计学已经将测量值差异的概念定义为[*标准差(standard deviation)*](https://en.wikipedia.org/wiki/standard_deviation)和[*方差(variance)*](https://en.wikipedia.org/wiki/variation)。计算方差的公式是$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$暂时忽略这个平方,您可以看到方差是样本空间$X$与平均值$\mu:$ ($X-\mu)$的*预望*。稍后我将解释平方项的用途。预期值的公式是 $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ 这样我们就可以将其代入上面的公式中$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$让我们计算这三个班身高的方差,看看我们得到了什么值,并熟悉这个概念。$X$ 的均值是1.8, ($\mu_x = 1.8$) ,所以$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy提供了函数`var()`来计算方差 ###Code print("{:.2f} meters squared".format(np.var(X))) ###Output 0.02 meters squared ###Markdown 上面的结果可能有点难以解释,为什么高度以米为单位,但方差却是平方米。因此,我们需要一个更通用的度量,即*标准差(standard deviation)*,它被定义为方差的平方根:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$通常使用$ \sigma $表示*标准偏差*,用$ \sigma^2 $表示* 方差 *。 在本书的大部分内容中,我将使用$ \sigma^2 $而不是$ \mathit{VAR}(X)$来表示方差,但是他们的含义是一样的。我们计算第一个班的标准差为$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$我们可以用NumPy中的`numpy.std()`函数来验证这个计算,它用来计算标准偏差。 'std'是标准偏差的通用缩写。 ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown 当然,$ 0.1414 2 = 0.02 $,这与我们之前对方差的计算一致。标准偏差表示什么含义呢?它告诉了我们样本之间的高度的差异有多少。“多少”不是一个数学术语。一旦我们在下一节中介绍高斯的概念,我们就能够更精确地定义它。现在我会说,对于很多事情,68%的值都在其平均值正负一个标准差内范围内。 换句话说,我们可以得出结论,对于随机选择的一个班,68%的学生的身高在1.66(1.8-0.1414)米和1.94(1.8 + 0.1414)米之间。我们可以画个图观察一下: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown 对于只有5名学生的样本,我们显然无法在一个标准差内得到68%的值。但我们确实可以发现5个学生中有3个在$ \pm1\sigma 
$范围内或60%的样本是在的,对于5个样本,已经很接近68%了。让我们来看看有100名学生的课程结果。>我们将一个标准差写为$ 1\sigma $,叫做“一个标准偏差”,而不是“一个西格玛”。两个标准偏差是$ 2\sigma $,依此类推。 ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print('mean = {:.3f}'.format(mean)) print('std = {:.3f}'.format(std)) ###Output _____no_output_____ ###Markdown 通过大约68%的学生的身高位于平均值1.8的$ \pm1\sigma $内,但我们可以通过代码验证这一点。 ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown 我们很快就会对此进行更深入的讨论。现在让我们计算完三个班学生身高的标准偏差$$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$$Y$ 的平均值$\mu=1.8$ m, 所以$$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$我们使用Numpy来验证 ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown 这符合我们的预期。$ Y $的高度变化更剧烈,所以标准偏差更大。最后,让我们计算$ Z $的标准差。如果值没有变化,那么我们认为标准偏差为零。$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown 在我们继续之前,我需要指出的是,我忽略了通常男性比女性高的事实。一般而言,仅包含男性或仅包含女性的班级的高度差异将小于既有男性又有女性的班级。对于其他影响因素也是如此。营养良好的孩子比营养不良的孩子高。斯堪的纳维亚人比意大利人高。在设计实验的时候,统计人员需要综合考虑这些因素。我们需要通过分析来决策订购学校的桌子。对于每个年龄段的学生,有两种不同的平均值 - 一类在女生的平均高度周围,第二类均值在男生的平均高度周围。整个班级的平均值将介于两者之间。如果我们根据所有学生的平均值购买桌子,我们很可能最终得到的桌子既不适合学校的男生也不适合女生!我们不会在本书中考虑这些问题。如果您需要学习处理这些问题的技巧,请查阅标准概率相关文献。 为什么是差值的平方为什么我们用差值的* 平方(square)*来表示方差?我们可以用大量的数学计算来证明,但让我们以一种简单的方式来看待它。 下面是$ X = [3,-3,3,-3] $及其均值相对应的图表: ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown 如果我们没有计算差值的平方,那么正负号将相互抵消:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$这显然是不正确的,因为数据中的方差不止0。也许我们可以使用绝对值?我们可以通过计算看到结果是$ 12/4 = 3 $,这当然是正确的——每个值与平均值相差3。但如果我们的样本是$ Y = [6,-2,-3,1] $,那该怎么办?在这种情况下,我们得到$ 12/4 = 3 $。但是$ Y $显然比$ X $更加分散,但计算的方差却一样。如果我们在公式中使用平方,我们会得到$ Y $的方差是3.5,这就能反映它的变化更大。这不是严谨的证明。事实上,该技术的发明者卡尔弗里德里希高斯认识到这样计算有些武断。如果存在异常值,那么对差值取平方会造成带来不成比例的权重。例如,让我们看看如果我们有以下情况会发生什么: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print('Variance of X with outlier = {:6.2f}'.format(np.var(X))) print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1]))) ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown 它是否是“正确”?你来告诉我。如果没有100这个异常值,我们得到$ \sigma^2 = 2.03 $,这准确地反映了在没有异常值的情况下$ X $的变化。一个异常值掩盖了方差计算。我们是否就是想要掩盖方差计算,因为我们知道存在异常值,或者通过有效的手段处理异常值并提供接近没有异常值的估计值?再说一次,这取决于你的问题。我不会继续探讨下去, 如果你感兴趣的话,你可以看看James Berger在*贝叶斯鲁棒性(Bayesian robustness)*的领域对这个问题上所做的工作,或者是由Peter J. 
Huber撰写的关于*稳健统计(robust statistics)*的书[3]。在本书中,我们将始终使用Gauss定义的方差和标准差。从前面我们可以知道是,这些统计数据的*摘要*总是告诉我们有关数据的片面的部分。在这个例子中,高斯定义的方差并没有告诉我们数据中可能有一个很大的异常值。但是,它依旧是一个功能强大的工具,因为我们可以简洁地用少量数字来描述一个大型数据集。如果我们有10亿个数据点,我们不希望用眼睛检查图或数字列表,摘要统计为我们提供了一种有效的描述数据形状的方法。 高斯我们现在准备介绍[高斯(Gaussians)](https://en.wikipedia.org/wiki/Gaussian_function)。让我先强调下本章的目的。> 我们希望以单模态,连续的方式来表示概率,模拟现实世界如何工作,并且计算效率要高。让我们看一下高斯分布的图形,以便了解我们所谈论的内容。 ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown 该曲线是[*概率密度函数(probability density function)*](https://en.wikipedia.org/wiki/Probability_density_function)或简写为*pdf*。 它显示了随机变量取值的相对可能性。 我们可以从图表中看出,学生的身高是1.8米比1.7米的可能性大,而相对于1.4米,更有可能是身高1.9米。换句话说,许多学生的身高接近1.8米,很少有学生身高达到1.4米或2.2米。最后,注意曲线的中心位于1.8米这个平均值的地方。> 我在Supporting_Notebooks文件夹中的 * Computing_and_Plotting_PDFs *解释了如何绘制Gaussians分布等图形。。 你可以在线阅读[这里](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb)[1]。您可以把它叫做“钟形曲线”。这种曲线无处不在,因为在现实世界的条件下,许多观测值都以这种方式分布。我不会用“钟形曲线”这样的术语来指代高斯分布,因为许多概率分布具有类似钟形的曲线形状。非数学的表述可能不那么准确,因此当您在没有定义的情况下看到术语时,请务必明白。这条曲线并不是身高所独有的——大量的自然现象都表现出这种分布,包括我们在过滤问题时使用的传感器。正如我们将要看到的,它还具有我们正在寻找的所有属性——它有唯一的峰值可以作为概率,它是连续的,并且它可以高效的计算。我们很快就会发现它还有其他我们可能没想到的优秀属性。为了进一步说明,回想一下*离散贝叶斯*那一章中概率分布的形状: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown 它们不是标准的高斯曲线,但它们很相似。我们可以高斯分布来代替那一章中使用的离散概率! Nomenclature 命名在我们继续之前有一些术语需要定义。这个图表描绘了*随机变量(probability density)*的*概率密度(random variable)*,其中任何值都在($ - \infty , \infty)$之间。 这意味着什么?想象一下,我们对一段高速公路上的汽车速度进行了无数次极度确的测量。然后我们可以根据以任意给定速度经过的汽车的相对数量来绘制图形。如果平均值是120公里/小时,它可能看起来像这样: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown y轴表示*概率密度(probability density)* ——在x轴对应速度行驶的汽车的相对量。我将在下一节进一步解释。高斯模型并不完美。虽然这些图表没有表现出来,但分布的*尾部(tails)* 延伸到无限远。 *尾部*是曲线的远端,它的值最低。当然人的高度或汽车速度不可能低于零,更不用说$ -\infty $或$ \infty $。毕竟“地图不是领土”,贝叶斯滤波和统计数据也是如此。上面的高斯分布模拟了所测量的汽车速度的分布,但作为模型它必然是不完美的。模型和现实之间的差异将在这些滤波器中反复出现。高斯分布在数学的许多分支中都有使用,不是因为它可以完美地模拟现实,而是因为它比任何其他相对准确的选择更容易使用。然而,在本书中,高斯分布也无法模拟现实,这迫使我们使用更多的计算太弥补。您将听到称为*高斯分布*或*正态分布*的分布。*高斯*和*正态*在这种情况下都意味着相同的东西,并且可以互换使用。我将在本书中不同的地方将使用这两者中的任一名词,我希望你习惯同时看到两者。最后,如本段所述,通常缩短名称并谈论*高斯*或*正态*——这些都是*高斯分布*的典型简称。 高斯分布让我们来探讨高斯分布的工作方式。高斯分布是一个*连续概率分布*,用两个参数可以完全描述,即均值($ \ mu $)和方差($ \ sigma ^ 2 $)。 它被定义为:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ 是 $e^x$的符号表示方法. 如果你以前没见过,请不要被这个等式所唬住, 你不需要记住或操纵它。该函数的计算可以要使用`stats.py`中的`gaussian(x,mean,var,normed = True)`。去掉其中的常数,你可以看到它其实是一个简单的指数函数:$$f(x)\propto e^{-x^2}$$它具有类似的钟形曲线形状 ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown 让我提醒一下如何来查看函数的代码。在单元格中,键入函数名称在后面跟两个问号,然后按CTRL + ENTER。这将打开一个显示源码的弹出窗口。取消下面单元格里的注释来尝试一下。 ###Code from filterpy.stats import gaussian gaussian?? 
###Output _____no_output_____ ###Markdown 让我们绘制一个平均值为22 $(\mu = 22)$,方差为4 $(\sigma^2 = 4)$的高斯分布。 ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown 这曲线表达了什么含义呢?假设我们的温度计读数为22°C。没有一个温度计是完全准确的,因此我们认为每个读数都会略微偏离实际值。然而,一个称为[*中心极限定理(Central Limit Theorem)*](https://en.wikipedia.org/wiki/Central_limit_theorem)的定理指出,如果我们做了足够多次测量,那么测量值将是正态分布的。当我们观察这个图表时,我们可以看到温度计某次的测量值的概率与给定的实际温度22°C成比例。回想一下,高斯分布是*连续*。想象一条无限长的直线——你随机选择的一个点2的概率显然是0%,因为有无数种可能可供选择。正态分布也是如此; 在上图中,*测量值* 为2°C的概率为0%,因为这个读数可以采用无数个值。这条曲线是什么呢?这个我们称之为*概率密度函数*。任意区间在曲线下的面积就是该区间的概率。因此,例如,如果计算曲线下20到22之间的面积,则得到的面积将是温度读数介于这两个温度之间的概率。还有另一种理解它的方法。什么是岩石或海绵的*密度*?它是一种衡量在给定空间内的所具有的质量的量度。岩石密集,海绵没有那么密集。所以,如果你想知道一块岩石有多重,但没有一个秤,你可以把它的体积乘以它的密度,就能得到它的质量。实际上,大多数物体的密度都不同,因此您可以将岩石体积的局部密度进行积分。$$M = \iiint_R p(x,y,z)\, dV$$我们用*概率密度*做同样的事情。如果你想知道温度在20°C到21°C之间的可能性,你可以将上面的曲线从20到21进行积分。如你所知,曲线的积分可以得到曲线下面积。由于这是概率密度的曲线,因此密度的积分就是概率。温度恰好是22°C的概率是多少?直观地说是0。在实数域,22°C到22.00000000000017°C的几率是无穷小的。从数学上讲,如果我们从22到22,我们会得到什么? 零。回想一下岩石,岩石上单点的重量是多少?无穷小的点必然没有重量。询问单个点的重量是没有意义的,而询问连续分布中单个值的的概率也是没有意义的。两者的答案显然都是零。在实际中,我们的传感器没有无限的精度,因此22°C的读数意味着一个范围,例如22 $ \pm $ 0.1°C,我们可以通过计算从21.9到22.1的积分来得到该范围的概率。我们可以用贝叶斯术语或概率论术语来考虑这一点。作为贝叶斯,如果温度计精确的读数为22°C,那么通过曲线描述我们的认知 ——我们相信实际(系统)温度接近22°C的概率是非常高的,我们相信实际温度接近18°C概率非常低。我们常说,如果我们在22°C时对系统进行10亿次温度测量,那么测量的直方图就会像上面的曲线。如何计算曲线下的概率或面积? 你可以对高斯的方程做积分$$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$这称为*累积概率分布(cumulative probability distribution)*,通常缩写为*cdf*。我写了一个`filterpy.stats.norm_cdf`函数,它可以帮助你计算积分。例如,我们可以计算 ###Code from filterpy.stats import norm_cdf print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Cumulative probability of range 21.5 to 22.5 is 19.74% Cumulative probability of range 23.5 to 24.5 is 12.10% ###Markdown 平均值($ \mu $)看起来像是所有可能概率的平均值。由于曲线是对称的,它也是曲线的最高的地方。温度计读数为22°C,这就是我们使用的平均值。随机变量$ X $的正态分布符号是$ X \sim\ \mathcal {N}(\mu,\sigma^2) $其中$ \sim $表示*分布属于*。这意味着我可以将温度计的温度读数表示为$$\text{temp} \sim \mathcal{N}(22,4)$$这是一点非常重要的。高斯可以让我们只用两个数字就能表示无数个可能的值!使用$ \mu = 22 $和$ \sigma^2 = 4 $,我可以计算任意区间的测量值分布。有的文献会使用$ \mathcal N (\mu,\sigma)$来表示高斯分布,而不是$ \mathcal N (\mu,\sigma^2)$。这两种方式都可以,它们都是惯用的表达方法。当你看到$ \mathcal {N}(22,4) $这样的公式时,你需要知道它表示什么含义。在本书中,我通常使用$ \mathcal N(\mu,\sigma^2)$,就像本例中$ \sigma = 2 $,$ \sigma^2 = 4 $。 方差与置信度由于这样的曲线表示概率密度分布,因此要求曲线下面积始终等于1。这应该很好理解——曲线下的区域代表所有可能的发送结果,*这些可能发生的事件的概率和*是1,所以密度的总和必须为1。我们可以用一些代码来证明这一点。(如果你在数学上倾向于将高斯方程从$ - \infty $积分到$ \infty $) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown 这引出了一个重要的结论。如果方差很小,那么曲线将变窄。这是因为方差是*样本与平均值之间的差异*的度量。要使面积等于1,曲线最高点必须很高。另一方面,如果方差很大,曲线将变宽,那么它必须变低以使面积等于1。让我们以图形方式看一下。我们将使用前面提到的`filterpy.stats.gaussian`,它可以输入单个值或数组。 ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown 默认情况下,`gaussian`会归一化输出,将输出转换回概率分布。我们可以使用参数`normed`来控制它。 ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown 如果高斯的结果没有归一化,则称为*高斯函数*而不是*高斯分布*。 ###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown 上图告诉了我们什么? 
$ \sigma^2 = 0.2^2 $的高斯分布非常窄,我们认为$ x = 23 $的可能性非常高:误差在$ \pm 0.2 $ 。相比之下,$ \sigma^2 = 1^2 $的高斯分布我们也会认为$ x = 23 $,但我们不能太确定。我们认为$ x = 23 $可能性很低,我们认为$ x $可能的值是分散的——我们认为很可能是$ x = 20 $或$ x = 26 $。 $ \sigma^2 = 0.2^2 $则几乎完全排除了可能的值是$ 22 $或$ 24 $的情况,而$ \sigma^2 = 1^2 $认为它们几乎与$ 23 $的可能性是一样的。我们回忆一下前面的温度计,我们可以将这三条曲线视为三个不同温度计的读数。 $ \sigma^2 = 0.2^2 $的曲线代表一个非常精确的温度计,$ \sigma^2 = 1^2 $的曲线代表一个相当不准确的温度计。请注意高斯分布为我们提供的非常强大的属性——我们可以只用两个数字——均值和方差——来代表温度计的读数和误差。高斯分布的等价形式可以写成是$ \mathcal {N}(\mu,1 / \tau)$,其中$ \mu $表示*均值* ,$ \tau $是*精确度(precision)*。 $ 1 / \tau = \sigma^2 $; 它是方差的倒数。虽然我们在本书中没有使用这个公式,但它强调了方差是衡量数据精确程度的指标。方差越小精度越大——我们的测量越精确。相反,较大的方差会导致精度降低——我们的置信区间会在很大的范围内。你应该习惯于以这些等价的形式思考高斯分布。在贝叶斯中,高斯分布反映了我们对于测量值的*置信度(belief)*,它们表示测量的*精度(precision)*,并且它们表示测量值的*方差* 多少是。这些都是陈述相同事实的不同表述方式。我这样讲有些提前,但在接下来的章节中,我们将使用高斯来表达我们估计跟踪对象位置的置信度或我们正在使用的传感器的准确性等信息。 68-95-99.7法则现在需要对标准多讲一些。标准差是衡量数据偏离平均值的量度。对于高斯分布,68%的数据落在平均值的一个标准差($ \pm1 \sigma $)内,95%落在两个标准差($ \pm2 \sigma $)内,99.7%落在三个标准差($\pm3\sigma$)。 通常被称为[68-95-99.7法则](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule)。如果你被告知一个班级的平均考试成绩为71,标准差为9.4,你可以得出结论,如果服从正态分布,那么95%的学生得分在52.2和89.8之间(按$71 \pm (2 * 9.4)$计算)。最后,这些不是无意义的数字。如果高斯分布表述我们的位置是$ \mu = 22 $米,那么标准差的单位也是米。 因此如果$ \sigma = 0.2 $,那么意味着68%的测量范围从21.8米到22.2米。方差是标准偏差的平方,因此$ \sigma^2 =0.04 $米$^2 $。正如您在上一节中所看到的,编写$ \sigma^2 = 0.2^2 $可以看起来更有意义,因为0.2和测量数据的单位相同。下图描绘了标准差与正态分布之间的关系。 ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown 高斯分布交互实验对于那些在Jupyter notebook的读者,这里提供了一个绘制高斯分布的交互式版本。可以使用滑块修改$ \mu $和$ \sigma^2 $。 调整$ \mu $会使图形向左或向右移动,调整$ \sigma^2 $会使钟形曲线变得更宽或者更窄。 ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown 最后,如果你在网上阅读,这里有一个高斯动画。首先,平均值向右移动。然后均值以$ \mu = 5 $为中心,方差会发生变化。 高斯分布的可计算性离散贝叶斯滤波器通过乘以和加上任意的概率分布来工作。卡尔曼滤波器使用高斯而不是任意分布,但算法的其余部分保持不变。这意味着我们需要乘以并加上高斯分布。高斯分布的一个显着特性是两个独立高斯的和是另一个高斯分布!两个高斯部分的乘积虽然不是高斯分布,但是与高斯分布成比例。我们可以说两个高斯分布相乘的结果是高斯函数(在这个上下文中的调用的函数不能保证所有值的和为1)。在我们进行数学运算之前,让我们先在视觉上进行测试一下。 ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.8, var=.1) g2 = gaussian(x, mean=1.3, var=.2) plt.plot(x, g1, x, g2) g = g1 * g2 # element-wise multiplication g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output _____no_output_____ ###Markdown 在这里,我实现了两个高斯分布,g1 = $ \mathcal N(0.8,0.1)$和g2 = $ \mathcal N(1.3,0.2)$并绘制它们。然后我将它们相乘并将结果归一化。如您所见,结果*看起来*像是高斯分布。高斯分布是非线性函数。通常,如果将多个非线性方程相乘,则最终会得到不同类型的函数。 例如,两个正弦函数相乘的结果与`sin(x)`差异很大。 ###Code x = np.arange(0, 4*np.pi, 0.01) plt.plot(np.sin(1.2*x)) plt.plot(np.sin(1.2*x) * np.sin(2*x)); ###Output _____no_output_____ ###Markdown 但是两个高斯分布相乘的结果却是高斯函数。这是卡尔曼滤波器在计算上可行的关键因素。换句话说,卡尔曼滤波器使用高分布斯是*因为*它们是可计算。两个独立的高斯分布的乘积可以由下面的公式计算:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$两个高斯分布相加可以由下面的公式计算:$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$在本章的最后,我会推导出这些公式式。 但是,理解这些推导并不是十分重要。 把它们合到一起吧现在我们准备谈论如何使用高斯分布实现滤波。在下一章中,我们将使用高斯分布实现一个过滤器。在这里,我将先解释为什么我们要使用高斯。在前一章中,我们用数组表示概率分布。我们通过计算该分布的元素与另一个分布的乘积来表示每个点的测量值可能性来执行更新计算,如下所示: ###Code def normalize(p): return p / sum(p) def update(likelihood, prior): return normalize(likelihood * prior) prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 
20, 3, 2])) likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16])) posterior = update(likelihood, prior) book_plots.bar_plot(posterior) ###Output _____no_output_____ ###Markdown 换句话说,我们必须计算10次乘法才能得到这个结果。对于现实中的滤波器需要处理多维度的大矩阵,我们需要数十亿次乘法和大量内存。但这种分布看起来像高斯分布。如果我们使用高斯分布而不用数组会怎么样?我将计算后验概率的均值和方差,并将其与条形图进行对比。 ###Code xs = np.arange(0, 10, .01) def mean_var(p): x = np.arange(len(p)) mean = np.sum(p * x,dtype=float) var = np.sum((x - mean)**2 * p) return mean, var mean, var = mean_var(posterior) book_plots.bar_plot(posterior) plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r'); print('mean: %.2f' % mean, 'var: %.2f' % var) ###Output mean: 5.88 var: 1.24 ###Markdown 结果令人吃惊。我们可以用两个数字来描述整个数字分布。也许这个例子没有说服力,因为给出的数组中只有10个数字。但即使真正的问题可能有数百万个数字,但仍然只需要两个数字来描述它。接下来,回忆一下我们的滤波器实现的更新功能```pythondef update(likelihood, prior): return normalize(likelihood * prior)```如果数组包含一百万个元素,那就是一百万个乘法。但是,如果我们用高斯分布替换数组,那么我们只需要执行三个乘法和两个除法$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$ 贝叶斯定理在上一章中,我们通过推理每个时刻的信息来编写算法,我们将其表示为离散概率分布。在这个过程中我们学习了[*贝叶斯定理*](https://en.wikipedia.org/wiki/Bayes%27_theorem)。贝叶斯定理告诉我们如何计算给定先验信息的事件的概率。我们根据这种概率计算方法实现了`update()`函数:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ 事实证明这是贝叶斯定理。我紧接着给出了数学表达,但在许多方面,它淡化了这个等式中所表达的简单概念。我们这样表示:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$其中 $\| \cdot\|$ 表示规范术语。我们通过这个简单的推理来知道一条狗在走廊上的位置。然而,正如我们将要看到的,同样的等式适用于滤波问题。我们将在随后的每一章中使用这个等式。要回顾一下,*先验概率*是在我们测量(*可能值*)之前发生事件的概率,而*后验概率*是我们在合并测量出的信息之后计算出的概率。贝叶斯定理$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$被称为[*条件概率*](https://en.wikipedia.org/wiki/Conditional_probability)。也就是说,它表示*如果* $ B $发生,那么$ A $发生的概率。例如,如果昨天下雨,那么今天下雨的可能性也更大,因为雨天通常持续一天以上。我们用$ P $(今天下雨$ \mid $ 昨天下雨)表示今天下雨的可能性。我已经掩盖了一个重要的问题。在上面的代码中,我们不使用单个概率,而是使用一个概率数组——*概率分布*。我刚才给出的贝叶斯方程中使用的是概率,而不是概率分布。然而,它对于概率分布同样有效。我们用小写$p$表示概率分布$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$在上述方程中,$B$是*依据(evidence)*,$p(A)$是*先验概率*,$p(B\mid A)$是*可能性*,$p(A \mid B)$是*后验概率*。通过用相应的词替换数学术语,您就能将贝叶斯定理与更新方程联系起来。让我们根据我们的问题重写更新方程。我们将使用$x_i$表示第*i*个位置,使用$z$表示测量值。我们想知道是$P(x_i \mid z)$,也就是说,在给定测量值$z$的情况下,狗处于$x_i$的概率。那么,让我们把它代入方程,然后求解它。$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$看起来不好理解,但实际上很简单。让我们弄清楚右边的每一个术语代表什么意思。第一个值是$p(z\mid x_i)$。它表示每个位置$x_i$测量值的可能性或概率。$p(x_i)$是*先验概率*——我们在合并测量之前的置信度。我们把它们相乘。使用`update()`函数中未规范化的乘法:```pythondef update(likelihood, prior): posterior = prior * likelihood p(z|x) * p(x) return normalize(posterior)```最后一个要考虑的术语是分母$p(z)$。这是在不考虑位置的情况下获得测量值$z$的概率。它通常被称为*依据*。我们通过在代码中取$x$或`sum(belif)`来计算。这就是我们计算标准化的方法!所以,`update()`函数只不过是在计算贝叶斯定理。在其他文献经常以积分的形式给出这些方程。毕竟,积分是一个连续函数的和。所以,你可能会看到贝叶斯定理被写成如下形式:$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$这个分母通常不可能用解析法求解;当它能被求解时,数学是极其困难的。最近,英国皇家统计学会(Royal Statistics Society)发布了一篇[意见文章](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)称之为“小狗的早餐”[8]。采用贝叶斯方法的滤波教科书中充满了不含解析解的积分方程。不要被这些方程吓倒,因为我们可以通过标准化我们的后验概率来处理这个积分。我们将在**粒子过滤器**一章中学习更多处理此问题的技术。在此之前,我们要认识到,在实践中通过对其求和是实现标准化的一个途经。我想说的是,当你面对一页积分时,把它们看作是求和,并把它们与本章联系起来,困难往往就会消失。问问自己“为什么我们要把这些值加起来”,“为什么我要除以这个值”。令人惊讶的是,答案往往是显而易见的,而作者经常不会提及这种解释。你可能还没有完全明白贝叶斯定理的优点。我们要计算$p(x_i\mid z)$。也就是说,在第i步,给定一个测量值,我们的可能状态是什么。这是一个非常困难的问题。贝叶斯定理是一般定理。我们可能想知道,根据癌症测试的结果,我们患癌症的可能性,或者根据不同的传感器读数,我们下雨的可能性。这样说问题似乎无法解决。但是Bayes定理让我们可以用逆向计算$P(z\mid x_i)$,这通常很容易计算:$$p(x_i \mid Z) \propto p(Z\mid x_i)\, 
p(x_i)$$也就是说,为了计算已知某个传感器读数的前提下降雨的可能性,我们只需计算下雨时传感器读数的可能性!这是一个简单得多的问题!嗯,天气预报仍然是一个难题,但是贝叶斯使它变得容易处理。同样,正如您在离散贝叶斯章节中看到的,我们通过计算给定位置`x`下传感器读数的可能性,计算出西蒙在走廊任何给定位置的可能性。困难的问题变得容易。 全概率定理现在我们知道了`update()`函数背后的数学含义;那么`predict()`函数呢?` predict()`实现了[*全概率定理(total probability theorem)*](https://en.wikipedia.org/wiki/law_total_probability)。让我们回忆一下`predict()`计算的内容。它根据所有可能的运动事件的概率,计算出在任意给定位置的概率。让我们把它表示为一个等式。在任意位置$i$在时刻$t$的概率可以写为$P(X_i^t)$。它等于前一时刻$T-1$的概率位于 $x_j$的概率$P(X_j^{t-1})$ 乘以从$X_j$移动到$X_i$的概率之和。就是$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$这个方程就叫做*全概率定理*。引用维基百科[6]的话,“它表达了一个结果的总概率,这个结果可以通过几个不同的事件来实现”。我本可以给你这个方程并实现`predict()`,但你会不好理解这个方程是怎么起作用的。作为提醒,这里是计算这个方程的代码```pythonfor i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N result[i] += prob_dist[index] * kernel[k]``` 使用scipy.stats计算概率在本章中,我使用了来自[FilterPy](https://github.com/rlabbe/filterpy)的代码来计算和绘制高斯函数。我这样做是为了让您有机会查看代码,并理解这些函数是如何实现的。但是,正如公认的那样,python是功能齐全的(batteries included),且在`scipy.stats`模块中提供了广泛的统计功能。我们来介绍一下如何使用scipy.stats来统计数据和概率。`scipy.stats `模块中包含一系列对象,您可以使用这些对象来计算各种概率分布的属性。此模块的完整文档如下:http://doc s.scipy.org/doc/scipy/reference/stats.html 。我们将把注意力集中在实现正常分布的变量上。让我们看看通过使用`scipy.stats.norm`计算高斯分布的代码,并将其结果与FilterPy的`gaussian()`函数返回值进行比较。 ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown 调用`norm(2,3)`创建scipy所称的“凝固”分布——它创建并返回一个平均值为2、标准偏差为3的对象。然后您可以多次使用此对象来获取各种值的概率密度,例如: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor)[2]的文档列出了许多其他函数。例如,我们可以使用`rvs()`函数从分发中生成样本。 ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 5.912 -2.009 -2.718 1.266 -1.085 3.941 3.499 5.626 -0.137 1.396 4.562 2.127 8.176 1.794 1.829] ###Markdown 我们可以得到[*累积分布函数(cdf)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function),即从分布中随机抽取的值小于或等于$x$的概率。 ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown 我们可以得到分布的各种属性: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown 使用高斯分布模拟现实世界的局限性前面我提到了*中心极限定理(central limit theorem)*,它指出在某些特定条件下,任何独立随机变量的算术和都将是正态分布的,不管这些随机变量是如何分布的。这对我们很重要,因为自然界充满了非正态分布,但是当我们在大的种群上应用中心极限定理时,我们最终得到正态分布。然而,关键部分是“在特定条件下”。这些条件通常不适用于物理世界。例如,厨房磅秤的读数不可能低于零,但如果我们将测量误差表示为高斯分布,则曲线的左侧延伸到负无穷大,这意味着给出负读数的可能性很小。这是一个宽泛的话题,我不会详尽论述。让我们考虑一个小例子。我们认为像考试分数这样的东西应该是正态分布的。如果你曾经被教授给过一个“曲线上的分数”,你就会受到这个假设的影响。考试分数其实并不遵循正态分布。这是因为分布为一个不管离平均值有多远的*任意*值分配了一个非零概率分布。例如,假设你的平均值是90,标准差是13。正态分布假设一些人有很大的机会得到90,一些人有很小的机会得到40。然而,这也意味着有一个人很小的机会会得到-10或150分。它会为获-10^{300}$或$10^{32986}$的分数分配了极小的机会。高斯分布的尾部是无限长的。但对于一次考试,我们知道这是不可能的。忽略额外的学分,你的成绩不可能小于0,或超过100。让我们用正态分布来绘制这一范围的值,看看用它来代表真实的考试分数的分布的表现有多糟糕。 ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown 
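A quick numerical check of the area under this curve is shown below. This is only a sketch: it reuses `norm_cdf` from `filterpy.stats` (imported earlier in this chapter) to integrate the same $\mathcal{N}(90, 30)$ density over the plotted score range of 10 to 100.
###Code
from filterpy.stats import norm_cdf

# integrate the N(90, 30) density over the plotted range of scores
area = norm_cdf((10, 100), 90, 30)
print('area under the curve between 10 and 100 is {:.1f}%'.format(area*100))
###Output
area under the curve between 10 and 100 is 96.6%
###Markdown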
曲线下的面积不等于1,因此它不是概率分布。而真实的情况是,更多的学生比正态分布预测的分数更接近范围的上限,例如,尾部会变得“胖”。此外,测试可能无法完全区分学生技能上的细微差异,因此平均值左边的分布也可能会有些杂乱无章。传感器可以测量世界。传感器测量中的误差很少是真正的高斯分布。现在谈论这个给卡尔曼滤波器设计者所带来的困难还为时过早。卡尔曼滤波数学思想是建立在一个理想化的世界模型上的,这一点值得你铭记在心。现在,我将介绍一些代码,稍后在书中我将使用这些代码来生成分布,以模拟各种过程和传感器。此分布名为[*学生的 $t$-分布(Student's $t$-distribution)*](https://en.wikipedia.org/wiki/student%27s_-distribution)。假设我想建立一个输出中有一些白噪声的传感器模型。为了简单起见,假设信号为常数10,噪声的标准偏差为2。我们可以使用函数`numpy.random.randn()`得到一个平均值为0,标准偏差为1的随机数。我可以用以下方法模拟: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown 让我们绘制出信号的图像,看看它是什么样子的。 ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown 看起来是我所期望的。信号的中心在10左右。标准偏差为2意味着68%的测量值将在10的$\pm$2范围之内,99%将在10的$\pm$6范围之内,。现在,让我们看看用学生的$t$-分布生成的分布。我将不讨论数学,只给你它的源代码,然后用它绘制一个分布图。 ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown 从图中可以看出,虽然输出与正态分布相似,但有一些离群值与平均值(7到13)相差超过3个标准差。学生$t$分布不是传感器(例如,GPS或多普勒)运行方式的精确模型,而这本书也不是一本关于如何模拟物理系统的书。然而,它确实能产生合理的数据来测试滤波器在真实世界中的性能。在我们的模拟和测试中,我们将在书的其余部分使用类似的分布。这并不是虚无的关心。卡尔曼滤波方程假设噪声是正态分布的,如果不这样做,则执行就不会理想。关键任务的滤波器的设计者,如航天器上的滤波器,需要掌握许多关于航天器上传感器性能的理论和经验知识。例如,我在一次美国航天局任务中看到的一个演示表明,虽然理论上说他们可以使用3个标准差来区分噪音和实际的有效测量,但他们必须使用5到6个标准差。这是他们通过实验确定的。rand_student_t的代码被包含在`filterpy.stats`。你可以通过如下方式使用它:```pythonfrom filterpy.stats import rand_student_t```虽然我在这里不讨论它,但是统计学已经定义了通过概率分布与指数分布的变化来描述概率分布形状的方法。正态分布是围绕平均值对称形成的,就像钟形曲线。然而,概率分布在平均值附近可能是不对称的。这种度量称为[*偏斜(skew)*](https://en.wikipedia.org/wiki/skewness)。尾巴可以变短、变胖、变薄,或者形状不同于指数分布。这种度量方法称为[*峰度(kurtosis)*](https://en.wikipedia.org/wiki/kurtosis)。`scipy.stats`模块包含`describe`函数,该函数可以计算这些统计信息。 ###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown 让我们来检测两个正态分布,一个小一些,一个大一些: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.8942931152842175, 0.49750728125905835), mean=-0.10563915941786776, variance=0.4841165908890319, skewness=-1.8464582995970673, kurtosis=2.5452896197893757) DescribeResult(nobs=300000, minmax=(-4.772620736872989, 4.446895068081072), mean=-0.0006837046884366415, variance=0.9995353806594786, skewness=0.002331471754136653, kurtosis=0.007185223820032061) ###Markdown [Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb) Gaussian Probabilities ###Code #format the book %matplotlib inline from __future__ import division, print_function from book_format import load_style load_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. 
We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard Deviations Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get 1 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining things, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|Some sources call this the *probability function*. 
Using ordinary function notation, we would write:$$P(X{=}4) = f(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$. Some texts use $Pr$ or $Prob$ instead of $P$.Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code import numpy as np x = [1.85, 2.0, 1.7, 1.9, 1.6] print(np.mean(x)) ###Output 1.81 ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. 
For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.85, 2.0, 1.7, 1.9, 1.6} is 1.85, because 1.85 is the third element of this set after being sorted. ###Code print(np.median(x)) ###Output 1.85 ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \sum_{i=1}^n \frac{1}{n}x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter. Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X)) print(np.mean(Y)) print(np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. 
We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = E[(X - \mu)^2]$$Ignoring the squared terms for a moment, you can see that the variance is the *expected value* for how much the sample space ($X$) varies from the mean. I will explain the purpose of the squared term later. We have the formula for the expected value $E[X] = \sum\limits_{i=1}^n p_ix_i$, and we will assume that any height is equally probable, so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(np.var(X), "meters squared") ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from book_format import set_figsize, figsize from code.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. 
We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] plot_height_std(data, lw=2) print('mean = {:.3f}'.format(np.mean(data))) print('std = {:.3f}'.format(np.std(data))) ###Output mean = 1.782 std = 0.140 ###Markdown We can see by eye that roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8.We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.4f} m'.format(np.std(Y))) ###Output std of Y is 0.3899 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! We will not normally be faced with these problems in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom='off') ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. 
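We can see the cancellation numerically. The sketch below uses the same $X$ as the plot above; the raw deviations from the mean sum to zero, while the squared deviations do not.

```python
import numpy as np

X = np.array([3, -3, 3, -3])
deviations = X - np.mean(X)

print(np.sum(deviations))       # 0.0 -- the signs cancel
print(np.mean(deviations**2))   # 9.0 -- the variance
```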
Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the formula using squares we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that is is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have $X = [1,-1,1,-2,3,2,100]$. ###Code X = [1, -1, 1, -2, 3, 2, 100] print('Variance of X = {:.2f}'.format(np.var(X))) ###Output Variance of X = 1210.69 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plt.figure() ax = plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf') ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. In the chart above, a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. 
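To make the idea of "relative likelihood" concrete, we can evaluate the density from the student height plot above at a few heights, using FilterPy's `gaussian()` function with the same mean and variance. This is only a quick sketch; the exact numbers are not important, only their ratios.

```python
from filterpy.stats import gaussian

mean, var = 1.8, 0.1414**2
for height in (1.8, 1.7, 1.9, 1.4):
    # the density near the mean is large; at 1.4 m it is a tiny fraction of that
    print('pdf at {:.1f} m is {:.3f}'.format(height, gaussian(height, mean, var)))
```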
We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import code.book_plots as book_plots belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code ax = plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)') ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis.You may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. This is true, but this is a common limitation of mathematical modeling. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.```python%load -s gaussian stats.pydef gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. 
""" return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) / math.sqrt(2*math.pi*var))```We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ###Code from filterpy.stats import gaussian, norm_cdf ax = plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$') ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can "sort of" think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22 is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code print('Probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of range 21.5 to 22.5 is 19.74% Probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! 
With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.

> Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example.

The Variance and Belief

Since this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must integrate to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$.)

###Code
print(norm_cdf((-1e8, 1e8), mu=0, var=4))
###Output
1.0
###Markdown
This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.

Let's look at that graphically:

###Code
import numpy as np
import matplotlib.pyplot as plt

xs = np.arange(15, 30, 0.05)
plt.plot(xs, gaussian(xs, 23, 0.05), label='$\sigma^2$=0.05', c='b')
plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':', c='b')
plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--', c='b')
plt.legend();
###Output
_____no_output_____
###Markdown
What is this telling us? The Gaussian with $\sigma^2=0.05$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.05$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=5$ considers them nearly as likely as $23$.

If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.05$ represents a very accurate thermometer, and the curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.

An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements.
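As a quick numerical aside, the area under each of the three curves plotted above is 1 no matter how tall or flat they are, and the fraction of that area lying within one standard deviation of the mean is the same for every one of them. Here is a minimal check using `norm_cdf`; the roughly 68% figure it prints is the subject of the next section.

```python
import numpy as np
from filterpy.stats import norm_cdf

for var in (0.05, 1., 5.):
    std = np.sqrt(var)
    total = norm_cdf((-1e8, 1e8), mu=23, var=var)          # area under the whole curve
    one_std = norm_cdf((23 - std, 23 + std), mu=23, var=var)  # area within one std of the mean
    print('variance {:>4}: total area {:.2f}, within one std {:.3f}'.format(
          var, total, one_std))
```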
Belief, precision, variance: these are all different ways of stating the same fact.

I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using.

The 68-95-99.7 Rule

It is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units of meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$.

The following graph depicts the relationship between the standard deviation and the normal distribution.

###Code
from code.gaussian_internal import display_stddev_plot
display_stddev_plot()
###Output
_____no_output_____
###Markdown
Interactive Gaussians

For those who are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner.

###Code
import math
from ipywidgets import interact, interactive, fixed

set_figsize(y=3)

def plt_g(mu, variance):
    plt.figure()
    xs = np.arange(2, 8, 0.1)
    ys = gaussian(xs, mu, variance)
    plt.plot(xs, ys)
    plt.ylim((0, 1))

interact(plt_g, mu=(0., 10), variance=(.2, 1.));
###Output
_____no_output_____
###Markdown
Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified.

Computational Properties of Gaussians

A remarkable property of Gaussians is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian.

The discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. The Gaussian is a nonlinear function. Typically, if you multiply a nonlinear equation with itself you end up with a different type of equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the result of multiplying two Gaussians is yet another Gaussian curve, merely scaled by a constant. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice.
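Here is a small numerical sketch of that claim; the means and variances below are arbitrary choices of mine. The logarithm of a Gaussian is a parabola, so if the product of two Gaussian curves has a Gaussian shape (up to a scale factor), a quadratic fit to the log of the product should be essentially exact.

```python
import numpy as np
from filterpy.stats import gaussian

xs = np.arange(16, 24, 0.01)
product = gaussian(xs, 20, 5) * gaussian(xs, 21, 3)

# fit a parabola to the log of the product; the residual is essentially zero,
# which means the product has the shape of a Gaussian
fit = np.polyfit(xs, np.log(product), deg=2, full=True)
print('residual of quadratic fit:', fit[1])
```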
The product of two independent Gaussians is given by:

$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$

The sum of two Gaussians is given by

$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$

The remainder of this section is optional. I will derive the equations for the sum and product of two Gaussians. You will not need to understand this material to understand the rest of the book, so long as you accept the results.

Product of Gaussians

You can find this result by multiplying the equation for two Gaussians together and combining terms. The algebra gets messy. I will derive it using Bayes theorem. We can state the problem as: let the prior be $N(\bar\mu, \bar\sigma^2)$, and the measurement be $z \propto N(z, \sigma_z^2)$. What is the posterior $x$ given the measurement $z$?

Write the posterior as $P(x \mid z)$. Now we can use Bayes Theorem to state

$$P(x \mid z) = \frac{P(z \mid x)P(x)}{P(z)}$$

$P(z)$ is a normalizing constant, so we can create a proportionality

$$P(x \mid z) \propto P(z\mid x)P(x)$$

Now we substitute in the equations for the Gaussians, which are

$$P(z \mid x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]$$

$$P(x) = \frac{1}{\sqrt{2\pi\bar\sigma^2}}\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]$$

We can drop the leading terms, as they are constants, giving us

$$\begin{aligned}
P(x \mid z) &\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]\\
&\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big] \\
&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z-x)^2+\sigma_z^2(x-\bar\mu)^2]\Big]
\end{aligned}$$

Now we multiply out the squared terms and group in terms of the posterior $x$.

$$\begin{aligned}
P(x \mid z) &\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z^2 -2xz + x^2) + \sigma_z^2(x^2 - 2x\bar\mu+\bar\mu^2)]\Big ] \\
&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z) + (\bar\sigma^2z^2+\sigma_z^2\bar\mu^2)]\Big ]
\end{aligned}$$

The last parentheses do not contain the posterior $x$, so they can be treated as a constant and discarded.

$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z)}{\sigma_z^2\bar\sigma^2}\Big ]$$

Divide numerator and denominator by $\bar\sigma^2+\sigma_z^2$ to get

$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2-2x(\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$

Proportionality allows us to create or delete constants at will, so we can complete the square and factor this into

$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{(x-\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})^2}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$

A Gaussian is

$$N(\mu,\, \sigma^2) \propto \exp\Big [-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\Big ]$$

So we can see that $P(x \mid z)$ has a mean of

$$\mu_\mathtt{posterior} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$

and a variance of

$$\sigma^2_\mathtt{posterior} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}$$

I've dropped the constants, and so the result is not a normal, but proportional to one. Bayes theorem normalizes with the $P(z)$ divisor, ensuring that the result is normal.
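It can help to see the result as code. The helper below is my own sketch, not a FilterPy function; it simply evaluates the posterior mean and variance derived above.

```python
def gaussian_multiply(mu1, var1, mu2, var2):
    """Multiply N(mu1, var1) by N(mu2, var2) and renormalize.
    Returns the (mean, variance) of the resulting Gaussian."""
    mean = (var1*mu2 + var2*mu1) / (var1 + var2)
    variance = (var1 * var2) / (var1 + var2)
    return mean, variance

# multiplying two Gaussians with equal variance puts the result halfway
# between them, with half the variance
print(gaussian_multiply(10., 1., 11., 1.))   # (10.5, 0.5)
```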
This normalization is performed in the update step of our filters, ensuring the filter estimate is Gaussian.

$$\mathcal N_1 = \| \mathcal N_2\cdot \mathcal N_3\|$$

Sum of Gaussians

The sum of two Gaussians is given by

$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$

There are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities. To find the density function of the sum of two Gaussian random variables we convolve the density functions of each. Densities are continuous functions, so the convolution is an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent we can compute this with

$$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$$

This is the equation for a convolution. Now we just do some math:

$$\begin{aligned}
p(x) &= \int\limits_{-\infty}^\infty f_2(x-x_1)f_1(x_1)\, dx_1 \\
&= \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_z}\exp\left[-\frac{(x - x_1 - \mu_z)^2}{2\sigma^2_z}\right] \frac{1}{\sqrt{2\pi}\sigma_p}\exp\left[-\frac{(x_1 - \mu_p)^2}{2\sigma^2_p}\right] \, dx_1 \\
&= \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right] \frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(x_1 - \frac{\sigma_p^2(x-\mu_z) + \sigma_z^2\mu_p}{\sigma_p^2 + \sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dx_1 \\
&= \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right] \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(x_1 - \frac{\sigma_p^2(x-\mu_z) + \sigma_z^2\mu_p}{\sigma_p^2 + \sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dx_1
\end{aligned}$$

The expression inside the integral is a normal distribution over $x_1$. A normal distribution integrates to one, hence the integral is one. This gives us

$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]$$

This is in the form of a normal, where

$$\begin{gathered}\mu_x = \mu_p + \mu_z \\\sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square\end{gathered}$$

Computing Probabilities with scipy.stats

In this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.

The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the `norm` variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy.
###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.131146572034 0.131146572034 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 1.441 -1.375 1.349 0.612 3.477 4.749 1.203 -2.042 7.189 -0.289 0.525 4.164 -0.256 -0.485 -0.805] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Fat TailsEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. The resulting distributions are called *fat tailed*. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an infinitesimal chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution. 
###Code
xs = np.arange(10, 100, 0.05)
ys = [gaussian(x, 90, 30) for x in xs]
plt.plot(xs, ys, label='$\sigma^2$=30')
plt.xlim((0, 120))
plt.ylim(0, 0.09);
###Output
_____no_output_____
###Markdown
The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a [*fat tail distribution*](https://en.wikipedia.org/wiki/Fat-tailed_distribution). Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with:

###Code
from numpy.random import randn

def sense():
    return 10 + randn()*2
###Output
_____no_output_____
###Markdown
Let's plot that signal and see what it looks like.

###Code
zs = [sense() for i in range(5000)]
plt.plot(zs, lw=1)
###Output
_____no_output_____
###Markdown
That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99.7% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a fat tailed distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it.

###Code
import random
import math

def rand_student_t(df, mu=0, std=1):
    """return random number distributed by Student's t
    distribution with `df` degrees of freedom with the
    specified mean and standard deviation.
    """
    x = random.gauss(0, std)
    y = 2.0*random.gammavariate(0.5*df, 2.0)
    return x / (math.sqrt(y / df)) + mu

def sense_t():
    return 10 + rand_student_t(7)*2

zs = [sense_t() for i in range(5000)]
plt.plot(zs, lw=1)
###Output
_____no_output_____
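One way to see the difference between the two sensors numerically is to count how many readings land far from the signal. The sketch below reuses the `sense()` and `sense_t()` functions defined above; the counts will vary from run to run, but the Student's $t$ version reliably produces many more extreme readings.

```python
import numpy as np

gaussian_zs = np.array([sense() for i in range(5000)])
student_zs = np.array([sense_t() for i in range(5000)])

# count readings more than 6 units (3 sigma for the Gaussian sensor) from the signal of 10
print('gaussian noise: ', np.sum(np.abs(gaussian_zs - 10) > 6))
print('student-t noise:', np.sum(np.abs(student_zs - 10) > 6))
```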
For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. As you might guess from the chapter name, Gaussian distributions provide all of these features. Mean, Variance, and Standard Deviations Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get 1 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). *Random* does not mean the process is nondeterministic, only that we lack information. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining things, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. In later chapters we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. 
For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|Some sources call this the *probability function*. Using ordinary function notation, we would write:$$P(X{=}4) = f(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$. Some texts use $Pr$ or $Prob$ instead of $P$.Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int P(X{=}u) \,du= 1$$for continuous distributions. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we will want to know the *average* height of the students. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code import numpy as np x = [1.85, 2.0, 1.7, 1.9, 1.6] print(np.mean(x)) ###Output 1.81 ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than te set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. 
For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.85, 2.0, 1.7, 1.9, 1.6} is 1.85, because 1.85 is the third element of this set after being sorted. ###Code print(np.median(x)) ###Output 1.85 ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \sum_{i=1}^n \frac{1}{n}x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter. Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X)) print(np.mean(Y)) print(np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. 
We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = E[(X - \mu)^2]$$Ignoring the squared terms for a moment, you can see that the variance is the *expected value* for how much the sample space ($X$) varies from the mean (squared, of course). We have the formula for the expected value $E[X] = \sum\limits_{i=1}^n p_ix_i$, and we will assume that any height is equally probable, so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(np.var(X), "meters squared") ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from book_format import set_figsize, figsize from code.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. 
We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] plot_height_std(data, lw=2) print('mean = {:.3f}'.format(np.mean(data))) print('std = {:.3f}'.format(np.std(data))) ###Output mean = 1.808 std = 0.142 ###Markdown We can see by eye that roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8.We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.4f} m'.format(np.std(Y))) ###Output std of Y is 0.3899 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! It's too early to understand why, but we will not normally be faced with these problems in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. 
Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom='off') ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the correct formula we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that is is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have $X = [1,-1,1,-2,3,2,100]$. ###Code X = [1, -1, 1, -2, 3, 2, 100] print('Variance of X = {:.2f}'.format(np.var(X))) ###Output Variance of X = 1210.69 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the computation. I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plt.figure() ax = plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf') ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. In the chart above, a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.1 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. In fact, this is the curve for the student heights given earlier. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. 
Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter. They were not perfect Gaussian curves, but they were similar, as in the plot below. We will be using Gaussians to replace the discrete probabilities used in that chapter! ###Code import code.book_plots as book_plots belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code ax = plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)') ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis.You may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. This is true, but this is a common limitation of mathematical modeling. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will see these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. 
The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.```python%load -s gaussian stats.pydef gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. """ return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) / math.sqrt(2*math.pi*var))```We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ###Code from filterpy.stats import gaussian, norm_cdf ax = plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$') ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can "sort of" think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22 is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code print('Probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of range 21.5 to 22.5 is 19.74% Probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. 
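If you want to convince yourself that `norm_cdf` really is just integrating the density, here is a quick check of the 21.5 to 22.5 range using `scipy.integrate.quad`. The use of `quad` here is my own illustration, not part of FilterPy; both lines should print approximately 0.1974:

```python
from scipy.integrate import quad
from filterpy.stats import gaussian, norm_cdf

# numerically integrate the N(22, 4) density over [21.5, 22.5]
area, _ = quad(gaussian, 21.5, 22.5, args=(22, 4))
print('numerical integration: {:.4f}'.format(area))
print('norm_cdf:              {:.4f}'.format(norm_cdf((21.5, 22.5), 22, 4)))
```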
The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.> Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine; they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically: ###Code import numpy as np import matplotlib.pyplot as plt xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.05), label='$\sigma^2$=0.05', c='b') plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':', c='b') plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--', c='b') plt.legend() ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.05$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.05$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=5$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.05$ represents a very accurate thermometer, and the curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is.
A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from code.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those who are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, interactive, fixed def plt_g(mu, variance): plt.figure() xs = np.arange(2, 8, 0.1) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim((0, 1)) interact(plt_g, mu=(0., 10), variance=(.2, 1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansA remarkable property of Gaussians is that the sum of two independent Gaussian random variables is another Gaussian! The product of two Gaussians is not a proper density, but it is proportional to a Gaussian.The discrete Bayes filter works by multiplying and adding probabilities. I'm getting ahead of myself, but the Kalman filter uses Gaussians instead of discrete probabilities while keeping the rest of the algorithm the same. This means we will need to multiply and add Gaussians. The Gaussian is a nonlinear function, and typically if you multiply a nonlinear equation with itself you end up with a different equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`.
But the result of multiplying two Gaussians is yet another Gaussian-shaped curve; it only needs to be renormalized to become a proper density. This is a fundamental property, and a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The remainder of this section is optional. I will derive the equations for the sum and product of two Gaussians. You will not need to understand this material to understand the rest of the book, so long as you accept the results. Product of GaussiansThe product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$You can find this result by multiplying the equation for two Gaussians together and combining terms. The algebra gets messy. I will derive it using Bayes theorem. We can state the problem as: let the prior be $N(\bar\mu, \bar\sigma^2)$, and let the measurement $z$ have the likelihood $N(z, \sigma_z^2)$. What is the posterior $x$ given the measurement $z$?Write the posterior as $P(x \mid z)$. Now we can use Bayes Theorem to state$$P(x \mid z) = \frac{P(z \mid x)P(x)}{P(z)}$$$P(z)$ is a normalizing constant, so we can create a proportionality$$P(x \mid z) \propto P(z\mid x)P(x)$$Now we substitute in the equations for the Gaussians, which are$$P(z \mid x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]$$$$P(x) = \frac{1}{\sqrt{2\pi\bar\sigma^2}}\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]$$We can drop the leading terms, as they are constants, giving us$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]\\&\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z-x)^2+\sigma_z^2(x-\bar\mu)^2]\Big]\end{aligned}$$Now we multiply out the squared terms and group in terms of the posterior $x$.$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z^2 -2xz + x^2) + \sigma_z^2(x^2 - 2x\bar\mu+\bar\mu^2)]\Big ] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z) + (\bar\sigma^2z^2+\sigma_z^2\bar\mu^2)]\Big ]\end{aligned}$$The last parenthesized term does not contain the posterior variable $x$, so it can be treated as a constant and discarded.$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z)}{\sigma_z^2\bar\sigma^2}\Big ]$$Divide numerator and denominator by $\bar\sigma^2+\sigma_z^2$ to get$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2-2x(\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$Proportionality allows us to create or delete constants at will, so we can complete the square and factor this into$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{(x-\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})^2}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$A Gaussian is$$N(\mu,\, \sigma^2) \propto \exp\Big [-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\Big ]$$So we can see that $P(x \mid z)$ has a mean of$$\mu_\mathtt{posterior} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$and a variance of$$\sigma^2_\mathtt{posterior} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}$$I've dropped the constants, and so the result is not a normal, but
proportional to one. Bayes theorem normalizes with the $P(z)$ divisor, ensuring that the result is normal. We normalize in the update step of our filters, ensuring the filter estimate is Gaussian.$$\mathcal N_1 = \| \mathcal N_2\cdot \mathcal N_3\|$$ Sum of GaussiansThe sum of two independent Gaussian random variables is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$There are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities. To find the density function of the sum of two independent Gaussian random variables we convolve their density functions. They are continuous functions, so the convolution is an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent, the density of their sum $x = p + z$ is$$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$$This is the equation for a convolution. Now we just do some math. Substituting the two Gaussian densities gives$$p(x) = \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_p}\exp\left[-\frac{(x - z - \mu_p)^2}{2\sigma^2_p}\right]\frac{1}{\sqrt{2\pi}\sigma_z}\exp\left[-\frac{(z - \mu_z)^2}{2\sigma^2_z}\right] \, dz$$Completing the square in $z$ lets us factor this as$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right] \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(z - \frac{\sigma_z^2(x-\mu_p) + \sigma_p^2\mu_z}{\sigma_p^2+\sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dz$$The expression inside the integral is a normal distribution in $z$. The integral of a normal distribution is one, hence the integral evaluates to one. This gives us$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]$$This is in the form of a normal, where$$\begin{gathered}\mu_x = \mu_p + \mu_z \\\sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square\end{gathered}$$ Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy.
###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.131146572034 0.131146572034 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [-0.308 1.55 4.163 0.148 -2.413 3.39 6.311 3.682 1.681 3.063 1.402 0.148 -0.208 5.415 -4.221] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Fat TailsEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. The resulting distributions are called *fat tailed*. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an infinitesimal chance of getting a score of $-10^{300}$ or $10^{32986}$. The *tails* of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution. 
###Code xs = np.arange(10,100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim((0,120)) plt.ylim(0, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish incredibly minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a [*fat tail distribution*](https://en.wikipedia.org/wiki/Fat-tailed_distribution). Kalman filters use sensors to measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1) ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a fat tailed distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1) ###Output _____no_output_____ ###Markdown [Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb) Gaussian Probabilities ###Code #format the book %matplotlib inline %load_ext autoreload %autoreload 2 from __future__ import division, print_function import sys sys.path.insert(0,'./code') from book_format import load_style load_style() ###Output _____no_output_____ ###Markdown Introduction The last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. 
That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it *might be* at (1.65, -78.01, 2100.45) or it *might be* at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. So we desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is very computationally efficient to calculate. As you might guess from the chapter name, Gaussian distributions provide all of these features. Mean, Variance, and Standard Deviations Random VariablesTo understand Gaussians we first need to understand a few simple mathematical computations. We start with a **random variable** x. A random variable is a variable whose value depends on some random process. If you flip a coin, you could have a variable $c$, and assign it the value 1 for heads, and 0 for tails. That is a random value. It can be the height of the students in a class. That may not seem random to you, but chances are you cannot predict the height of the student Reem Nassar because her height is not deterministically determined. For a specific classroom perhaps the heights are$$x= [1.8, 2.0, 1.7, 1.9, 1.6]$$Another example of a random variable would be the result of rolling a die. A less obvious example would be the position of an aircraft - the aircraft does deterministically respond to the control inputs, but it is also buffeted by random winds and travels through randomly distributed pressure gradients.The coin toss and die roll are examples of **discrete random variables**. That is, the outcome of any given event comes from a discrete set of values. The roll of a six sided die can never produce a value of 7 or 3.24, for example. In contrast, the student heights are continuous; they can take on any value within biological limits. For example, heights of 1.7, 1.71, 1.711, 1.7111, 1.71111,.... are all possible. Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. The Mean of a Random VariableWe want to know the **average** height of the students. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the **mean**. We compute the mean by summing the values and dividing by the number of values. In this case we have$$\mathtt{mean} = (1.8 + 2.0 + 1.7 + 1.9 + 1.6)/5 = 1.8$$In statistics we use the symbol $\mu$ (mu) to denote the mean, so we could write $$\mu_{\mathtt{height}} = 1.8$$We can formalize this computation with the equation$$ \mu_{\mathtt{height}} = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code import numpy as np x = [1.8, 2.0, 1.7, 1.9, 1.6] print(np.mean(x)) ###Output 1.8 ###Markdown Standard Deviation of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. 
For example, suppose a second class has students with these heights:$$y = [2.2, 1.5, 2.3, 1.7, 1.3]$$ ###Code y = [2.2, 1.5, 2.3, 1.7, 1.3] print(np.mean(y)) ###Output 1.8 ###Markdown the mean of these heights is also 1.8 meters, but notice that there is a much greater amount of variation in the heights in this class. Suppose a third class has heights$$ z = [1.8, 1.8, 1.8, 1.8, 1.8]$$In this third class the average height is again 1.8 meters, but here there is no variation in the height between students. All three classes have the same mean height of 1.8 meters. So the mean tells us something about the data, but it does not tell the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of **standard deviation** and **variance**:The **standard deviation** is defined as the square root of the average of the squared differences from the mean.That's a mouthful; as an equation this is stated as$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^N(x_i - \mu)^2}$$where $\sigma$ is the notation for the standard deviation and $\mu$ is the mean.If this is the first time you have seen this it may not have a lot of meaning for you. But let's work through that with the data from the three classes to be sure we understand the formula. We subtract the mean of x from each value of x, square it, take the average of those, and then take the square root of the result. The mean of $[1.8, 2.0, 1.7, 1.9, 1.6]$ is 1.8, so we compute the standard deviation as$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print(np.std(x)) ###Output 0.141421356237 ###Markdown What does the standard deviation *signify*? It tells us "how much" the heights vary amongst themseves. *How much* is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things in nature, including the height of people, 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can look at this in a plot: ###Code from book_format import set_figsize, figsize from gaussian_internal import plot_height_std import matplotlib.pyplot as plt with figsize(y=2): plot_height_std([1.8, 2.0, 1.7, 1.9, 1.6]) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. 
###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] with figsize(y=3.): plot_height_std(x, lw=2) print('mean = {:.3f}'.format(np.mean(x))) print('std = {:.3f}'.format(np.std(x))) ###Output _____no_output_____ ###Markdown We can see by eye that roughly 68% of the heights lie within $1\sigma$ of the mean 1.8. We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of y is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of y is {:.4f} m'.format(np.std(y))) ###Output std of y is 0.3899 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for y, and the standard deviation is larger.Finally, let's compute the standard deviation for $$ z = [1.8, 1.8, 1.8, 1.8, 1.8]$$There is no variation in the values, so we would expect the standard deviation to be zero. We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_Z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std([1.8, 1.8, 1.8, 1.8, 1.8])) ###Output 0.0 ###Markdown Variance of a Random VariableFinally, the *variance* is defined as the square of the standard deviation. Some texts define this in the opposite way, which gives the definitions* **The variance is the average of the squared differences from the mean.*** **The standard deviation is the square root of the variance.**Both ways of thinking about it are equivalent. We use the notation $\sigma^2$ for the variance, and the equation for the variance is$$\sigma^2 = \frac{1}{N}\sum_{i=1}^N(x_i - \mu)^2$$To make sure we understand this let's compute the variance for $x$:$$ \begin{aligned}\sigma_x^2 &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0^2 + 0.2^2 + (-0.1)^2 + 0.1^2 + (-0.2)^2}{5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\&=0.02\end{aligned}$$We previously computed $\sigma_x=0.1414$, and indeed the square of 0.1414 is 0.02. We can verify this computation with the NumPy function `numpy.var`: ###Code print('VAR(x) = {:.2f} m'.format(np.var(x))) ###Output VAR(x) = 0.02 m ###Markdown Many texts alternatively use *VAR(x)* to denote the variance of x. Why the Square of the DifferencesAs an aside, why are we taking the *square* of the difference? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of x plotted against the mean for $x=[3,-3,3,-3]$ ###Code with figsize(y=2.5): x = [3, -3, 3, -3] m = np.average(x) for i in range(len(x)): plt.plot([i ,i], [m, x[i]], color='k') plt.axhline(m) plt.xlim(-1, len(x)) plt.tick_params(axis='x', labelbottom='off') ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct - each value varies by 3 from the mean. But what if we change $x=[6, -2, -3, 1]$? In this case we get $12/4=3$. $x$ is clearly more spread out than in the last example, but we get the same variance, so this cannot be correct. 
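We can check this claim numerically. The following loop (a sketch of my own, not from the text) computes the mean absolute deviation and the variance for both data sets; the absolute value measure cannot tell them apart, but the variance can:

```python
import numpy as np

for data in ([3, -3, 3, -3], [6, -2, -3, 1]):
    d = np.asarray(data, dtype=float)
    mad = np.mean(np.abs(d - d.mean()))   # mean absolute deviation
    print('data={} MAD={:.2f} VAR={:.2f}'.format(data, mad, np.var(d)))
```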
If we use the correct formula we get a variance of 12.25 (a standard deviation of 3.5), versus 9 for the previous example, which reflects the larger variation in $x$.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have $x = [1,-1,1,-2,3,2,100]$. ###Code x = [1, -1, 1, -2, 3, 2, 100] print('Variance of x = {:.2f}'.format(np.var(x))) ###Output Variance of x = 1210.69 ###Markdown Is this *correct*? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $x$ is varying absent the outlier. The one outlier swamps the computation. I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. For Kalman filters we can prove that this computation produces optimal results within certain limits. More about that soon. Gaussians We are now ready to learn about Gaussians. Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is very computationally efficient to calculate. Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height') ###Output _____no_output_____ ###Markdown > I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can also read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].Probably this is immediately recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. In fact, this is the curve for the student heights given earlier. I will not use the term 'bell curve' to refer to a Gaussian because several probability distributions have a similar bell curve shape. Non-mathematical sources might not be so precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights - a vast number of natural phenomena exhibit this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for - it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter. They were not perfect Gaussian curves, but they were similar, as in the plot below. We will be using Gaussians to replace the discrete probabilities used in that chapter! Note, however, that comparing PDF curves by eye only goes so far; humans are poor at judging areas under a curve, so when a precise comparison matters the CDF is usually the better plot to read.
###Code import book_plots belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] with figsize(y=1.5): book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown Nomenclature A bit of nomenclature before we continue - this chart depicts the *probability density* of of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going at any given speed. If the average was 120 kph, it might look like this: ###Code with figsize(y=3.): plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)') ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* - the relative amount of cars that are going the speed at the corresponding x-axis.You may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $-\infty$. This is true, but this is a common limitation of mathematical modeling. "The map is not the territory" is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above somewhat closely models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other choice. Even in this book Gaussians will fail to model reality, forcing us to computationally expensive alternative. You will see these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, so I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* - these are both typical shortcut names for the *Gaussian distribution*. Gaussian Distributions So let us explore how Gaussians work. A Gaussian is a **continuous probability distribution** that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 }\big ]$$$\exp[x]$ is notation for $e^x$; we avoid using superscripts in print so that the fonts are larger and more readable. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.```python%load -s gaussian stats.pydef gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. """ return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) / math.sqrt(2*math.pi*var))``` We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. 
###Code from filterpy.stats import gaussian, norm_cdf plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$') ###Output _____no_output_____ ###Markdown So what does this curve *mean*? Assume for a moment that we have a thermometer, which reads 22$\,^{\circ}C$. No thermometer is perfectly accurate, and so we normally expect that thermometer will read slightly plus or minus that temperature each time we read it. However, a theorem called **Central Limit Theorem** states that if we make many measurements that the measurements will be normally distributed. So, when we look at this chart we can *sort of* think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22$^{\circ}C$. Maybe the probability of it reading 22$\,^{\circ}C$ is 20%? That is not quite accurate mathematically. Recall that we said that the distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at, say, 2.0. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 22$^{\circ}C$ is 0% because there are an infinite number of values the reading can take.So what then is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22$\,^{\circ}C$, then our belief is described by the curve - our belief that the actual (system) temperature is near 22 is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22$\,^{\circ}C$, then a histogram of the measurements would look like this curve. So how do you compute the probability, or area under the curve? Well, you integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. So, for example, we can compute ###Code print('Probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of range 21.5 to 22.5 is 19.74% Probability of range 23.5 to 24.5 is 12.10% ###Markdown So the mean ($\mu$) is what it sounds like - the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads $22^{\circ}C$, so that is what we used for the mean. > *Important*: I will repeat what I wrote at the top of this section: "A Gaussian...is completely described with two parameters"The standard notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an **extremely important** result. Gaussians allow me to capture an infinite number of possible values with only two numbers! 
With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements for over any range. The Variance Since this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear - the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically: ###Code import numpy as np import matplotlib.pyplot as plt xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2), label='$\sigma^2$=0.2') plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':') plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown So what is this telling us? The Gaussian with $\sigma^2=0.2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. Our believe that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out - we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2$ has almost completely eliminated $22$ or $24$ as possible value - their probability is almost $0\%$, whereas $\sigma^2=5$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2$ represents a very accurate thermometer, and curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us - we can entirely represent both the reading and the error of a thermometer with only two numbers - the mean and the variance.It is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($1\sigma$) of the mean, 95% falls within two standard deviations ($2\sigma$), and 99.7% within three ($3\sigma$). This is often called the 68-95-99.7 rule. So if you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, so $\sigma^2 = .04$ meters$^2$. The following graph depicts the relationship between the standard deviation and the normal distribution. 
###Code from gaussian_internal import display_stddev_plot with figsize(y=3): display_stddev_plot() ###Output _____no_output_____ ###Markdown > An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $tau$ the *precision*. Here $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision - our measurement is very precise. Conversely, a large variance yields low precision - our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. For a Bayesian Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact. Interactive Gaussians For those that are reading this in IPython Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from IPython.html.widgets import interact, interactive, fixed from IPython.html.widgets import FloatSliderWidget set_figsize(y=3) def plt_g(mu,variance): xs = np.arange(2, 8, 0.1) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim((0, 1)) interact (plt_g, mu=(0, 10), variance=FloatSliderWidget(value=0.6, min=0.2, max=4)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this in an IPython Notebook, here is an animation of a Gaussian. First, the mean is being shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of the Gaussian Recall how our discrete Bayesian filter worked. We had a vector implemented as a NumPy array representing our belief at a certain moment in time. When we integrated another measurement into our belief using the `update()` function we had to multiply probabilities together, and when we performed the motion step using the `predict()` function we had to shift and add probabilities. I've promised you that the Kalman filter uses essentially the same process, and that it uses Gaussians instead of histograms, so you might reasonable expect that we will be multiplying, adding, and shifting Gaussians in the Kalman filter.A typical textbook would directly launch into a multi-page proof of the behavior of Gaussians under these operations, but I don't see the value in that right now. I think the math will be much more intuitive and clear if we just start developing a Kalman filter using Gaussians. I will provide the equations for multiplying and shifting Gaussians at the appropriate time. You will then be able to develop a physical intuition for what these operations do, rather than be forced to digest a lot of fairly abstract math.The key point, which I will only assert for now, is that all the operations are very simple, and that they preserve the properties of the Gaussian. This is somewhat remarkable, in that the Gaussian is a nonlinear function, and typically if you multiply a nonlinear equation with itself you end up with a different equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the result of multiplying two Gaussians is yet another Gaussian. 
This is a fundamental property, and a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are so computationally nice. Computing Probabilities with scipy.stats In this chapter I have used custom code from FilterPy for computing Gaussians, plotting, and so on. I chose to do that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. I find the performance of some of the functions rather slow (the `scipy.stats` documentation contains a warning to this effect), but this is offset by the fact that this is standard code available to everyone, and it is well tested. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://http://docs.scipy.org/doc/scipy/reference/stats.html. However, we will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.131146572034 0.131146572034 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown If we look at the documentation for `scipy.stats.norm` here[2] we see that there are many other functions that norm provides.For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ -1.67 1.966 2.794 2.159 2.462 -0.012 12.025 6.336 3.566 -1.321 -1.545 2.25 4.888 2.674 1.885] ###Markdown We can get the *cumulative distribution function (CDF)*, which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown There are many other functions available, and if you are interested I urge you to peruse the documentation. Sometimes the documentation is terse, but with a bit of googling you can find out what a function does and some examples of how to use it. Most of this functionality is not of immediate interest to the book, so I will leave the topic in your hands to explore. The SciPy tutorial [3] is quite approachable, and I suggest starting there. 
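As one last `scipy.stats` example, the `cdf()` of a frozen distribution makes it easy to verify the 68-95-99.7 rule stated earlier. This is a quick sketch of my own; it re-creates the `n23` object from above so the cell stands alone:

```python
from scipy.stats import norm

n23 = norm(2, 3)   # mean 2, standard deviation 3
for k in (1, 2, 3):
    # probability mass within k standard deviations of the mean
    p = n23.cdf(2 + k*3) - n23.cdf(2 - k*3)
    print('P(within {} std dev) = {:.4f}'.format(k, p))
```

The three printed values should be approximately 0.6827, 0.9545, and 0.9973, matching the rule.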
Fat Tails Earlier I spoke very briefly about the **central limit theorem**, which states that under certain conditions the arithmetic sum of **any** independent random variables will be normally distributed, regardless of how the random variables are distributed. This is extremely important for (at least) two reasons. First, nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. Second, Gaussians are mathematically *tractable*. We will see this more as we develop the Kalman filter theory, but there are very nice closed form solutions for operations on Gaussians that allow us to use them analytically.However, a key part of the proof is "under certain conditions". These conditions often do not hold for the physical world. The resulting distributions are called **fat tailed**. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor 'grade on a curve' you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assumes that there is a infinitesimal chance of getting a score of -1e300, or 4e50. The *tails* of a Gaussian distribution are infinite because Gaussians are continuous functions.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this using a normal distribution. ###Code xs = np.arange(10,100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim((0,120)) plt.ylim(0, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes 'fat'. Also, the test is probably not able to perfectly distinguish incredibly minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a *fat tail distribution*. Kalman filters use sensors to measure the world. The errors in sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on a somewhat idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the student's t distribution. Let's say I want to model a sensor that has some noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. 
We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. So I could simulate this sensor with ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a fat tailed distribution. There are many choices, I will use the Student's T distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown [Table of Contents](./table_of_contents.ipynb) Probabilities, Gaussians, and Bayes' Theorem ###Code from __future__ import division, print_function %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. Mean, Variance, and Standard DeviationsMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. 
This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|We denote this distribution with a lower case p: $p(x)$. Using ordinary function notation, we would write:$$P(X{=}4) = p(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observent. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. 
Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'. The probabilities for all values of a *discrete random variable* are known as the *discrete probability distribution*, and the probabilities for all values of a *continuous random variable* are known as the *continuous probability distribution*. To be a probability distribution the probability of each value $x_i$ must satisfy $p(x_i) \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formalize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u p(u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random Variable Given a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency then the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.
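If you ever do need the modes of a data set, the Python standard library makes it a short computation. This is just an illustrative sketch (the book itself does not use `collections.Counter` anywhere); it returns every value tied for the highest count:

```python
from collections import Counter

def modes(data):
    """Return all values that occur with the highest frequency."""
    counts = Counter(data)
    highest = max(counts.values())
    return [value for value, count in counts.items() if count == highest]

print(modes([1, 2, 2, 2, 3, 4, 4, 4]))  # [2, 4] -> multimodal
print(modes([5, 7, 7, 13]))             # [7]    -> unimodal
```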
Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together. NumPy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true. ###Code np.median(x) ###Output _____no_output_____ ###Markdown Expected Value of a Random Variable The [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has a 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we replace the sum with an integral, like so$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$where $f(x)$ is the probability density function of $x$. We won't be using this equation yet, but we will be using it in the next chapter.We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically. ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact value requires an infinite sample size. Exercise What is the expected value of a die roll? Solution Each side is equally likely, so each has a probability of 1/6.
Hence$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ ExerciseGiven the uniform continuous distribution$$f(x) = \frac{1}{b - a}$$compute the expected value for $a=0$ and $b=20$. Solution$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print("{:.2f} meters squared".format(np.var(X))) ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. 
In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print('mean = {:.3f}'.format(mean)) print('std = {:.3f}'.format(std)) ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. 
This is true for other factors as well. Well-nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school! We will not consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the Differences Why are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we also get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same result. If we use the formula with squares we get a variance of 12.25 for $Y$ versus 9 for $X$ (standard deviations of 3.5 and 3), which reflects $Y$'s larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print('Variance of X with outlier = {:6.2f}'.format(np.var(X))) print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1]))) ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem.I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [4]. In this book we will always use variance and standard deviation as defined by Gauss.The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way.
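To make that trade-off concrete, here is a small sketch comparing the variance with one simple robust measure of spread, the median absolute deviation (MAD). This is purely illustrative and is not something the book uses elsewhere; the rest of the book sticks with the variance and standard deviation as defined by Gauss:

```python
import numpy as np

X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100]

def mad(data):
    """Median absolute deviation: a robust measure of spread."""
    data = np.asarray(data)
    return np.median(np.abs(data - np.median(data)))

print('variance with outlier   :', np.var(X))        # dominated by the single outlier
print('variance without outlier:', np.var(X[:-1]))
print('MAD with outlier        :', mad(X))            # stays close to the outlier-free spread
print('MAD without outlier     :', mad(X[:-1]))
```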
Gaussians We are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. We can tell from the chart that a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 m. Finally, notice that the curve is centered over the mean of 1.8 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast number of natural phenomena exhibit this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! Nomenclature A bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between $(-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed.
If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. Shorn of the constants, you can see it is a simple exponential: $$f(x)\propto e^{-x^2}$$which has the familiar bell curve shape ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now. ###Code from filterpy.stats import gaussian #gaussian?? ###Output _____no_output_____ ###Markdown Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. 
When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.$$M = \iiint_R p(x,y,z)\, dV$$We do the same with *probability density*. If you want to know the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C is infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. Thinking back to the rock, what is the weight of an single point on the rock? An infinitesimal point must have no weight. It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$This is called the *cumulative probability distribution*, commonly abbreviated *cdf*.I wrote `filterpy.stats.norm_cdf` which computes the integral for you. 
For example, we can compute ###Code from filterpy.stats import norm_cdf print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Cumulative probability of range 21.5 to 22.5 is 19.74% Cumulative probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values. ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument`normed` to control this. ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*. ###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. 
It is saying that we believe $x=23$, and that we are very sure about that: within $\pm 0.2$ std. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.The following graph depicts the relationship between the standard deviation and the normal distribution. 
###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive Gaussians For those who are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of Gaussians The discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. A remarkable property of Gaussian distributions is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian. Thus we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall that 'function' in this context means that the values are not guaranteed to sum to one).Before we do the math, let's test this visually. ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.8, var=.1) g2 = gaussian(x, mean=1.3, var=.2) plt.plot(x, g1, x, g2) g = g1 * g2 # element-wise multiplication g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output _____no_output_____ ###Markdown Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution.Gaussians are nonlinear functions. Typically, if you multiply nonlinear functions you end up with a different type of function. For example, multiplying two sine waves produces a shape very different from `sin(x)`. ###Code x = np.arange(0, 4*np.pi, 0.01) plt.plot(np.sin(1.2*x)) plt.plot(np.sin(1.2*x) * np.sin(2*x)); ###Output _____no_output_____ ###Markdown But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two independent Gaussian random variables is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$At the end of the chapter I derive these equations. However, understanding the derivation is not very important. Putting it all Together Now we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians.
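Before that, it is worth sanity-checking the product and sum equations above with a few lines of code. This is a minimal sketch; the helper names `gaussian_multiply` and `gaussian_add` are throwaway names for this example, not FilterPy functions:

```python
def gaussian_multiply(mu1, var1, mu2, var2):
    """Closed-form product of two Gaussians, returned as (mean, variance)."""
    mean = (var1*mu2 + var2*mu1) / (var1 + var2)
    variance = (var1*var2) / (var1 + var2)
    return mean, variance

def gaussian_add(mu1, var1, mu2, var2):
    """Sum of two independent Gaussian random variables."""
    return mu1 + mu2, var1 + var2

# the same two Gaussians we just plotted
print(gaussian_multiply(0.8, 0.1, 1.3, 0.2))  # mean lands between 0.8 and 1.3, closer to 0.8
print(gaussian_add(0.8, 0.1, 1.3, 0.2))       # mean 2.1, variance 0.3 (up to floating point)
```

Note that the product's variance is smaller than either input variance; combining two estimates leaves us more certain, not less, which is the behavior we will rely on when we build a filter.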
Here I will explain why we would want to use Gaussians.In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so: ###Code def normalize(p): return p / sum(p) def update(likelihood, prior): return normalize(likelihood * prior) prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2])) likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16])) posterior = update(likelihood, prior) book_plots.bar_plot(posterior) ###Output _____no_output_____ ###Markdown In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory. But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart. ###Code xs = np.arange(0, 10, .01) def mean_var(p): x = np.arange(len(p)) mean = np.sum(p * x,dtype=float) var = np.sum((x - mean)**2 * p) return mean, var mean, var = mean_var(posterior) book_plots.bar_plot(posterior) plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r'); print('mean: %.2f' % mean, 'var: %.2f' % var) ###Output mean: 5.88 var: 1.24 ###Markdown This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.Next, recall that our filter implements the update function with```pythondef update(likelihood, prior): return normalize(likelihood * prior)```If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$which is three multiplications and two divisions. Bayes TheoremIn the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions. In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information. We implemented the `update()` function with this probability calculation:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$where $\| \cdot\|$ expresses normalizing the term.We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. 
We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes theorem is$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. So, let's plug that into the equation and solve it.$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:```pythondef update(likelihood, prior): posterior = prior * likelihood p(z|x) * p(x) return normalize(posterior)```The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece ](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. 
We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step i, what is our probable state given a measurement. That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy. Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation```pythonfor i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N result[i] += prob_dist[index] * kernel[k]``` Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. 
So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [-0.08 2.024 1.4 3.024 5.799 0.989 2.083 0.978 7.542 -2.22 4.984 0.626 4.387 3.676 -0.12 ] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading. This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. 
So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution to see how poorly this represents real test scores distributions. ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like what I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. 
""" x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean (7 to 13). It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests. This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory states that they should use 3 standard deviations to distinguish noise from valid measurements in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.The code for rand_student_t is included in `filterpy.stats`. You may use it with```pythonfrom filterpy.stats import rand_student_t```While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from an exponential distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from an exponential distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). the `scipy.stats` module contains the function `describe` which computes these statistics, among others. 
###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown Let's examine two normal populations, one small, one large: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.8106190910322406, 1.7202801709655346), mean=0.03998695860303425, variance=1.2099810612140205, skewness=0.054824114606583485, kurtosis=-0.8322079773586668) DescribeResult(nobs=300000, minmax=(-5.136201903633123, 4.498934900223554), mean=0.0016752908705450242, variance=1.0019122279656631, skewness=0.002460339180965745, kurtosis=-0.0022807108788165387) ###Markdown [目录](./table_of_contents.ipynb) 概率,高斯和贝叶斯定理 ###Code %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown 概述上一章讨论了离散贝叶斯滤波器的一些缺点。对于许多跟踪和过滤问题,我们希望有一个*单峰*和*连续*的过滤器。也就是说,我们希望使用浮点数学(连续的)建模我们的系统,并且只表示一个信念(单峰的)。例如,我们想说一架飞机位于(12.34,-95.54,2389.5)的纬度、经度和高度。我们不想让过滤器告诉我们“它可能在(1.65,-78.01,2100.45)或它可能在(34.36,-98.23,2543.79)。”这与我们对世界如何运作的物理直觉不符,正如我们讨论过的,计算多模态情况的代价可能会非常高。而且,多重位置估计使得导航变得不可能。我们希望用一种单峰的、连续的方式来表示模拟现实世界如何工作的概率,并且计算效率高。高斯分布提供了所有这些特性。 均值、方差和标准差你们大多数人都接触过统计学,但请允许我介绍一下这部分内容。我要求你阅读材料,即使你确定你很了解它。我这样要求有两个原因。首先,我想确定我们使用的术语是相同的。其次,我努力形成对统计的直观理解,这将在后面的章节中很好地帮助你们。学习统计学课程很容易只记住公式和计算,可能对所学内容的含义感到模糊。 随机变量 每次掷骰子,结果将介于1到6之间。如果我们掷一个骰子一百万次,我们期望得到1/6的概率。因此,结果1的*概率*是1/6。同样地,如果我问你下一次掷出1的概率,你会回答1/6。这种值和相关概率的组合称为[*随机变量*](https://en.wikipedia.org/wiki/Random_variable)。在这里,“随机”并不意味着过程是不确定性的,只是我们缺乏关于结果的信息。掷骰子的结果是确定的,但我们缺乏足够的信息来计算结果。我们不知道将会发生什么,除了概率以外。当我们定义术语时,值的范围称为[*sample space*](https://en.wikipedia.org/wiki/Sample_space)。对于骰子,样本空间为{1,2,3,4,5,6}。对于一枚硬币,样本空间是{H, T}。空间是一个数学术语,意思是一个具有结构的集合。骰子的样本空间是1到6范围内的自然数的子集。另一个随机变量的例子是大学里学生的身高。这里的样本空间是在生物学定义的两个极限之间的实数值的范围。掷硬币和掷骰子等随机变量是*离散随机变量*。这意味着它们的样本空间由有限数量的值或可数的无限数量的值(如自然数)表示。人类的身高被称为“连续随机变量”,因为它们可以在两个极限之间取任何实际值。不要将随机变量的“测量值”与实际值混淆。如果我们只能测量一个人的身高到0.1米,那么我们只能记录0.1、0.2、0.3……2.7之间的数值,从而产生27个离散的选择。尽管如此,一个人的身高可以在这些范围内的任意实值之间变化,所以身高是一个连续的随机变量。在统计学中,随机变量用大写字母,通常来自字母表的后半部分。所以,我们可以说$X$是表示掷骰子的随机变量,或者$Y$是新生诗歌课上学生的身高。后面的章节将使用线性代数来解决这些问题,因此我们将遵循向量用小写表示,矩阵用大写表示的惯例。不幸的是,这些约定相互冲突,您必须根据上下文确定作者使用的是哪一个。我总是用粗体符号表示向量和矩阵,这有助于区分两者。 概率分布[*概率分布*](https://en.wikipedia.org/wiki/Probability_distribution)给出了随机变量在样本空间中取任意值的概率。例如,对于一个骰子,我们可能会说:|值|概率||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|我们用小写p表示这个分布:$p(x)$。使用普通函数表示法,我们可以这样写: $$P(X{=}4) = p(4) = \frac{1}{6}$$ 这表明,骰子落在4点的概率为$\frac{1}{6}$。$P(X{=}x_k)$表示“$X$为$x_k$的概率”。注意微妙的符号差异。大写$P$表示单个事件的概率,小写$P$表示概率分布函数。如果你不善于观察,就会误入歧途。有些文本使用$Pr$而不是$P$来改善这一点。Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. 
We write this as另一个例子是均匀硬币。它有样本空间{H, T}。硬币是均匀的,正面(H)的概率是50%反面(T)的概率是50%我们把它写成$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$样本空间不是唯一的。一个骰子样本空间是{1,2,3,4,5,6}。另一个有效的样本空间是{偶数,奇数}。另一种可能是{所有角上都是点,而不是所有角上都是点}。一个样本空间是有效的,只要它涵盖了所有的可能性,并且任何单个事件只能由一个元素来描述。{even, 1,3,4,5}不是骰子的有效样本空间,因为值为4的骰子会同时被'even'和'4'匹配。一个*离散随机值*的所有值的概率称为*离散概率分布*,一个*连续随机值*的所有值的概率称为*连续概率分布*。要成为一个概率分布,每个值$x_i$的概率必须是$x_i \ge 0$,因为没有概率可以小于零。其次,所有值的概率之和必须等于1。这对于抛硬币来说应该是很直观的:如果得到正面的概率是70%,那么得到反面的概率一定是30%。我们将这个要求公式化为$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$连续分布。在前一章中,我们使用概率分布来估计狗在走廊中的位置。例如: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown 每个位置的概率都在0到1之间,所有位置的和都是1,这是一个概率分布。每个概率都是离散的,所以我们可以更准确地称之为离散概率分布。在实践中,我们省略了离散和连续这两个术语,除非我们有特殊的理由来区分它们。 随机变量的平均数、中位数和模态给定一组数据,我们通常想知道该数据集的代表性值或平均值。对此有很多方法,这个概念被称为[*集中趋势的方法*](https://en.wikipedia.org/wiki/Central_tendency)。例如,我们可能想知道班上学生的平均身高。我们都知道如何求一组数据的平均值,但是让我详细说明一下,以便引入更正式的符号和术语。*average*的另一个词是*mean*。我们计算平均值的方法是把这些值加起来,然后除以这些值的个数。如果学生的高度以米为单位$$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$我们计算均值为$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$传统上用符号$\mu$ (mu)表示平均值。我们可以用这个方程把这个计算形式化$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy提供' NumPy .mean() 来计算平均值。 ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown 作为一种便利,NumPy数组提供了该方法 `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown 一组数字的*mode*是最常出现的数字。如果只有一个数字最常出现,我们说它是一个*单峰*集合,如果两个或更多的数字以相同的频率出现最多,则该集合是*多峰*。例如集合{1,2,2,2,3,4,4,4}有模态2和模态4,这是多模态,而集合{5,7,7,13}有模态7,所以它是单模态。在这本书中,我们不会以这种方式计算模态,但我们在更一般的意义上使用单模态和多模态的概念。例如,在**离散贝叶斯**一章中,我们讨论了我们认为狗的位置是一个*多模态分布*,因为我们为不同的位置分配了不同的概率。最后,一组数字的*中值*是集合的中点,因此一半的值在中值以下,一半在中值以上。这里,上面和下面是与正在排序的集合相关的。如果集合中包含偶数个值,那么两个中间的数将被平均在一起。Numpy提供了`numpy.median()`来计算中值。可以看到,{1.8,2.0,1.7,1.9,1.6}的中值是1.8,因为1.8是这个集合中经过排序后的第三个元素。在这种情况下,中值等于均值,但这通常不是真的。 ###Code np.median(x) ###Output _____no_output_____ ###Markdown 随机变量的期望值一个随机变量的[*期望值*](https://en.wikipedia.org/wiki/Expected_value)是它的平均值,如果我们取它的无限个样本,然后把这些样本一起平均。假设有$x=[1,3,5]$,每个值都是等概率的。 我们*期望* $x$是什么 ,平均值?它是1 3和5的平均值,当然是3。这应该是有道理的;我们期望1 3 5的数目相等,所以$(1+3+5)/3=3$显然是这个无穷样本序列的平均值。换句话说,这里的期望值是样本空间的*均值*。现在假设每个值都有不同的发生概率。假设1有80%的概率发生,3有15%的概率发生,5只有5%的概率发生。在本例中,我们通过$x$的每个值乘以它发生的概率百分比来计算期望值,并对结果求和。对于这种情况,我们可以计算$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$ 这里我介绍了$ X $的期望值$\mathbb E[X]$的符号。一些文本使用$E(x)$。对于$x$, 1.5的值很直观,因为$x$比3或5更有可能是1,3也比5更有可能。我们可以将它形式化,让$x_i$是$X$的$i^{th}$值,$p_i$是它发生的概率。这给了我们$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$一个简单的代数运算表明,如果所有的概率都相等,期望值就等于平均值:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$如果$x$是连续的,我们把和代入一个积分,就像这样$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$其中$f(x)$是$x$的概率分布函数。我们还不会用到这个方程,但我们会在下一章用到它。我们可以编写一些Python来模拟这个过程。这里我取了1,000,000个样本值并计算了刚刚解析计算的分布的期望值。 ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown 你可以看到,计算的值与解析推导的值很接近。它不是精确的,因为得到精确的值需要无限的样本容量。 练习掷骰子的期望值是多少? 
答案每边的概率都是等的,所以每边的概率都是1/6。因此$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ 练习给定均匀连续分布$$f(x) = \frac{1}{b - a}$$计算$a=0$和$b=20$的期望值。 答案$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ 随机变量的方差上面的计算告诉了我们学生的平均身高,但它并没有告诉我们想知道的一切。例如,假设我们有三个班级的学生,我们用这些高度标记$X$, $Y$和$Z$: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown 使用NumPy,我们可以看到每个类的平均高度是相同的。 ###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown 每个班的平均身高是1.8米,但是请注意,第二类学生的身高变化要比第一类学生大得多,而第三类学生的身高则完全没有变化。 平均值告诉我们一些数据,但不是全部。我们希望能够指定学生的身高之间有多少*变化*。你可以想象有很多原因。也许一个学区需要订购5000张课桌,他们想确保购买的尺寸能适应学生的身高范围。统计学已经将测量变化的概念形式化为[*标准差*](https://en.wikipedia.org/wiki/Standard_deviation)和[*方差*](https://en.wikipedia.org/wiki/Variance)。计算方差的方程为$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$暂时忽略这个平方,你可以看到方差是样本空间$X$与均值$\mu:$ ($X-\mu)$的期望值。稍后我会解释平方项的用途。期望值的公式为$\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$,因此我们可以将其代入上面的方程,得到$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ 我们来计算这三个类的方差,看看得到什么值,熟悉这个概念。$X$的均值是1.8 ($\mu_x = 1.8$),所以我们计算$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy提供了函数`var()`来计算方差: ###Code print(f"{np.var(X):.2f} meters squared") ###Output 0.02 meters squared ###Markdown 这可能有点难以解释。高度单位是米,而方差是米的平方。因此我们有一个更常用的度量,*标准差*,定义为方差的平方根:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$典型的是用$\sigma$表示*标准差*,用$\sigma^2$表示*方差*。在本书的大部分内容中,我将使用$\sigma^2$来代替$\mathit{VAR}(X)$作为方差;它们象征着同样的东西。第一堂课,我们计算标准差$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$我们可以用NumPy方法`numpy.std()`来验证这个计算,它计算的是标准偏差。'std'是标准偏差的常见缩写。 ###Code print(f"std {np.std(X):.4f}") print(f"var {np.std(X)**2:.4f}") ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.当然,$0.1414^2 = 0.02$,这与我们之前计算的方差一致。标准差表示什么?它告诉我们高度之间的变化。“多少”不是一个数学术语。我们将能够更精确地定义它,一旦我们引入高斯的概念,在下一节。现在我要说的是68%的值都在一个标准差范围内。换句话说,我们可以得出结论,对于一个随机班级,68%的学生身高在1.66(1.8-0.1414)米到1.94(1.8+0.1414)米之间。我们可以在图中看到: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown 对于5个学生,显然不能在一个标准差范围内精确得到68%我们确实看到5个学生中有3个在$ pm1 σ $之内,也就是60%,这是你在只有5个样本的情况下所能得到的68%。让我们看看一个有100名学生的班级的结果。> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print(f'mean = {mean:.3f}') print(f'std = {std:.3f}') ###Output _____no_output_____ ###Markdown 通过肉眼观察,大约68%的高度位于1.8平均值的$\pm1\sigma$内,但我们可以用代码验证这一点。 ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. 
###Output _____no_output_____ ###Markdown 我们很快会更深入地讨论这个问题。现在我们来计算标准差$$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$$Y$的均值是$\mu=1.8 m,所以$$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$我们将用NumPy来验证 ###Code print(f'std of Y is {np.std(Y):.2f} m') ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.这符合我们的预期。$Y$的高度变化更大,标准差也更大。最后,让我们计算$Z$的标准差。这些值没有变化,所以我们期望标准差为零。$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown 在我们继续之前,我需要指出的是,我忽略了男性的平均身高高于女性。一般来说,只有男性或女性的班级的身高差异要小于有两种性别的班级。其他因素也是如此。营养良好的儿童比营养不良的儿童高。斯堪的纳维亚人比意大利人高。统计学家在设计实验时需要考虑这些因素。我建议我们可能是在为一个学区订购课桌而进行这个分析。对于每个年龄组,可能有两种不同的平均数——一种集中在女性的平均身高周围,另一种集中在男性的平均身高周围。整个班级的平均值将介于两者之间。如果我们按照所有学生的平均比例购买课桌,那么我们最终得到的课桌可能既不适合男生也不适合女生! 我们在本书中将不考虑这些问题。如果您需要学习处理这些问题的技术,请参阅任何标准概率文本。 为什么是差异的平方为什么要取方差的差的平方?我可以做很多数学运算,但让我们用简单的方法来看一下。这是一张$X$值与$X=[3,-3,3,-3]$均值的图表 ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown 如果我们不取差的平方符号会把所有东西都消掉$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$ 这显然是不正确的,因为数据中的方差大于0。也许我们可以用绝对值?通过检验,我们可以看到结果是$12/4=3$,这当然是正确的——每个值与平均值相差3。但如果我们有$Y=[6, -2, - 3,1]$呢?在这种情况下,我们得到$12/4=3$。显然,$Y$比$X$更分散,但计算结果是相同的。如果我们使用使用平方的公式,我们得到$Y$的方差为3.5,这反映了其更大的变化。这并不是正确的证明。事实上,这项技术的发明者卡尔·弗里德里希·高斯(Carl Friedrich Gauss)认识到,它在某种程度上是武断的。如果有异常值,那么对差异进行平方会给该术语带来不成比例的权重。例如,让我们看看如果我们有: ###Code X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100] print(f'Variance of X with outlier = {np.var(X):6.2f}') print(f'Variance of X without outlier = {np.var(X[:-1]):6.2f}') ###Output Variance of X with outlier = 621.45 Variance of X without outlier = 2.03 ###Markdown 这是“正确”吗?你告诉我。如果没有100的异常值,我们得到$\sigma^2=2.03$,这准确地反映了没有异常值时$X$的变化情况。一个离群值淹没了方差计算。我们是想要淹没计算,从而知道有一个离群值,还是稳健地合并离群值,并仍然提供一个接近于没有离群值的值的估计?再说一遍,你来告诉我。显然,这取决于你的问题。我不会继续沿着这条路走下去;如果你感兴趣,你可能会想看看James Berger在这个问题上所做的工作,在一个叫做“贝叶斯稳健性”的领域,或者是Peter J. 
Huber关于“稳健性统计”的优秀出版物。在这本书中,我们总是使用高斯定义的方差和标准差。 从这里可以看出,这些“概要”统计数据总是在讲述我们的数据的一个不完整的故事。在这个例子中,高斯定义的方差并没有告诉我们有一个大的离群值。然而,它是一个功能强大的工具,因为我们可以用几个数字精确地描述一个大数据集。如果我们有10亿个数据点,我们就不会想要用眼睛来检查图表或查看数字列表;摘要统计给我们提供了一种有用的方法来描述数据的形状。 高斯函数 我们现在准备学习[高斯函数](https://en.wikipedia.org/wiki/Gaussian_function)。让我们提醒自己这一章的动机。> 我们希望用一种单峰的、连续的方式来表示模拟现实世界如何工作的概率,并且计算效率高。让我们看一下高斯分布的图来了解一下我们在讨论什么。 ###Code from filterpy.stats import plot_gaussian_pdf plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown 此曲线为[*概率密度函数*](https://en.wikipedia.org/wiki/Probability_density_function)或*pdf*。它显示了随机变量取一个值的相对可能性。从图表中我们可以看出,学生的身高更有可能接近1.8米,而不是1.7米,身高更有可能是1.9米,而不是1.4米。换句话说,很多学生的身高会接近1.8米,而很少有学生的身高是1.4米或2.2米。最后,注意曲线的中心是1.8米的平均值。> 我在Notebook * computing_and_plotting_pdf *中解释了如何绘制高斯函数,以及更多内容Supporting_Notebooks文件夹。你可以在线阅读[这里](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb)[1]。这可能被你识别为“钟形曲线”。这条曲线是普遍存在的,因为在现实世界条件下,许多观测结果都是这样分布的。我不会用“钟形曲线”一词来指代高斯分布,因为许多概率分布都有类似的钟形曲线形状。非数学来源可能不那么精确,所以当你看到这个术语没有定义时,在你得出结论时要明智。这条曲线并不是高度所特有的——大量的自然现象都表现出这种分布,包括我们在过滤问题中使用的传感器。正如我们将看到的,它也具有我们正在寻找的所有属性——它将单峰信念或值表示为概率,它是连续的,而且它在计算上是高效的。我们很快就会发现,它还有其他我们可能没有意识到我们所渴望的品质。为了进一步激励你,回想一下*离散贝叶斯*一章中概率分布的形状: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown 它们不是完美的高斯曲线,但它们是相似的。我们将使用高斯函数来代替那一章中使用的离散概率! 术语在我们继续之前,先讲一点术语——这个图表描述了一个随机变量的概率密度,该变量的值在($- infty. \infty)$之间。这是什么意思?想象一下,我们在高速公路上对汽车的速度进行无限次、无限精确的测量。然后,我们可以通过显示以任何给定速度通过的汽车的相对数量来绘制结果。如果平均速度是120公里每小时,它可能是这样的: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown y轴表示概率密度-在相应的x轴上以相应速度行驶的汽车的相对数量。我将在下一节进一步解释这一点。高斯模型并不完善。虽然这些图表没有显示出来,但分布的“尾巴”一直延伸到无穷远。“尾”是曲线的远端,即值最低的地方。当然,人的高度或汽车的速度不能小于零,更不用说$-\infty$ or $\infty$了。“地图不是领域”是一个常见的表达,它适用于贝叶斯过滤和统计。上述高斯分布模型模拟了实测车速的分布,但作为一个模型,它必然是不完善的。在这些过滤器中,模型和现实之间的差异会一次又一次地出现。高斯函数被用于数学的许多分支,不是因为它们完美地模拟了现实,而是因为它们比其他任何相对准确的选择都更容易使用。然而,即使在这本书中,高斯函数也无法模拟现实,迫使我们使用计算上昂贵的替代方法。 你会听到这些分布叫做“高斯分布”或“正态分布”。在这里,“高斯”和“正态”都是同一个意思,并且可以互换使用。我将在整本书中使用这两个词,因为不同的来源会使用这两个词,我希望你们习惯看到这两个词。最后,就像在这段话中,它是典型的缩短名称和谈论一个*高斯*或*正态* -这都是*高斯分布的典型的捷径名称。 多高斯分布让我们来看看高斯函数是如何工作的。高斯分布是一个*连续概率分布*,它完全由两个参数描述,均值($\mu$)和方差($\sigma^2$)。定义为:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$ $\exp[x]$表示$e^x$ 如果你之前没见过这个等式,不要被它吓倒;你将不需要记忆或操纵它。这个函数的计算存储在`stats.py`和函数`gaussian(x, mean, var, normed=True)`中。 去掉常数,你可以看到它是一个简单的指数: $$f(x)\propto e^{-x^2}$$ ###Code x = np.arange(-3, 3, .01) plt.plot(x, np.exp(-x**2)); ###Output _____no_output_____ ###Markdown 让我们来回顾一下如何查看函数的代码。在单元格中,键入函数名,后跟两个问号,然后按CTRL+ENTER。这将打开一个显示源代码的弹出窗口。取消注释下一个单元格,现在尝试它。 ###Code from filterpy.stats import gaussian #gaussian?? 
###Output _____no_output_____ ###Markdown 让我们画一个高斯分布,均值为22 $(\mu=22)$,方差为4 $(\sigma^2=4)$。 ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown 这条曲线是什么意思?假设我们有一个读数为22°C的温度计。没有一个温度计是完全精确的,因此我们希望每次读数都与实际值稍有偏差。然而,一个叫做[*中心极限定理*](https://en.wikipedia.org/wiki/Central_limit_theorem)的定理指出,如果我们进行许多测量,这些测量值将是正态分布的。当我们看这张图表时,我们可以看到它与温度计读数特定值的概率成正比,给定实际温度为22°C。回想一下高斯分布是*连续的*。想象一条无限长的直线,随机选取的点在2点的概率是多少。显然是0%,因为有无数的选择可供选择。正态分布也是如此;在上图中,恰好是 2°C的概率是0%,因为读数可以取无数个值。这条曲线是什么?我们称之为概率密度函数。曲线下任意区域的面积给出了这些值的概率。例如,如果你计算曲线下的面积在20和22之间,得到的面积就是温度读数在这两个温度之间的概率。这是另一种理解方式。岩石或海绵的密度是多少?它是对在给定空间中压缩了多少质量的度量。岩石密度大,海绵密度小。所以,如果你想知道一块石头的重量,但没有秤,你可以用它的体积乘以它的密度。这就得到了它的质量。实际上,密度在大多数物体中都是变化的,所以你可以通过岩石的体积对局部密度进行积分。$$M = \iiint_R p(x,y,z)\, dV$$我们对概率密度也是这样做的。如果你想知道温度在20°C到21°C之间,你可以对上面的曲线从20到21积分。众所周知,曲线的积分是曲线下的面积。因为这是概率密度的曲线,密度的积分就是概率。温度正好是22°C的概率是多少?直观地说,0。这些都是实数,22°C相对于22.00000000000017°C的几率是无限小的。数学上,从22到22积分会得到什么?零。回想一下这块岩石,岩石上一个点的重量是多少?一个无限小的点一定是没有权值的。问一个点的权重是没有意义的,问一个连续分布只有一个值的概率也是没有意义的。两者的答案显然都是零。在实践中,我们的传感器没有无限的精度,所以读数22°C意味着一个范围,例如22 $\pm$ 0.1°C,我们可以通过从21.9到22.1积分来计算该范围的概率。 我们可以用贝叶斯术语或频率术语来思考。作为贝叶斯,如果温度计准确读数为22°C,那么我们的信念就被曲线所描述——我们认为实际(系统)温度接近22°C是非常高的,而我们认为实际温度接近18°C是非常低的。作为一个频率主义者,我们会说,如果我们对一个系统在22°C的温度进行10亿次测量,那么测量的直方图就会像这条曲线。你怎么计算概率,或者曲线下的面积?对高斯函数方程积分$$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$这被称为*累积概率分布*,通常缩写为*cdf*。我写的`filterpy.stats.norm_cdf` 来计算积分。例如,我们可以计算 ###Code from filterpy.stats import norm_cdf print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Cumulative probability of range 21.5 to 22.5 is 19.74% Cumulative probability of range 23.5 to 24.5 is 12.10% ###Markdown 平均值($\mu$)听起来就是所有可能概率的平均值。由于曲线的对称形状,它也是曲线上最高的部分。温度计上的读数是22°C,所以这是我们用来计算平均值的。 随机变量$X$的正态分布表示为$X \sim\ \mathcal{N}(\mu,\sigma^2)$,其中$ sim$表示根据*分布。这意味着我可以把温度计的温度读数表示为$$\text{temp} \sim \mathcal{N}(22,4)$$ 这是一个极其重要的结果。高斯函数允许我只用两个数就能捕获无限个可能的值!用$\mu=22$和$\sigma^2=4$,我可以计算出测量值在任何范围内的分布。一些来源使用$\mathcal N (\mu, \sigma)$代替$\mathcal N (\mu, \sigma^2)$。两者都可以,都是惯例。如果看到$\mathcal{N}(22,4)$这样的术语,您需要记住使用的是哪种形式。在这本书中,我总是使用$ \mathcal N (\mu, \sigma^2)$,在这个例子中,$\sigma=2$, $\sigma^2=4$。 方差和置信度因为这是一个概率密度分布它要求曲线下的面积总是等于1。这应该是直观清楚的-曲线下的面积代表所有可能的结果,*某件事*发生了,*某件事发生的概率是1,所以密度的总和必须是1。我们可以用一些代码来证明这一点。(如果你有数学倾向,将高斯方程从$-\inty$到$\inty$进行积分) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown 这引出了一个重要的见解。如果方差很小,曲线就会变窄。这是因为方差衡量的是样本与均值的差异。为了使面积等于1,曲线也必须是高的。另一方面,如果方差很大,曲线会很宽,因此它也会很短,以使面积等于1。让我们用图形来看看。我们将使用前面提到的filterpy.stats。高斯',它可以接受单个值或数组值。 ###Code from filterpy.stats import gaussian print(gaussian(x=3.0, mean=2.0, var=1)) print(gaussian(x=[3.0, 2.0], mean=2.0, var=1)) ###Output 0.24197072451914337 [0.378 0.622] ###Markdown 默认情况下,`高斯`将输出归一化,将输出转换回概率分布。使用参数`normed`来控制它。 ###Code print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False)) ###Output [0.242 0.399] ###Markdown 如果高斯函数没有归一化,它就被称为*高斯函数*而不是*高斯分布*。 ###Code xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$') plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':') plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--') plt.legend(); ###Output _____no_output_____ ###Markdown 这告诉我们什么?$\sigma^2=0.2^2$的高斯分布非常窄。它是说,我们相信$x=23$,并且我们非常确定:在$\pm 0.2$ 
std内。相反,高斯函数$\sigma^2=1^2$也相信$x=23$,但我们对此不太确定。我们认为$x=23$更低,所以我们对$x$可能值的信念是分散的——例如,我们认为$x=20$或$x=26$很有可能。$\sigma^2=0.2^2$几乎完全消除了$22$或$24$的可能值,而$\sigma^2=1^2$认为它们几乎与$23$一样有可能如果我们回想一下温度计,我们可以把这三条曲线看作是三个不同温度计的读数。$\sigma^2=0.2^2$的曲线代表一个非常精确的温度计,而$\sigma^2=1^2$的曲线代表一个相当不精确的温度计。请注意,高斯分布给我们提供了一个非常强大的特性——我们可以只用两个数字——平均值和方差——来完全表示温度计的读数和误差。高斯函数的等价形式是$\mathcal{N}(\mu,1/\tau)$,其中$\mu$是*平均值* ,$\tau$是*精度* 。$1/\tau = \sigma^2$;它是方差的倒数。虽然我们在本书中没有使用这个公式,但它强调了方差是衡量我们的数据有多精确的一个指标。一个小的方差产生大的精度-我们的测量是非常精确的。相反,大的差异产生低的精度-我们的信念散布在一个大的区域。你们应该习惯于用这些等价形式来考虑高斯函数。在贝叶斯术语中,高斯函数反映了我们对测量的“信念”,它们表达了测量的“精度”,并表达了测量中有多少“方差”。这些都是表述同一个事实的不同方式。我讲得有点超前了,但在接下来的章节中,我们将使用高斯函数来表达我们对一些事情的信念,比如我们跟踪的物体的估计位置,或者我们使用的传感器的精度。 68 - 95 - 99.7规则现在值得在标准差上花几句话。标准偏差是衡量数据偏离均值的程度。对于高斯分布,68%的数据落在均值的一个标准差内($\pm1\sigma$), 95%落在两个标准差内($\pm2\sigma$), 99.7%落在三个标准差内($\pm3\sigma$)。这通常被称为[68-95-99.7规则](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule)。如果你被告知一个班级的平均考试成绩是71分,标准差是9.4,你可以得出结论,95%的学生的成绩在52.2到89.8之间,如果分布是正态分布(这是用$71 \pm(2 * 9.4)$计算的)。最后,这些不是任意的数字。如果我们位置的高斯分布是$\mu=22$米,那么标准差也有单位米。因此$\sigma=0.2$意味着68%的测量范围从21.8米到22.2米。方差是标准差的平方,因此$\sigma^2 = 0.04 $米$^2$。正如你在上一节看到的,写$\sigma^2 = 0.2^2$会让这个更有意义,因为0.2和数据是相同的单位。下图描述了标准差和正态分布之间的关系。 ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown 交互式高斯函数对于那些在Jupyter Notebook中阅读这篇文章的人,这里有一个交互版本的高斯图。使用滑块修改$\mu$和$\sigma^2$。调整$\mu$将使图形向左或向右移动,因为你是在调整均值,而调整$\sigma^2$将使钟形曲线变厚或变薄。 ###Code import math from ipywidgets import interact, FloatSlider def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.01) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim(0, 0.04) interact(plt_g, mu=FloatSlider(value=5, min=3, max=7), variance=FloatSlider(value = .03, min=.01, max=1.)); ###Output _____no_output_____ ###Markdown 最后,如果你在网上阅读这篇文章,这是一个高斯函数的动画。首先,均值向右平移。然后均值集中在$\mu=5$,方差被修改。 高斯的计算性质 离散贝叶斯滤波器的工作原理是乘加任意概率分布。卡尔曼滤波器使用高斯分布而不是任意分布,但算法的其余部分保持不变。这意味着我们需要将高斯函数相乘和相加。高斯函数的一个显著性质是两个独立的正态变量(https://en.wikipedia.org/wiki/Sum_of_normally_distributed_random_variables)的和也是正态分布的!乘积不是高斯函数,而是正比于高斯函数。在这里,我们可以说,两个高斯分布相乘的结果是一个高斯函数(回忆函数在这里的意思是,值和为1的属性是不保证的)。在我们计算之前,让我们直观地测试一下。 ###Code x = np.arange(-1, 3, 0.01) g1 = gaussian(x, mean=0.8, var=.1) g2 = gaussian(x, mean=1.3, var=.2) plt.plot(x, g1, x, g2) g = g1 * g2 # element-wise multiplication g = g / sum(g) # normalize plt.plot(x, g, ls='-.'); ###Output _____no_output_____ ###Markdown 在这里我创建了两个高斯函数,g1=$\mathcal N(0.8, 0.1)$和g2=$\mathcal N(1.3, 0.2)$并绘制它们。然后我将它们相乘,并将结果归一化。如你所见,结果看起来像一个高斯分布。高斯函数是非线性函数。一般来说,如果你把一个非线性方程乘起来你会得到一个不同类型的函数。例如,`sin(x)`的形状非常不同。 ###Code x = np.arange(0, 4*np.pi, 0.01) plt.plot(np.sin(1.2*x)) plt.plot(np.sin(1.2*x) * np.sin(2*x)); ###Output _____no_output_____ ###Markdown 但是两个高斯分布相乘的结果是一个高斯函数。这是卡尔曼滤波器在计算上可行的一个关键原因。换句话说,卡尔曼滤波器使用高斯滤波器,因为它们在计算上很好。两个独立高斯函数的乘积由:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$两个高斯函数的和由$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$在本章的最后,我推导了这些方程。然而,理解推导过程并不是很重要。 将所有的东西放在一起现在我们准备讨论高斯函数如何用于滤波。在下一章中,我们将使用高斯函数实现一个滤波器。在这里我将解释为什么我们要使用高斯函数。在前一章中,我们用数组表示概率分布。我们通过计算该分布与代表每个点测量可能性的另一个分布的元素乘积来执行更新计算,如下所示: ###Code def normalize(p): return p / sum(p) def update(likelihood, prior): return normalize(likelihood * prior) prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2])) likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16])) posterior = update(likelihood, prior) 
book_plots.bar_plot(posterior) ###Output _____no_output_____ ###Markdown 换句话说,我们要做10次乘法才能得到这个结果。对于一个具有多维大数组的真正过滤器,我们需要数十亿次乘法运算和大量内存。 但是这个分布看起来像高斯分布。如果我们用高斯函数代替数组呢?我将计算后验均值和方差并将其与柱状图对比 ###Code xs = np.arange(0, 10, .01) def mean_var(p): x = np.arange(len(p)) mean = np.sum(p * x,dtype=float) var = np.sum((x - mean)**2 * p) return mean, var mean, var = mean_var(posterior) book_plots.bar_plot(posterior) plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r'); print('mean: %.2f' % mean, 'var: %.2f' % var) ###Output mean: 5.88 var: 1.24 ###Markdown 这是令人印象深刻。我们可以用两个数来描述整个数的分布。也许这个例子并不具有说服力,因为分布中只有10个数字。但一个真正的问题可能有数百万个数字,但仍然只需要两个数字来描述它。接下来,回想一下我们的过滤器实现的更新函数```pythondef update(likelihood, prior): return normalize(likelihood * prior)```如果数组包含一百万个元素,那就是一百万次乘法运算。但是,如果我们用高斯函数替换数组我们就会用$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$也就是三个乘法和两个除法。 贝叶斯定理 在上一章中,我们开发了一个算法,通过对我们在每一时刻所拥有的信息进行推理,我们将这些信息表示为离散概率分布。在这个过程中,我们发现了[*贝叶斯定理*](https://en.wikipedia.org/wiki/Bayes%27_theorem)。贝叶斯定理告诉我们如何计算给定先验信息的事件的概率。我们用这个概率计算实现了`update()`函数:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ 这就是贝叶斯定理。等一下,我将发展数学,但在许多方面,这模糊了这个简单的概念,在这个方程中表达。我们把它理解为:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$其中$\| \cdot\|$表示规范化的术语。我们得出这个结论的原因很简单:一只狗在走廊里走。然而,正如我们将看到的,同样的等式适用于一系列过滤问题。我们将在以后的每一章中使用这个方程。 回顾一下,“先验”是在我们包含测量的概率之前发生的概率(“可能性”),而“后验”是我们在包含测量的信息之后计算的概率。贝叶斯定理$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$被称为[*条件概率*](https://en.wikipedia.org/wiki/Conditional_probability)。也就是说,它表示如果$B发生后$A$发生的概率。例如,如果昨天也下雨,那么今天更有可能下雨,因为降雨系统通常会持续一天以上。我们把昨天下雨的情况下今天下雨的概率写成$P$(今天下雨$\mid$昨天下雨)。我忽略了重要的一点。在上面的代码中,我们不是在处理单个概率,而是一个概率数组——一个*概率分布*。我刚才给出的贝叶斯方程使用了概率,而不是概率分布。然而,它同样适用于概率分布。我们用小写$p$表示概率分布$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$ 在上面的等式中$B$是*evidence*,$p(A)$是*先验*,$p(B \mid A)$是*可能性*,$p(A \mid B)$是*后验*。通过用相应的词替换数学术语,你可以看到贝叶斯定理与我们的更新方程相匹配。我们把这个方程写成问题的形式。我们将使用$x_i$表示*i*处的位置,$z$表示测量值。因此,我们想知道$P(x_i \mid z)$,也就是说,在给定测量$z$的情况下,狗到达$x_i$的概率。把它代入方程,解出来。$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$那看起来很难看,但实际上很简单。我们来看看右边的每一项是什么意思。首先是$p(z \mid x_i)$。这是在每个单元$x_i$处测量的概率。$p(x_i)$是*先验* -我们在纳入测量之前的信念。我们把它们相乘。这只是' update() '函数中未规范化的乘法:```pythondef update(likelihood, prior): posterior = prior * likelihood p(z|x) * p(x) return normalize(posterior)```最后一项要考虑的是分母$p(z)$。这是在不考虑位置的情况下获得度量值$z$的概率。它通常被称为“evidence”。我们通过在代码中取$x$的总和或`sum(belief)`来计算。这就是我们计算标准化的方法!因此,`update()`函数只是计算贝叶斯定理。文献中经常以积分的形式给出这些方程。毕竟,积分就是对连续函数的求和。你可能会看到贝叶斯定理写成$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$这个分母通常是不可能分析解决的;当它能解出来的时候,数学就变得极其困难。最近的一篇评论文章(http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)for皇家统计学会称其为“狗的早餐”[8]。采用贝叶斯方法的过滤教科书充斥着没有解析解的积分方程。不要被这些方程所吓倒,因为我们通过对后验进行标准化来处理这个积分。我们将学习更多的技术来处理这在**粒子过滤器**章。在那之前,认识到在实践中它只是一个可以求和的标准化项。我想说的是,当你面对一页的积分时,只要把它们看作是和,并把它们与本章联系起来,通常困难就会消失。问问你自己"为什么我们要把这些值加起来" "为什么要除以这一项"。令人惊讶的是,答案往往是显而易见的。令人惊讶的是,作者常常忽略了这一解释。很有可能贝叶斯定理的威力还没有完全显现出来。我们想要计算$p(x_i \mid Z)$。也就是说,在第i步,我们给出测量值的可能状态是什么。总的来说,这是一个非常困难的问题。贝叶斯定理是普遍的。我们可能想知道根据癌症测试的结果,我们患癌症的概率,或者根据不同传感器的读数,下雨的概率。说得好像这些问题是无法解决的。 但是贝叶斯定理让我们用逆$p(Z\mid x_i)$来计算它,这通常很简单$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$ 也就是说,要计算给定特定传感器读数的下雨可能性,我们只需要计算给定下雨的传感器读数的可能性!这是一个***容易得多***的问题!天气预报仍然是一个困难的问题,但贝叶斯让它变得容易处理。 同样地,正如你在离散贝叶斯那一章中所看到的,我们通过计算传感器读数显示Simon处于“x”位置的可能性,来计算Simon在走廊中任何给定位置的可能性。困难的问题变得容易了。 
总概率定理现在我们知道了`update()`函数背后的数学形式;那么‘`predict()`函数呢?`predict()`实现了[*总概率定理*](https://en.wikipedia.org/wiki/Law_of_total_probability)。让我们回顾一下`predict()`计算的内容。它根据所有可能的运动事件的概率计算出在任何给定位置的概率。我们把它表示成一个方程。在时间$t$的任意位置$i$的概率可以写成$P(X_i^t)$。我们计算出$t-1时刻$ $P(X_j^{t-1})$乘以从单元格$ X_j $移动到$x_i$的概率。这是$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$这个方程叫做*总概率论*。引用维基百科[6]“它表达了一个结果可以通过几个不同事件实现的总概率”。我本可以给你那个等式并执行`predict()`,但你理解这个等式为何有效的机会很小。作为提醒,这里是计算这个方程的代码```pythonfor i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N result[i] += prob_dist[index] * kernel[k]``` 用scipy.stats计算概率在本章中,我使用了来自[FilterPy](https://github.com/rlabbe/filterpy)的代码来计算和绘制高斯函数。我这样做是为了让您有机会查看代码,并了解这些函数是如何实现的。然而,正如俗话所说,Python自带了“batteries included”,并且在模块' scipy.stats '中自带了广泛的统计函数。让我们来看看如何使用scipy。用来计算统计数据和概率。`scipy.stats`模块包含许多对象,你可以使用这些对象来计算各种概率分布的属性。这个模块的完整文档在这里:http://docs.scipy.org/doc/scipy/reference/stats.html 。我们将重点关注norm变量,它实现了正态分布。让我们看看一些使用`scipy.stats`的代码。norm 来计算高斯函数,并将其值与FilterPy的`gaussian()`函数返回的值进行比较。 ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown 调用`norm(2, 3)`创建了scipy所谓的“冻结”分布——它创建并返回一个平均值为2、标准差为3的对象。然后你可以多次使用这个对象来获得不同值的概率密度,如下所示: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown [2]的文档[scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor)列出了许多其他函数。例如,我们可以使用`rvs()`函数从分布中生成$n$ samples。 ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 1.02 -0.766 2.439 8.039 1.033 0.592 -1.492 3.568 4.615 6.621 4.631 -2.079 4.212 3.326 0.766] ###Markdown 我们可以得到[*累积分布函数(CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function),这是分布中随机抽取的值小于或等于$x$的概率。 ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown 我们可以得到分布的各种性质: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown 使用高斯模型来模拟世界的局限性前面我提到了*中心极限定理*,该定理指出,在一定条件下,任何独立随机变量的算术和都是正态分布的,不管随机变量是如何分布的。这对我们很重要,因为自然界充满了非正态分布,但当我们把中心极限定理应用到大的总体上时,我们最终得到的是正态分布。然而,证明的关键部分是“在一定条件下”。这些条件通常并不适用于物质世界。例如,厨房秤的读数不能低于0,但如果我们将测量误差表示为高斯分布,曲线的左侧会延伸到负无穷,这意味着给出负读数的可能性非常小。 这是一个广泛的话题,我不会详尽地讨论。让我们考虑一个简单的例子。我们认为考试分数是正态分布。如果你曾经遇到过一位教授“给曲线打分”,你就会受到这种假设的影响。当然,考试成绩不能服从正态分布。这是因为无论距离均值有多远,该分布都为*任意*值分配了一个非零概率分布。比如,均值是90,标准差是13。正态分布假设得到90的概率很大,得到40的概率很小正态分布假设得到90的概率很大,得到40的概率很小。然而,这也意味着有人得到-10分或150分的机会很小。它赋予获得$-10^{300}$或$10^{32986}$得分的极小机会。高斯分布的尾部无穷长。但作为测试,我们知道这不是真的。如果不考虑额外的学分,你的分数不可能少于0,也不可能超过100。让我们用正态分布来画出这些值的范围,看看它代表的真实考试分数分布有多差。 ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown 曲线下的面积不能等于1,所以它不是概率分布。实际发生的情况是,比正态分布预测的更多的学生得分更接近范围的上限(例如),而这条尾巴变得“胖”了。此外,该测试可能无法完美区分学生在技能上的细微差异,所以均值左侧的分布可能在某些地方有些扎堆。 传感器测量世界。传感器测量中的误差很少是真正的高斯分布。现在谈论这给卡尔曼滤波器设计者带来的困难还为时过早。值得记住的是,卡尔曼滤波数学是基于一个理想化的世界模型。现在,我将提供一些代码,我将在本书的后面使用这些代码来形成分布,以模拟各种进程和传感器。这个分布称为[*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution)。 假设我想建模一个输出有白噪声的传感器。简单起见,假设信号是常数10,而噪声的标准差是2。我们可以使用函数`numpy.random.randn()`来获得一个均值为0,标准差为1的随机数。我可以用: ###Code from 
numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown 让我们画出这个信号,看看它是什么样的。 ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown 这和我想的一样。信号的中心在10点左右。标准差为2意味着68%的测量值将在$\pm$ 2(10)以内,99%将在$\pm$ 6(10)以内,这看起来就像正在发生的事情。 现在让我们看看由Student的$t$-分布生成的分布。我将不涉及数学,只是给你它的源代码,然后用它绘制一个分布。 ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown 从图中我们可以看到,虽然输出与正态分布相似,但也有离均值超过3个标准差的异常值(7到13)。 学生的$t$-分布不太可能是你的传感器(比如,GPS或多普勒)如何执行的精确模型,而且这不是一本关于如何建模物理系统的书。然而,当呈现真实世界的噪声时,它确实会产生合理的数据来测试您的过滤器的性能。在我们的模拟和测试中,我们将在书的其余部分使用这些分布。 这不是杞人忧天。卡尔曼滤波方程假设噪声是正态分布的,如果这不是真的,则执行次最优。任务关键型滤波器的设计人员,如航天器上的滤波器,需要掌握航天器上传感器性能的大量理论和经验知识。例如,我在NASA的一次任务中看到的一篇演讲说,虽然理论上说他们应该使用3个标准偏差来区分噪音和有效测量结果,但实际上他们必须使用5到6个标准偏差。这是他们通过实验确定的。rand_student_t的代码包含在' filterpy.stats '中。 ```pythonfrom filterpy.stats import rand_student_t``` 虽然我不会在这里讨论它,但统计学定义了描述概率分布形状的方法,即概率分布与指数分布的区别。正态分布是围绕平均值对称形成的,就像钟形曲线。然而,概率分布可能是不对称的。这个测量方法叫做[*skew*](https://en.wikipedia.org/wiki/Skewness)。尾巴可以变短、变胖、变薄,或者形状与指数分布不同。这种度量称为[*峰度*](https://en.wikipedia.org/wiki/Kurtosis)。“scipy。Stats '模块包含了' description '函数,用于计算这些统计数据。 ###Code import scipy scipy.stats.describe(zs) ###Output _____no_output_____ ###Markdown 让我们来看看两个正常的群体,一个小,一个大: ###Code print(scipy.stats.describe(np.random.randn(10))) print() print(scipy.stats.describe(np.random.randn(300000))) ###Output DescribeResult(nobs=10, minmax=(-1.612305407192634, 1.3993311044560806), mean=0.18368165685455, variance=1.0302338908047082, skewness=-0.3189528779904156, kurtosis=-1.00039757782245) DescribeResult(nobs=300000, minmax=(-4.638869372819046, 4.74986868845982), mean=-0.0013202416761363633, variance=0.9991969257471748, skewness=0.0007986056292588667, kurtosis=0.003159950599910477) ###Markdown [Table of Contents](./table_of_contents.ipynb) Probabilities, Gaussians, and Bayes' Theorem ###Code from __future__ import division, print_function %matplotlib inline #format the book import book_format book_format.set_style() ###Output _____no_output_____ ###Markdown IntroductionThe last chapter ended by discussing some of the drawbacks of the Discrete Bayesian filter. For many tracking and filtering problems our desire is to have a filter that is *unimodal* and *continuous*. That is, we want to model our system using floating point math (continuous) and to have only one belief represented (unimodal). For example, we want to say an aircraft is at (12.34, -95.54, 2389.5) where that is latitude, longitude, and altitude. We do not want our filter to tell us "it might be at (1.65, -78.01, 2100.45) or it might be at (34.36, -98.23, 2543.79)." That doesn't match our physical intuition of how the world works, and as we discussed, it can be prohibitively expensive to compute the multimodal case. And, of course, multiple position estimates makes navigating impossible.We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate. Gaussian distributions provide all of these features. 
Mean, Variance, and Standard DeviationsMost of you will have had exposure to statistics, but allow me to cover this material anyway. I ask that you read the material even if you are sure you know it well. I ask for two reasons. First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information about the outcome. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. I always use bold symbols for vectors and matrices, which helps distinguish between the two. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|We denote this distribution with a lower case p: $p(x)$. 
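In code, the table is nothing more than a mapping from outcomes to probabilities whose values sum to one. A throwaway sketch using the fair-die values from the table above:

```python
p = {face: 1/6 for face in range(1, 7)}   # the fair die's probability distribution
print(p[4])                # 0.1666..., the probability of rolling a 4
print(sum(p.values()))     # 1.0, up to floating point rounding
```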
Using ordinary function notation, we would write:$$P(X{=}4) = p(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$". Note the subtle notational difference. The capital $P$ denotes the probability of a single event, and the lower case $p$ is the probability distribution function. This can lead you astray if you are not observent. Some texts use $Pr$ instead of $P$ to ameliorate this. Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions.In the previous chapter we used probability distributions to estimate the position of a dog in a hallway. For example: ###Code import numpy as np import kf_book.book_plots as book_plots belief = np.array([1, 4, 2, 0, 8, 2, 2, 35, 4, 3, 2]) belief = belief / np.sum(belief) with book_plots.figsize(y=2): book_plots.bar_plot(belief) print('sum = ', np.sum(belief)) ###Output sum = 1.0 ###Markdown Each position has a probability between 0 and 1, and the sum of all equals one, so this makes it a probability distribution. Each probability is discrete, so we can more precisely call this a discrete probability distribution. In practice we leave out the terms discrete and continuous unless we have a particular reason to make that distinction. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average of a set of data, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. 
If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code x = [1.8, 2.0, 1.7, 1.9, 1.6] np.mean(x) ###Output _____no_output_____ ###Markdown As a convenience NumPy arrays provide the method `mean()`. ###Code x = np.array([1.8, 2.0, 1.7, 1.9, 1.6]) x.mean() ###Output _____no_output_____ ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. In this case the median equals the mean, but that is not generally true. ###Code np.median(x) ###Output _____no_output_____ ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. 
This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{a}^b\, xf(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter. We can write a bit of Python to simulate this. Here I take 1,000,000 samples and compute the expected value of the distribution we just computed analytically. ###Code total = 0 N = 1000000 for r in np.random.rand(N): if r <= .80: total += 1 elif r < .95: total += 3 else: total += 5 total / N ###Output _____no_output_____ ###Markdown You can see that the computed value is close to the analytically derived value. It is not exact because getting an exact value requires an infinite sample size. Exercise What is the expected value of a die roll? Solution Each side is equally likely, so each has a probability of 1/6. Hence$$\begin{aligned}\mathbb E[X] &= 1/6\times1 + 1/6\times 2 + 1/6\times 3 + 1/6\times 4 + 1/6\times 5 + 1/6\times6 \\&= 1/6(1+2+3+4+5+6)\\&= 3.5\end{aligned}$$ Exercise Given the uniform continuous distribution$$f(x) = \frac{1}{b - a}$$compute the expected value for $a=0$ and $b=20$. Solution$$\begin{aligned}\mathbb E[X] &= \int_0^{20}\, x\frac{1}{20} \,dx \\&= \bigg[\frac{x^2}{40}\bigg]_0^{20} \\&= 10 - 0 \\&= 10\end{aligned}$$ Variance of a Random Variable The computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X), np.mean(Y), np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class. The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = \mathbb E[(X - \mu)^2]$$Ignoring the square for a moment, you can see that the variance is the *expected value* of how much the sample space $X$ varies from the mean $\mu$: $(X-\mu)$. I will explain the purpose of the squared term later. 
The formula for the expected value is $\mathbb E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print("{:.2f} meters squared".format(np.var(X))) ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. Let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = 1.8 + randn(100)*.1414 mean, std = data.mean(), data.std() plot_height_std(data, lw=2) print('mean = {:.3f}'.format(mean)) print('std = {:.3f}'.format(std)) ###Output _____no_output_____ ###Markdown By eye roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8, but we can verify this with code. ###Code np.sum((data > mean-std) & (data < mean+std)) / len(data) * 100. ###Output _____no_output_____ ###Markdown We'll discuss this in greater depth soon. 
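As a quick preview of that discussion, the same counting trick extends to two and three standard deviations. With only 100 simulated heights the percentages are rough, but they already hint at the pattern produced by the Gaussian used to generate `data`:

```python
for k in (1, 2, 3):
    inside = np.sum((data > mean - k*std) & (data < mean + k*std)) / len(data)
    print('within {} std: {:.0f}%'.format(k, inside * 100))
```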
For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We can verify that with NumPy: ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger. Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero.$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or only women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean height of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males nor the females in the school! We will not consider these issues in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the Differences Why are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom=False) ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct: each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we also get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same value. If we use the formula with the squared differences we get a variance of 12.25 for $Y$ versus 9 for $X$, which reflects $Y$'s larger variation. This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. 
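Before looking at an outlier, it is worth checking the $X$ and $Y$ comparison above numerically. A small sketch: the mean absolute deviation cannot tell the two sets apart, while the squared-difference formula can.

```python
import numpy as np

for name, values in (('X', [3, -3, 3, -3]), ('Y', [6, -2, -3, 1])):
    d = np.asarray(values, dtype=float)
    mad = np.mean(np.abs(d - d.mean()))       # mean absolute deviation
    print('{}: mean abs deviation = {:.2f}, variance = {:.2f}'.format(name, mad, np.var(d)))
```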
For example, let's see what happens if we have:
###Code
X = [1, -1, 1, -2, -1, 2, 1, 2, -1, 1, -1, 2, 1, -2, 100]
print('Variance of X with outlier    = {:6.2f}'.format(np.var(X)))
print('Variance of X without outlier = {:6.2f}'.format(np.var(X[:-1])))
###Output
Variance of X with outlier    = 621.45
Variance of X without outlier =   2.03
###Markdown
Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.03$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the variance computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? Again, you tell me. Obviously it depends on your problem.

I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [4]. In this book we will always use variance and standard deviation as defined by Gauss.

The point to gather from this is that these *summary* statistics always tell an incomplete story about our data. In this example the variance as defined by Gauss does not tell us we have a single large outlier. However, it is a powerful tool, as we can concisely describe a large data set with a few numbers. If we had 1 billion data points we would not want to inspect plots by eye or look at lists of numbers; summary statistics give us a way to describe the shape of the data in a useful way.

Gaussians

We are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.

> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.

Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about.
###Code
from filterpy.stats import plot_gaussian_pdf
plot_gaussian_pdf(mean=1.8, variance=0.1414**2,
                  xlabel='Student Height', ylabel='pdf');
###Output
_____no_output_____
###Markdown
This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. We can tell from the chart that a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m than 1.4 m. Put another way, many students will have a height near 1.8 m, and very few students will have a height of 1.4 m or 2.2 meters. Finally, notice that the curve is centered over the mean of 1.8 m.

> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].

This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape.
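For instance, here is a standard Gaussian plotted next to a Student's $t$-distribution, which we will meet near the end of this chapter. Both are 'bell curves', but they are not the same distribution. This is a quick sketch using `scipy.stats`:
###Code
from scipy.stats import norm, t

xs = np.arange(-4, 4, 0.01)
plt.plot(xs, norm.pdf(xs), label='Gaussian')
plt.plot(xs, t.pdf(xs, 2), ls='--', label="Student's $t$, 2 degrees of freedom")
plt.legend();
###Output
_____no_output_____
###Markdown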
Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [0., 0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. Though these charts do not show it, the *tails* of the distribution extend out to infinity. *Tails* are the far ends of the curve where the values are the lowest. Of course human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. 
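In code the equation is only a couple of lines. Here is a sketch using plain NumPy; the name `gaussian_pdf` is invented for this illustration, and the book's own implementation, `filterpy.stats.gaussian`, is introduced below:
###Code
def gaussian_pdf(x, mu, sigma):
    # 1/(sigma*sqrt(2*pi)) * exp(-(x - mu)^2 / (2*sigma^2))
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# density at the mean of the student height Gaussian plotted earlier
print(gaussian_pdf(1.8, mu=1.8, sigma=0.1414))
###Output
_____no_output_____
###Markdown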
Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var, normed=True)`. Shorn of the constants, you can see it is a simple exponential: $$f(x)\propto e^{-x^2}$$ which has the familiar bell curve shape
###Code
x = np.arange(-3, 3, .01)
plt.plot(x, np.exp(-x**2));
###Output
_____no_output_____
###Markdown
Let's remind ourselves how to look at the code for a function. In a cell, type the function name followed by two question marks and press CTRL+ENTER. This will open a popup window displaying the source. Uncomment the next cell and try it now.
###Code
from filterpy.stats import gaussian
#gaussian??
###Output
_____no_output_____
###Markdown
Let's plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$.
###Code
plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$');
###Output
_____no_output_____
###Markdown
What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called the [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements, the measurements will be normally distributed. When we look at this chart we can see it is proportional to the probability of the thermometer reading a particular value given the actual temperature of 22°C.

Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a randomly picked point is exactly at 2? Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of the reading being *exactly* 22°C is 0% because there are an infinite number of values the reading can take.

What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures.

Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume. $$M = \iiint_R p(x,y,z)\, dV$$ We do the same with *probability density*. If you want to know the probability of the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability.

What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of 22°C vs, say, 22.00000000000017°C are infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. Thinking back to the rock, what is the weight of a single point on the rock? An infinitesimal point must have no weight.
It makes no sense to ask the weight of a single point, and it makes no sense to ask about the probability of a continuous distribution having a single value. The answer for both is obviously zero.

In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.

We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18°C is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve.

How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$ We evaluate such integrals with the *cumulative distribution function*, commonly abbreviated *cdf*. I wrote `filterpy.stats.norm_cdf` which computes the integral over a range for you. For example, we can compute
###Code
from filterpy.stats import norm_cdf
print('Cumulative probability of range 21.5 to 22.5 is {:.2f}%'.format(
      norm_cdf((21.5, 22.5), 22, 4)*100))
print('Cumulative probability of range 23.5 to 24.5 is {:.2f}%'.format(
      norm_cdf((23.5, 24.5), 22, 4)*100))
###Output
Cumulative probability of range 21.5 to 22.5 is 19.74%
Cumulative probability of range 23.5 to 24.5 is 12.10%
###Markdown
The mean ($\mu$) is what it sounds like — the average of all possible values, weighted by how likely each one is. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean.

The notation for a normal distribution for a random variable $X$ is $X \sim \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as $$\text{temp} \sim \mathcal{N}(22,4)$$ This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.

Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example.

The Variance and Belief

Since this is a probability density function it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$.)
###Code
print(norm_cdf((-1e8, 1e8), mu=0, var=4))
###Output
1.0
###Markdown
This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall.
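We can also check numerically that the area stays at 1 no matter what the variance is. Here is a small sketch that integrates the density with a simple rectangle rule; the grid spacing of 0.01 is an arbitrary choice for the illustration:
###Code
xs = np.arange(0, 44, 0.01)
for v in [0.2**2, 1., 4.]:
    ys = gaussian(xs, 22, v, normed=False)  # density values, not normalized to sum to 1
    print('variance', v, ' area', np.sum(ys) * 0.01)  # each is very close to 1
###Output
_____no_output_____
###Markdown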
On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.

Let's look at that graphically. We will use the aforementioned `filterpy.stats.gaussian` which can take either a single value or array of values.
###Code
from filterpy.stats import gaussian

print(gaussian(x=3.0, mean=2.0, var=1))
print(gaussian(x=[3.0, 2.0], mean=2.0, var=1))
###Output
0.24197072451914337
[0.378 0.622]
###Markdown
By default `gaussian` normalizes the output, which turns the output back into a probability distribution. Use the argument `normed` to control this.
###Code
print(gaussian(x=[3.0, 2.0], mean=2.0, var=1, normed=False))
###Output
[0.242 0.399]
###Markdown
If the Gaussian is not normalized it is called a *Gaussian function* instead of *Gaussian distribution*.
###Code
xs = np.arange(15, 30, 0.05)
plt.plot(xs, gaussian(xs, 23, 0.2**2), label='$\sigma^2=0.2^2$')
plt.plot(xs, gaussian(xs, 23, .5**2), label='$\sigma^2=0.5^2$', ls=':')
plt.plot(xs, gaussian(xs, 23, 1**2), label='$\sigma^2=1^2$', ls='--')
plt.legend();
###Output
_____no_output_____
###Markdown
What is this telling us? The Gaussian with $\sigma^2=0.2^2$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that: the standard deviation is only 0.2. In contrast, the Gaussian with $\sigma^2=1^2$ also believes that $x=23$, but we are much less sure about that. Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — values as far away as $x=20$ or $x=26$ are still considered possible, for example. $\sigma^2=0.2^2$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=1^2$ considers them nearly as likely as $23$.

If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.2^2$ represents a very accurate thermometer, and the curve for $\sigma^2=1^2$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.

An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; the precision is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.

I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using.

The 68-95-99.7 Rule

It is worth spending a few words on standard deviation now. The standard deviation is a measure of how much the data deviates from the mean.
For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$).

Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$. As you saw in the last section, writing $\sigma^2 = 0.2^2$ can make this somewhat more meaningful, since the 0.2 is in the same units as the data.

The following graph depicts the relationship between the standard deviation and the normal distribution.
###Code
from kf_book.gaussian_internal import display_stddev_plot
display_stddev_plot()
###Output
_____no_output_____
###Markdown
Interactive Gaussians

For those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner.
###Code
import math
from ipywidgets import interact, FloatSlider

def plt_g(mu, variance):
    plt.figure()
    xs = np.arange(2, 8, 0.01)
    ys = gaussian(xs, mu, variance)
    plt.plot(xs, ys)
    plt.ylim(0, 0.04)

interact(plt_g, mu=FloatSlider(value=5, min=3, max=7),
         variance=FloatSlider(value=.03, min=.01, max=1.));
###Output
_____no_output_____
###Markdown
Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified.

Computational Properties of Gaussians

The discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians.

A remarkable property of Gaussian distributions is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian. Thus we can say that the result of multiplying two Gaussian distributions is a Gaussian function (recall that *function* in this context means the values are not guaranteed to sum to one).

Before we do the math, let's test this visually.
###Code
x = np.arange(-1, 3, 0.01)
g1 = gaussian(x, mean=0.8, var=.1)
g2 = gaussian(x, mean=1.3, var=.2)
plt.plot(x, g1, x, g2)

g = g1 * g2  # element-wise multiplication
g = g / sum(g)  # normalize
plt.plot(x, g, ls='-.');
###Output
_____no_output_____
###Markdown
Here I created two Gaussians, g1=$\mathcal N(0.8, 0.1)$ and g2=$\mathcal N(1.3, 0.2)$ and plotted them. Then I multiplied them together and normalized the result. As you can see the result *looks* like a Gaussian distribution.

Gaussians are nonlinear functions. Typically, if you multiply two nonlinear functions you end up with a different type of function.
For example, the shape you get by multiplying two sine waves is very different from `sin(x)`.
###Code
x = np.arange(0, 4*np.pi, 0.01)
plt.plot(np.sin(1.2*x))
plt.plot(np.sin(1.2*x) * np.sin(2*x));
###Output
_____no_output_____
###Markdown
But the result of multiplying two Gaussian distributions is a Gaussian function. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice.

The product of two independent Gaussians is given by: $$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$ The sum of two Gaussians is given by $$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$ At the end of the chapter I derive these equations. However, understanding the derivation is not very important.

Putting it all Together

Now we are ready to talk about how Gaussians can be used in filtering. In the next chapter we will implement a filter using Gaussians. Here I will explain why we would want to use Gaussians.

In the previous chapter we represented probability distributions with an array. We performed the update computation by computing the element-wise product of that distribution with another distribution representing the likelihood of the measurement at each point, like so:
###Code
def normalize(p):
    return p / sum(p)

def update(likelihood, prior):
    return normalize(likelihood * prior)

prior = normalize(np.array([4, 2, 0, 7, 2, 12, 35, 20, 3, 2]))
likelihood = normalize(np.array([3, 4, 1, 4, 2, 38, 20, 18, 1, 16]))
posterior = update(likelihood, prior)
book_plots.bar_plot(posterior)
###Output
_____no_output_____
###Markdown
In other words, we have to compute 10 multiplications to get this result. For a real filter with large arrays in multiple dimensions we'd require billions of multiplications, and vast amounts of memory.

But this distribution looks like a Gaussian. What if we use a Gaussian instead of an array? I'll compute the mean and variance of the posterior and plot it against the bar chart.
###Code
xs = np.arange(0, 10, .01)

def mean_var(p):
    x = np.arange(len(p))
    mean = np.sum(p * x, dtype=float)
    var = np.sum((x - mean)**2 * p)
    return mean, var

mean, var = mean_var(posterior)
book_plots.bar_plot(posterior)
plt.plot(xs, gaussian(xs, mean, var, normed=False), c='r');
print('mean: %.2f' % mean, 'var: %.2f' % var)
###Output
mean: 5.88 var: 1.24
###Markdown
This is impressive. We can describe an entire distribution of numbers with only two numbers. Perhaps this example is not persuasive, given there are only 10 numbers in the distribution. But a real problem could have millions of numbers, yet still only require two numbers to describe it.

Next, recall that our filter implements the update function with

```python
def update(likelihood, prior):
    return normalize(likelihood * prior)
```

If the arrays contain a million elements, that is one million multiplications. However, if we replace the arrays with a Gaussian then we would perform that calculation with $$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$ which is three multiplications and two divisions.

Bayes Theorem

In the last chapter we developed an algorithm by reasoning about the information we have at each moment, which we expressed as discrete probability distributions.
In the process we discovered [*Bayes' Theorem*](https://en.wikipedia.org/wiki/Bayes%27_theorem). Bayes theorem tells us how to compute the probability of an event given prior information. We implemented the `update()` function with this probability calculation:$$ \mathtt{posterior} = \frac{\mathtt{likelihood}\times \mathtt{prior}}{\mathtt{normalization}}$$ It turns out that this is Bayes' theorem. In a second I will develop the mathematics, but in many ways that obscures the simple idea expressed in this equation. We read this as:$$ updated\,knowledge = \big\|likelihood\,of\,new\,knowledge\times prior\, knowledge \big\|$$where $\| \cdot\|$ expresses normalizing the term.We came to this with simple reasoning about a dog walking down a hallway. Yet, as we will see, the same equation applies to a universe of filtering problems. We will use this equation in every subsequent chapter.To review, the *prior* is the probability of something happening before we include the probability of the measurement (the *likelihood*) and the *posterior* is the probability we compute after incorporating the information from the measurement.Bayes theorem is$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}$$$P(A \mid B)$ is called a [*conditional probability*](https://en.wikipedia.org/wiki/Conditional_probability). That is, it represents the probability of $A$ happening *if* $B$ happened. For example, it is more likely to rain today compared to a typical day if it also rained yesterday because rain systems usually last more than one day. We'd write the probability of it raining today given that it rained yesterday as $P$(rain today $\mid$ rain yesterday).I've glossed over an important point. In our code above we are not working with single probabilities, but an array of probabilities - a *probability distribution*. The equation I just gave for Bayes uses probabilities, not probability distributions. However, it is equally valid with probability distributions. We use a lower case $p$ for probability distributions$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{p(B)}$$In the equation above $B$ is the *evidence*, $p(A)$ is the *prior*, $p(B \mid A)$ is the *likelihood*, and $p(A \mid B)$ is the *posterior*. By substituting the mathematical terms with the corresponding words you can see that Bayes theorem matches our update equation. Let's rewrite the equation in terms of our problem. We will use $x_i$ for the position at *i*, and $z$ for the measurement. Hence, we want to know $P(x_i \mid z)$, that is, the probability of the dog being at $x_i$ given the measurement $z$. So, let's plug that into the equation and solve it.$$p(x_i \mid z) = \frac{p(z \mid x_i) p(x_i)}{p(z)}$$That looks ugly, but it is actually quite simple. Let's figure out what each term on the right means. First is $p(z \mid x_i)$. This is the likelihood, or the probability for the measurement at every cell $x_i$. $p(x_i)$ is the *prior* - our belief before incorporating the measurements. We multiply those together. This is just the unnormalized multiplication in the `update()` function:```pythondef update(likelihood, prior): posterior = prior * likelihood p(z|x) * p(x) return normalize(posterior)```The last term to consider is the denominator $p(z)$. This is the probability of getting the measurement $z$ without taking the location into account. It is often called the *evidence*. We compute that by taking the sum of $x$, or `sum(belief)` in the code. That is how we compute the normalization! 
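We can make that concrete with the `prior` and `likelihood` arrays from the cell above. This small sketch reuses the `update()` function defined there and computes the same posterior by applying Bayes' theorem term by term:
###Code
# evidence p(z): sum the unnormalized product over every position x_i
evidence = np.sum(likelihood * prior)
posterior_bayes = likelihood * prior / evidence
print('p(z) =', evidence)
print('matches update()?', np.allclose(posterior_bayes, update(likelihood, prior)))
###Output
_____no_output_____
###Markdown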
So, the `update()` function is doing nothing more than computing Bayes' theorem.The literature often gives you these equations in the form of integrals. After all, an integral is just a sum over a continuous function. So, you might see Bayes' theorem written as$$p(A \mid B) = \frac{p(B \mid A)\, p(A)}{\int p(B \mid A_j) p(A_j) \,\, \mathtt{d}A_j}\cdot$$This denominator is usually impossible to solve analytically; when it can be solved the math is fiendishly difficult. A recent [opinion piece ](http://www.statslife.org.uk/opinion/2405-we-need-to-rethink-how-we-teach-statistics-from-the-ground-up)for the Royal Statistical Society called it a "dog's breakfast" [8]. Filtering textbooks that take a Bayesian approach are filled with integral laden equations with no analytic solution. Do not be cowed by these equations, as we trivially handled this integral by normalizing our posterior. We will learn more techniques to handle this in the **Particle Filters** chapter. Until then, recognize that in practice it is just a normalization term over which we can sum. What I'm trying to say is that when you are faced with a page of integrals, just think of them as sums, and relate them back to this chapter, and often the difficulties will fade. Ask yourself "why are we summing these values", and "why am I dividing by this term". Surprisingly often the answer is readily apparent. Surprisingly often the author neglects to mention this interpretation.It's probable that the strength of Bayes' theorem is not yet fully apparent to you. We want to compute $p(x_i \mid Z)$. That is, at step i, what is our probable state given a measurement. That's an extraordinarily difficult problem in general. Bayes' Theorem is general. We may want to know the probability that we have cancer given the results of a cancer test, or the probability of rain given various sensor readings. Stated like that the problems seem unsolvable.But Bayes' Theorem lets us compute this by using the inverse $p(Z\mid x_i)$, which is often straightforward to compute$$p(x_i \mid Z) \propto p(Z\mid x_i)\, p(x_i)$$That is, to compute how likely it is to rain given specific sensor readings we only have to compute the likelihood of the sensor readings given that it is raining! That's a ***much*** easier problem! Well, weather prediction is still a difficult problem, but Bayes makes it tractable. Likewise, as you saw in the Discrete Bayes chapter, we computed the likelihood that Simon was in any given part of the hallway by computing how likely a sensor reading is given that Simon is at position `x`. A hard problem becomes easy. Total Probability TheoremWe now know the formal mathematics behind the `update()` function; what about the `predict()` function? `predict()` implements the [*total probability theorem*](https://en.wikipedia.org/wiki/Law_of_total_probability). Let's recall what `predict()` computed. It computed the probability of being at any given position given the probability of all the possible movement events. Let's express that as an equation. The probability of being at any position $i$ at time $t$ can be written as $P(X_i^t)$. We computed that as the sum of the prior at time $t-1$ $P(X_j^{t-1})$ multiplied by the probability of moving from cell $x_j$ to $x_i$. That is$$P(X_i^t) = \sum_j P(X_j^{t-1}) P(x_i | x_j)$$That equation is called the *total probability theorem*. Quoting from Wikipedia [6] "It expresses the total probability of an outcome which can be realized via several distinct events". 
I could have given you that equation and implemented `predict()`, but your chances of understanding why the equation works would be slim. As a reminder, here is the code that computes this equation```pythonfor i in range(N): for k in range (kN): index = (i + (width-k) - offset) % N result[i] += prob_dist[index] * kernel[k]``` Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.13114657203397997 0.13114657203397995 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.htmlscipy.stats.normfor) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function. ###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [-0.08 2.024 1.4 3.024 5.799 0.989 2.083 0.978 7.542 -2.22 4.984 0.626 4.387 3.676 -0.12 ] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Limitations of Using Gaussians to Model the WorldEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. 
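To see the theorem at work, here is a quick sketch. A single uniformly distributed sample looks nothing like a Gaussian, but the sum of a dozen of them already produces a bell-shaped histogram; the sample counts are arbitrary choices for this illustration:
###Code
from numpy.random import uniform

# each row is one trial: the sum of 12 independent uniform(0, 1) samples
sums = uniform(0, 1, size=(10000, 12)).sum(axis=1)
plt.hist(sums, bins=50);  # roughly Gaussian, centered near 12 * 0.5 = 6
###Output
_____no_output_____
###Markdown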
These conditions often do not hold for the physical world. For example, a kitchen scale cannot read below zero, but if we represent the measurement error as a Gaussian the left side of the curve extends to negative infinity, implying a very small chance of giving a negative reading.

This is a broad topic which I will not treat exhaustively. Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability density to *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an extremely small chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.

But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution with a mean of 90 and a standard deviation of 13 to see how poorly this represents real test score distributions.
###Code
xs = np.arange(10, 100, 0.05)
ys = [gaussian(x, 90, 13**2) for x in xs]
plt.plot(xs, ys)
plt.xlim(0, 120)
plt.ylim(-0.02, 0.09);
###Output
_____no_output_____
###Markdown
The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places.

Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution).

Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with:
###Code
from numpy.random import randn

def sense():
    return 10 + randn()*2
###Output
_____no_output_____
###Markdown
Let's plot that signal and see what it looks like.
###Code
zs = [sense() for i in range(5000)]
plt.plot(zs, lw=1);
###Output
_____no_output_____
###Markdown
That looks like what I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99.7% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a distribution generated with the Student's $t$-distribution.
I will not go into the math, but just give you the source code for it and then plot a distribution using it.
###Code
import random
import math

def rand_student_t(df, mu=0, std=1):
    """return random number distributed by Student's t distribution
    with `df` degrees of freedom with the specified mean and
    standard deviation.
    """
    x = random.gauss(0, std)
    y = 2.0*random.gammavariate(0.5*df, 2.0)
    return x / (math.sqrt(y / df)) + mu

def sense_t():
    return 10 + rand_student_t(7)*2

zs = [sense_t() for i in range(5000)]
plt.plot(zs, lw=1);
###Output
_____no_output_____
###Markdown
We can see from the plot that while the output is similar to the normal distribution there are outliers that go far more than 3 standard deviations from the mean. It is unlikely that the Student's $t$-distribution is an accurate model of how your sensor (say, a GPS or Doppler) performs, and this is not a book on how to model physical systems. However, it does produce reasonable data to test your filter's performance when presented with real world noise. We will be using distributions like these throughout the rest of the book in our simulations and tests.

This is not an idle concern. The Kalman filter equations assume the noise is normally distributed, and perform sub-optimally if this is not true. Designers for mission critical filters, such as the filters on spacecraft, need to master a lot of theory and empirical knowledge about the performance of the sensors on their spacecraft. For example, a presentation I saw on a NASA mission stated that while theory says they should use 3 standard deviations to distinguish noise from valid measurements, in practice they had to use 5 to 6 standard deviations. This was something they determined by experiments.

The code for rand_student_t is included in `filterpy.stats`. You may use it with

```python
from filterpy.stats import rand_student_t
```

While I'll not cover it here, statistics has defined ways of describing the shape of a probability distribution by how it varies from a normal distribution. The normal distribution is shaped symmetrically around the mean - like a bell curve. However, a probability distribution can be asymmetrical around the mean. The measure of this is called [*skew*](https://en.wikipedia.org/wiki/Skewness). The tails can be shortened, fatter, thinner, or otherwise shaped differently from those of a normal distribution. The measure of this is called [*kurtosis*](https://en.wikipedia.org/wiki/Kurtosis). The `scipy.stats` module contains the function `describe` which computes these statistics, among others.
###Code
import scipy
scipy.stats.describe(zs)
###Output
_____no_output_____
###Markdown
Let's examine two normal populations, one small, one large:
###Code
print(scipy.stats.describe(np.random.randn(10)))
print()
print(scipy.stats.describe(np.random.randn(300000)))
###Output
DescribeResult(nobs=10, minmax=(-1.8106190910322406, 1.7202801709655346), mean=0.03998695860303425, variance=1.2099810612140205, skewness=0.054824114606583485, kurtosis=-0.8322079773586668)
DescribeResult(nobs=300000, minmax=(-5.136201903633123, 4.498934900223554), mean=0.0016752908705450242, variance=1.0019122279656631, skewness=0.002460339180965745, kurtosis=-0.0022807108788165387)
This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. In later chapters we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|Some sources call this the *probability function*. Using ordinary function notation, we would write:$$P(X{=}4) = f(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$. Some texts use $Pr$ or $Prob$ instead of $P$.Another example is a fair coin. It has the sample space {H, T}. The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int P(X{=}u) \,du= 1$$for continuous distributions. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we will want to know the *average* height of the students. 
We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code import numpy as np x = [1.85, 2.0, 1.7, 1.9, 1.6] print(np.mean(x)) ###Output 1.81 ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than te set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. As you can see the median of {1.85, 2.0, 1.7, 1.9, 1.6} is 1.85, because 1.85 is the third element of this set after being sorted. ###Code print(np.median(x)) ###Output 1.85 ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. 
This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \sum_{i=1}^n \frac{1}{n}x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter. Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X)) print(np.mean(Y)) print(np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). The equation for computing the variance is$$\mathit{VAR}(X) = E[(X - \mu)^2]$$Ignoring the squared terms for a moment, you can see that the variance is the *expected value* for how much the sample space ($X$) varies from the mean (squared, of course). We have the formula for the expected value $E[X] = \sum\limits_{i=1}^n p_ix_i$, and we will assume that any height is equally probable, so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(np.var(X), "meters squared") ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. 
In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from book_format import set_figsize, figsize from code.book_plots import interactive_plot from code.gaussian_internal import plot_height_std import matplotlib.pyplot as plt with interactive_plot(): plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. ###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] with interactive_plot(): plot_height_std(data, lw=2) print('mean = {:.3f}'.format(np.mean(data))) print('std = {:.3f}'.format(np.std(data))) ###Output _____no_output_____ ###Markdown We can see by eye that roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8.We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.4f} m'.format(np.std(Y))) ###Output std of Y is 0.3899 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. 
We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! It's too early to understand why, but we will not normally be faced with these problems in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code with interactive_plot(): X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom='off') ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. If we use the correct formula we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that is is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have $X = [1,-1,1,-2,3,2,100]$. ###Code X = [1, -1, 1, -2, 3, 2, 100] print('Variance of X = {:.2f}'.format(np.var(X))) ###Output Variance of X = 1210.69 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the computation. I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). 
Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plt.figure() ax = plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf') ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. In the chart above, a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.1 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. In fact, this is the curve for the student heights given earlier. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter. They were not perfect Gaussian curves, but they were similar, as in the plot below. We will be using Gaussians to replace the discrete probabilities used in that chapter! ###Code import code.book_plots as book_plots belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] with interactive_plot(): book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code with interactive_plot(): ax = plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)') ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis.You may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $-\infty$. This is true, but this is a common limitation of mathematical modeling. 
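For example, the speed model above assigns a small but nonzero probability to impossible negative speeds. A minimal check — a sketch using `filterpy.stats.norm_cdf`, which is introduced properly a little further on — shows just how small that probability is:

```python
from filterpy.stats import norm_cdf

# probability mass that the N(120, 17**2) model places on speeds below zero
p_negative = norm_cdf((-1e8, 0.), 120, 17**2)
print('P(speed < 0) = {:.2e}'.format(p_negative))   # vanishingly small, but not zero
```

The model is technically wrong about negative speeds, but the probability it assigns to them is negligible.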
“The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will see these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.```python%load -s gaussian stats.pydef gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. """ return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) / math.sqrt(2*math.pi*var))```We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ###Code from filterpy.stats import gaussian, norm_cdf with interactive_plot(): ax = plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$') ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements that the measurements will be normally distributed. When we look at this chart we can "sort of" think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2. Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? 
It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22 is very high, and our belief that the actual temperature is near 18 is very low. As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code print('Probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of range 21.5 to 22.5 is 19.74% Probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible probabilities. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements for over any range.> Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. this is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. 
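In fact the peak of the curve, at $x=\mu$, is exactly $\frac{1}{\sigma\sqrt{2\pi}}$, so the smaller the variance the taller the peak. A quick sketch with the `gaussian()` function used in this chapter makes the point numerically:

```python
from filterpy.stats import gaussian

# peak density of a Gaussian evaluated at its mean, for several variances
for var in (0.05, 1., 5.):
    print('variance {:>4}: peak density = {:.4f}'.format(var, gaussian(23., 23., var)))
```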
On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically: ###Code import numpy as np import matplotlib.pyplot as plt xs = np.arange(15, 30, 0.05) with interactive_plot(): plt.plot(xs, gaussian(xs, 23, 0.05), label='$\sigma^2$=0.05', c='b') plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':', c='b') plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--', c='b') plt.legend() ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.05$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that. Our believe that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.05$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=5$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.05$ represents a very accurate thermometer, and curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. 
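Before continuing with that example, it is worth verifying the 68-95-99.7 rule numerically. A short sketch with the same `norm_cdf` helper used earlier integrates a standard normal over $\pm1\sigma$, $\pm2\sigma$, and $\pm3\sigma$:

```python
from filterpy.stats import norm_cdf

# area under a standard normal N(0, 1) within k standard deviations of the mean
for k in (1, 2, 3):
    print('within {} standard deviations: {:.2%}'.format(k, norm_cdf((-k, k), 0, 1)))
```

The results come out to roughly 68%, 95%, and 99.7%, as the rule states.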
Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from code.gaussian_internal import display_stddev_plot with interactive_plot(): display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner. ###Code import math from IPython.html.widgets import interact, interactive, fixed set_figsize(y=3) def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.1) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim((0, 1)) interact (plt_g, mu=(0., 10), variance = (.2, 1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansA remarkable property of Gaussians is that the product of two independent Gaussians is another Gaussian! The sum is not Gaussian, but proportional to a Gaussian.The discrete Bayes filter works by multiplying and adding probabilities. I'm getting ahead of myself, but the Kalman filter uses Gaussians instead of probabilities, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. The Gaussian is a nonlinear function, and typically if you multiply a nonlinear equation with itself you end up with a different equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the result of multiplying two Gaussians is yet another Gaussian. This is a fundamental property, and a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The remainder of this section is optional. I will derive the equations for the sum and product of two Gaussians. You will not need to understand this material to understand the rest of the book, so long as you accept the results. Product of GaussiansThe product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$You can find this result by multiplying the equation for two Gaussians together and combining terms. The algebra gets messy. I will derive it using Bayes theorem. We can state the problem as: let the prior be $N(\bar\mu, \bar\sigma^2)$, and measurement be $z \propto N(z, \sigma_z^2)$. What is the posterior x given the measurement z?Write the posterior as $P(x \mid z)$. 
Now we can use Bayes Theorem to state

$$P(x \mid z) = \frac{P(z \mid x)P(x)}{P(z)}$$

$P(z)$ is a normalizing constant, so we can create a proportionality

$$P(x \mid z) \propto P(z \mid x)P(x)$$

Now we substitute in the equations for the Gaussians, which are

$$P(z \mid x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]$$

$$P(x) = \frac{1}{\sqrt{2\pi\bar\sigma^2}}\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]$$

We can drop the leading terms, as they are constants, giving us

$$\begin{aligned}
P(x \mid z) &\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]\\
&\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big] \\
&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z-x)^2+\sigma_z^2(x-\bar\mu)^2]\Big]
\end{aligned}$$

Now we multiply out the squared terms and group in terms of the posterior $x$.

$$\begin{aligned}
P(x \mid z) &\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z^2 -2xz + x^2) + \sigma_z^2(x^2 - 2x\bar\mu+\bar\mu^2)]\Big ] \\
&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z) + (\bar\sigma^2z^2+\sigma_z^2\bar\mu^2)]\Big ]
\end{aligned}$$

The last parentheses do not contain the posterior $x$, so that term can be treated as a constant and discarded.

$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z)}{\sigma_z^2\bar\sigma^2}\Big ]$$

Divide numerator and denominator by $\bar\sigma^2+\sigma_z^2$ to get

$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2-2x(\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$

Proportionality lets us create or delete constants at will, so we can complete the square and factor this into

$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{(x-\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})^2}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$

A Gaussian is

$$N(\mu,\, \sigma^2) \propto \exp\Big [-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\Big ]$$

So we can see that $P(x \mid z)$ has a mean of

$$\mu_\mathtt{posterior} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$

and a variance of

$$\sigma^2_\mathtt{posterior} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}$$

I've dropped the constants, and so the result is not a normal, but proportional to one. Bayes theorem normalizes with the $P(z)$ divisor, ensuring that the result is normal. We normalize in the update step of our filters, ensuring the filter estimate is Gaussian. In other words, the normalized product of two Gaussians is itself a Gaussian:

$$\mathcal N_1 = \| \mathcal N_2\cdot \mathcal N_3\|$$

Sum of Gaussians

The sum of two Gaussians is given by

$$\begin{gathered}
\mu = \mu_1 + \mu_2 \\
\sigma^2 = \sigma^2_1 + \sigma^2_2
\end{gathered}$$

There are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities. To find the density function of the sum of two Gaussian random variables we convolve their density functions. They are continuous functions, so the convolution requires an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent we can compute this with

$$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$$

This is the equation for a convolution.
Now we just do some math:

$$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$$

$$= \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_p}\exp\left[-\frac{(x - z - \mu_p)^2}{2\sigma^2_p}\right]\frac{1}{\sqrt{2\pi}\sigma_z}\exp\left[-\frac{(z - \mu_z)^2}{2\sigma^2_z}\right] \, dz$$

Completing the square in $z$ splits the integrand into a factor that does not depend on $z$ and a normal distribution in $z$:

$$= \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[-\frac{\big(x - (\mu_p + \mu_z)\big)^2}{2(\sigma_z^2+\sigma_p^2)}\right]\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[-\frac{\left(z - \frac{\sigma_z^2(x-\mu_p) + \sigma_p^2\mu_z}{\sigma_p^2+\sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dz$$

$$= \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[-\frac{\big(x - (\mu_p + \mu_z)\big)^2}{2(\sigma_z^2+\sigma_p^2)}\right] \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[-\frac{\left(z - \frac{\sigma_z^2(x-\mu_p) + \sigma_p^2\mu_z}{\sigma_p^2+\sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dz$$

The expression inside the integral is a normal distribution in $z$. The area under a normal distribution is one, hence the integral is one. This gives us

$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{\big(x - (\mu_p + \mu_z)\big)^2}{2(\sigma_z^2+\sigma_p^2)}\right]$$

This is in the form of a normal, where

$$\begin{gathered}
\mu_x = \mu_p + \mu_z \\
\sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square
\end{gathered}$$

Computing Probabilities with scipy.stats

In this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.

The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy.
###Code
from scipy.stats import norm
import filterpy.stats
print(norm(2, 3).pdf(1.5))
print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3))
###Output
0.131146572034
0.131146572034
###Markdown
The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so:
###Code
n23 = norm(2, 3)
print('pdf of 1.5 is %.4f' % n23.pdf(1.5))
print('pdf of 2.5 is also %.4f' % n23.pdf(2.5))
print('pdf of 2 is %.4f' % n23.pdf(2))
###Output
pdf of 1.5 is 0.1311
pdf of 2.5 is also 0.1311
pdf of 2 is 0.1330
###Markdown
The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function.
###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [ 6.7 5.323 3.043 3.361 4.981 3.122 2.841 0.552 6.937 5.474 0.829 1.398 0.555 -3.212 1.555] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Fat TailsEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. The resulting distributions are called *fat tailed*. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an infinitesimal chance of getting a score of $-10^{300}$ or $10^{32986}$. The *tails* of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution. ###Code xs = np.arange(10,100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] with interactive_plot(): plt.plot(xs, ys, label='var=0.2') plt.xlim((0,120)) plt.ylim(0, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish incredibly minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a [*fat tail distribution*](https://en.wikipedia.org/wiki/Fat-tailed_distribution). Kalman filters use sensors to measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. 
It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution).

Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with:
###Code
from numpy.random import randn

def sense():
    return 10 + randn()*2
###Output
_____no_output_____
###Markdown
Let's plot that signal and see what it looks like.
###Code
zs = [sense() for i in range(5000)]
with interactive_plot():
    plt.plot(zs, lw=1)
###Output
_____no_output_____
###Markdown
That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening.

Now let's look at a fat tailed distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it.
###Code
import random
import math

def rand_student_t(df, mu=0, std=1):
    """return random number distributed by Student's t
    distribution with `df` degrees of freedom with the
    specified mean and standard deviation.
    """
    x = random.gauss(0, std)
    y = 2.0*random.gammavariate(0.5*df, 2.0)
    return x / (math.sqrt(y / df)) + mu

def sense_t():
    return 10 + rand_student_t(7)*2

zs = [sense_t() for i in range(5000)]
with interactive_plot():
    plt.plot(zs, lw=1)
###Output
_____no_output_____
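###Markdown
The difference between the two noise models is easier to see numerically than by eye. Here is a small comparison — a sketch that reuses the `sense()` and `sense_t()` functions defined above — counting how many of 5000 samples from each fall more than 6 units from the signal value of 10 (three standard deviations of the Gaussian sensor's noise):

```python
import numpy as np

gaussian_zs  = np.array([sense() for i in range(5000)])
student_t_zs = np.array([sense_t() for i in range(5000)])

# count samples farther than 6 units from the signal value of 10
print('gaussian noise outliers :', np.sum(np.abs(gaussian_zs - 10) > 6))
print('student-t noise outliers:', np.sum(np.abs(student_t_zs - 10) > 6))
```

Run it a few times: the Gaussian sensor typically produces only a dozen or so such outliers, while the Student's $t$ sensor produces many times more, even though the two plots look broadly similar at a glance.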
First, I want to be sure that we are using terms in the same way. Second, I strive to form an intuitive understanding of statistics that will serve you well in later chapters. It's easy to go through a stats course and only remember the formulas and calculations, and perhaps be fuzzy on the implications of what you have learned. Random VariablesEach time you roll a die the *outcome* will be between 1 and 6. If we rolled a fair die a million times we'd expect to get a one 1/6 of the time. Thus we say the *probability*, or *odds* of the outcome 1 is 1/6. Likewise, if I asked you the chance of 1 being the result of the next roll you'd reply 1/6. This combination of values and associated probabilities is called a [*random variable*](https://en.wikipedia.org/wiki/Random_variable). Here *random* does not mean the process is nondeterministic, only that we lack information. The result of a die toss is deterministic, but we lack enough information to compute the result. We don't know what will happen, except probabilistically.While we are defining terms, the range of values is called the [*sample space*](https://en.wikipedia.org/wiki/Sample_space). For a die the sample space is {1, 2, 3, 4, 5, 6}. For a coin the sample space is {H, T}. *Space* is a mathematical term which means a set with structure. The sample space for the die is a subset of the natural numbers in the range of 1 to 6.Another example of a random variable is the heights of students in a university. Here the sample space is a range of values in the real numbers between two limits defined by biology.Random variables such as coin tosses and die rolls are *discrete random variables*. This means their sample space is represented by either a finite number of values or a countably infinite number of values such as the natural numbers. Heights of humans are called *continuous random variables* since they can take on any real value between two limits.Do not confuse the *measurement* of the random variable with the actual value. If we can only measure the height of a person to 0.1 meters we would only record values from 0.1, 0.2, 0.3...2.7, yielding 27 discrete choices. Nonetheless a person's height can vary between any arbitrary real value between those ranges, and so height is a continuous random variable. In statistics capital letters are used for random variables, usually from the latter half of the alphabet. So, we might say that $X$ is the random variable representing the die toss, or $Y$ are the heights of the students in the freshmen poetry class. Later chapters use linear algebra to solve these problems, and so there we will follow the convention of using lower case for vectors, and upper case for matrices. Unfortunately these conventions clash, and you will have to determine which an author is using from context. Probability DistributionThe [*probability distribution*](https://en.wikipedia.org/wiki/Probability_distribution) gives the probability for the random variable to take any value in a sample space. For example, for a fair six sided die we might say:|Value|Probability||-----|-----------||1|1/6||2|1/6||3|1/6||4|1/6||5|1/6||6|1/6|Some sources call this the *probability function*. Using ordinary function notation, we would write:$$P(X{=}4) = f(4) = \frac{1}{6}$$This states that the probability of the die landing on 4 is $\frac{1}{6}$. $P(X{=}x_k)$ is notation for "the probability of $X$ being $x_k$. Some texts use $Pr$ or $Prob$ instead of $P$.Another example is a fair coin. It has the sample space {H, T}. 
The coin is fair, so the probability for heads (H) is 50%, and the probability for tails (T) is 50%. We write this as$$\begin{gathered}P(X{=}H) = 0.5\\P(X{=}T)=0.5\end{gathered}$$Sample spaces are not unique. One sample space for a die is {1, 2, 3, 4, 5, 6}. Another valid sample space would be {even, odd}. Another might be {dots in all corners, not dots in all corners}. A sample space is valid so long as it covers all possibilities, and any single event is described by only one element. {even, 1, 3, 4, 5} is not a valid sample space for a die since a value of 4 is matched both by 'even' and '4'.The probabilities for all values of a *discrete random value* is known as the *discrete probability distribution* and the probabilities for all values of a *continuous random value* is known as the *continuous probability distribution*.To be a probability distribution the probability of each value $x_i$ must be $x_i \ge 0$, since no probability can be less than zero. Secondly, the sum of the probabilities for all values must equal one. This should be intuitively clear for a coin toss: if the odds of getting heads is 70%, then the odds of getting tails must be 30%. We formulize this requirement as$$\sum\limits_u P(X{=}u)= 1$$for discrete distributions, and as $$\int\limits_u P(X{=}u) \,du= 1$$for continuous distributions. The Mean, Median, and Mode of a Random VariableGiven a set of data we often want to know a representative or average value for that set. There are many measures for this, and the concept is called a [*measure of central tendency*](https://en.wikipedia.org/wiki/Central_tendency). For example we might want to know the *average* height of the students in a class. We all know how to find the average, but let me belabor the point so I can introduce more formal notation and terminology. Another word for average is the *mean*. We compute the mean by summing the values and dividing by the number of values. If the heights of the students in meters is $$X = \{1.8, 2.0, 1.7, 1.9, 1.6\}$$we compute the mean as$$\mu = \frac{1.8 + 2.0 + 1.7 + 1.9 + 1.6}{5} = 1.8$$It is traditional to use the symbol $\mu$ (mu) to denote the mean.We can formalize this computation with the equation$$ \mu = \frac{1}{n}\sum^n_{i=1} x_i$$NumPy provides `numpy.mean()` for computing the mean. ###Code import numpy as np x = [1.8, 2.0, 1.7, 1.9, 1.6] print(np.mean(x)) ###Output 1.8 ###Markdown The *mode* of a set of numbers is the number that occurs most often. If only one number occurs most often we say it is a *unimodal* set, and if two or more numbers occur the most with equal frequency than the set is *multimodal*. For example the set {1, 2, 2, 2, 3, 4, 4, 4} has modes 2 and 4, which is multimodal, and the set {5, 7, 7, 13} has the mode 7, and so it is unimodal. We will not be computing the mode in this manner in this book, but we do use the concepts of unimodal and multimodal in a more general sense. For example, in the **Discrete Bayes** chapter we talked about our belief in the dog's position as a *multimodal distribution* because we assigned different probabilities to different positions.Finally, the *median* of a set of numbers is the middle point of the set so that half the values are below the median and half are above the median. Here, above and below is in relation to the set being sorted. If the set contains an even number of values then the two middle numbers are averaged together.Numpy provides `numpy.median()` to compute the median. 
As you can see the median of {1.8, 2.0, 1.7, 1.9, 1.6} is 1.8, because 1.8 is the third element of this set after being sorted. ###Code print(np.median(x)) ###Output 1.8 ###Markdown Expected Value of a Random VariableThe [*expected value*](https://en.wikipedia.org/wiki/Expected_value) of a random variable is the average value it would have if we took an infinite number of samples of it and then averaged those samples together. Let's say we have $x=[1,3,5]$ and each value is equally probable. What value would we *expect* $x$ to have, on average?It would be the average of 1, 3, and 5, of course, which is 3. That should make sense; we would expect equal numbers of 1, 3, and 5 to occur, so $(1+3+5)/3=3$ is clearly the average of that infinite series of samples. In other words, here the expected value is the *mean* of the sample space.Now suppose that each value has a different probability of happening. Say 1 has an 80% chance of occurring, 3 has an 15% chance, and 5 has only a 5% chance. In this case we compute the expected value by multiplying each value of $x$ by the percent chance of it occurring, and summing the result. For this case we could compute$$\mathbb E[X] = (1)(0.8) + (3)(0.15) + (5)(0.05) = 1.5$$Here I have introduced the notation $\mathbb E[X]$ for the expected value of $x$. Some texts use $E(x)$. The value 1.5 for $x$ makes intuitive sense because $x$ is far more likely to be 1 than 3 or 5, and 3 is more likely than 5 as well.We can formalize this by letting $x_i$ be the $i^{th}$ value of $X$, and $p_i$ be the probability of its occurrence. This gives us$$\mathbb E[X] = \sum_{i=1}^n p_ix_i$$A trivial bit of algebra shows that if the probabilities are all equal, the expected value is the same as the mean:$$\mathbb E[X] = \sum_{i=1}^n p_ix_i = \frac{1}{n}\sum_{i=1}^n x_i = \mu_x$$If $x$ is continuous we substitute the sum for an integral, like so$$\mathbb E[X] = \int_{-\infty}^\infty x\, f(x) \,dx$$where $f(x)$ is the probability distribution function of $x$. We won't be using this equation yet, but we will be using it in the next chapter. Variance of a Random VariableThe computation above tells us the average height of the students, but it doesn't tell us everything we might want to know. For example, suppose we have three classes of students, which we label $X$, $Y$, and $Z$, with these heights: ###Code X = [1.8, 2.0, 1.7, 1.9, 1.6] Y = [2.2, 1.5, 2.3, 1.7, 1.3] Z = [1.8, 1.8, 1.8, 1.8, 1.8] ###Output _____no_output_____ ###Markdown Using NumPy we see that the mean height of each class is the same. ###Code print(np.mean(X)) print(np.mean(Y)) print(np.mean(Z)) ###Output 1.8 1.8 1.8 ###Markdown The mean of each class is 1.8 meters, but notice that there is a much greater amount of variation in the heights in the second class than in the first class, and that there is no variation at all in the third class.The mean tells us something about the data, but not the whole story. We want to be able to specify how much *variation* there is between the heights of the students. You can imagine a number of reasons for this. Perhaps a school district needs to order 5,000 desks, and they want to be sure they buy sizes that accommodate the range of heights of the students. Statistics has formalized this concept of measuring variation into the notion of [*standard deviation*](https://en.wikipedia.org/wiki/Standard_deviation) and [*variance*](https://en.wikipedia.org/wiki/Variance). 
The equation for computing the variance is$$\mathit{VAR}(X) = E[(X - \mu)^2]$$Ignoring the squaring for a moment, you can see that the variance is the *expected value* for how much the sample space $X$ varies from the mean $\mu:$ ($X-\mu)$. I will explain the purpose of the squared term later. We have the formula for the expected value $E[X] = \sum\limits_{i=1}^n p_ix_i$ so we can substitute that into the equation above to get$$\mathit{VAR}(X) = \frac{1}{n}\sum_{i=1}^n (x_i - \mu)^2$$ Let's compute the variance of the three classes to see what values we get and to become familiar with this concept.The mean of $X$ is 1.8 ($\mu_x = 1.8$) so we compute$$ \begin{aligned}\mathit{VAR}(X) &=\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5} \\&= \frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5} \\\mathit{VAR}(X)&= 0.02 \, m^2\end{aligned}$$NumPy provides the function `var()` to compute the variance: ###Code print(np.var(X), "meters squared") ###Output 0.02 meters squared ###Markdown This is perhaps a bit hard to interpret. Heights are in meters, yet the variance is meters squared. Thus we have a more commonly used measure, the *standard deviation*, which is defined as the square root of the variance:$$\sigma = \sqrt{\mathit{VAR}(X)}=\sqrt{\frac{1}{n}\sum_{i=1}^n(x_i - \mu)^2}$$It is typical to use $\sigma$ for the *standard deviation* and $\sigma^2$ for the *variance*. In most of this book I will be using $\sigma^2$ instead of $\mathit{VAR}(X)$ for the variance; they symbolize the same thing.For the first class we compute the standard deviation with$$ \begin{aligned}\sigma_x &=\sqrt{\frac{(1.8-1.8)^2 + (2-1.8)^2 + (1.7-1.8)^2 + (1.9-1.8)^2 + (1.6-1.8)^2} {5}} \\&= \sqrt{\frac{0 + 0.04 + 0.01 + 0.01 + 0.04}{5}} \\\sigma_x&= 0.1414\end{aligned}$$We can verify this computation with the NumPy method `numpy.std()` which computes the standard deviation. 'std' is a common abbreviation for standard deviation. ###Code print('std {:.4f}'.format(np.std(X))) print('var {:.4f}'.format(np.std(X)**2)) ###Output std 0.1414 var 0.0200 ###Markdown And, of course, $0.1414^2 = 0.02$, which agrees with our earlier computation of the variance.What does the standard deviation signify? It tells us how much the heights vary amongst themselves. "How much" is not a mathematical term. We will be able to define it much more precisely once we introduce the concept of a Gaussian in the next section. For now I'll say that for many things 68% of all values lie within one standard deviation of the mean. In other words we can conclude that for a random class 68% of the students will have heights between 1.66 (1.8-0.1414) meters and 1.94 (1.8+0.1414) meters. We can view this in a plot: ###Code from kf_book.book_plots import set_figsize, figsize from kf_book.gaussian_internal import plot_height_std import matplotlib.pyplot as plt plot_height_std(X) ###Output _____no_output_____ ###Markdown For only 5 students we obviously will not get exactly 68% within one standard deviation. We do see that 3 out of 5 students are within $\pm1\sigma$, or 60%, which is as close as you can get to 68% with only 5 samples. I haven't yet introduced enough math or Python for you to fully understand the next bit of code, but let's look at the results for a class with 100 students.> We write one standard deviation as $1\sigma$, which is pronounced "one standard deviation", not "one sigma". Two standard deviations is $2\sigma$, and so on. 
###Code from numpy.random import randn data = [1.8 + .1414*randn() for i in range(100)] plot_height_std(data, lw=2) print('mean = {:.3f}'.format(np.mean(data))) print('std = {:.3f}'.format(np.std(data))) ###Output _____no_output_____ ###Markdown We can see by eye that roughly 68% of the heights lie within $\pm1\sigma$ of the mean 1.8.We'll discuss this in greater depth soon. For now let's compute the standard deviation for $$Y = [2.2, 1.5, 2.3, 1.7, 1.3]$$The mean of $Y$ is $\mu=1.8$ m, so $$ \begin{aligned}\sigma_y &=\sqrt{\frac{(2.2-1.8)^2 + (1.5-1.8)^2 + (2.3-1.8)^2 + (1.7-1.8)^2 + (1.3-1.8)^2} {5}} \\&= \sqrt{0.152} = 0.39 \ m\end{aligned}$$We will verify that with NumPy with ###Code print('std of Y is {:.2f} m'.format(np.std(Y))) ###Output std of Y is 0.39 m ###Markdown This corresponds with what we would expect. There is more variation in the heights for $Y$, and the standard deviation is larger.Finally, let's compute the standard deviation for $Z$. There is no variation in the values, so we would expect the standard deviation to be zero. We show this to be true with$$ \begin{aligned}\sigma_z &=\sqrt{\frac{(1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2 + (1.8-1.8)^2} {5}} \\&= \sqrt{\frac{0+0+0+0+0}{5}} \\\sigma_z&= 0.0 \ m\end{aligned}$$ ###Code print(np.std(Z)) ###Output 0.0 ###Markdown Before we continue I need to point out that I'm ignoring that on average men are taller than women. In general the height variance of a class that contains only men or women will be smaller than a class with both sexes. This is true for other factors as well. Well nourished children are taller than malnourished children. Scandinavians are taller than Italians. When designing experiments statisticians need to take these factors into account. I suggested we might be performing this analysis to order desks for a school district. For each age group there are likely to be two different means - one clustered around the mean height of the females, and a second mean clustered around the mean heights of the males. The mean of the entire class will be somewhere between the two. If we bought desks for the mean of all students we are likely to end up with desks that fit neither the males or females in the school! We will not be faced with these kinds of problems in this book. Consult any standard probability text if you need to learn techniques to deal with these issues. Why the Square of the DifferencesWhy are we taking the *square* of the differences for the variance? I could go into a lot of math, but let's look at this in a simple way. Here is a chart of the values of $X$ plotted against the mean for $X=[3,-3,3,-3]$ ###Code X = [3, -3, 3, -3] mean = np.average(X) for i in range(len(X)): plt.plot([i ,i], [mean, X[i]], color='k') plt.axhline(mean) plt.xlim(-1, len(X)) plt.tick_params(axis='x', labelbottom='off') ###Output _____no_output_____ ###Markdown If we didn't take the square of the differences the signs would cancel everything out:$$\frac{(3-0) + (-3-0) + (3-0) + (-3-0)}{4} = 0$$This is clearly incorrect, as there is more than 0 variance in the data. Maybe we can use the absolute value? We can see by inspection that the result is $12/4=3$ which is certainly correct — each value varies by 3 from the mean. But what if we have $Y=[6, -2, -3, 1]$? In this case we get $12/4=3$. $Y$ is clearly more spread out than $X$, but the computation yields the same variance. 
If we use the formula using squares we get a variance of 3.5 for $Y$, which reflects its larger variation.This is not a proof of correctness. Indeed, Carl Friedrich Gauss, the inventor of the technique, recognized that it is somewhat arbitrary. If there are outliers then squaring the difference gives disproportionate weight to that term. For example, let's see what happens if we have: ###Code X = [1, -1, 1, -2, 3, 2, 100] print('Variance of X = {:.2f}'.format(np.var(X))) ###Output Variance of X = 1210.69 ###Markdown Is this "correct"? You tell me. Without the outlier of 100 we get $\sigma^2=2.89$, which accurately reflects how $X$ is varying absent the outlier. The one outlier swamps the computation. Do we want to swamp the computation so we know there is an outlier, or robustly incorporate the outlier and still provide an estimate close to the value absent the outlier? I will not continue down this path; if you are interested you might want to look at the work that James Berger has done on this problem, in a field called *Bayesian robustness*, or the excellent publications on *robust statistics* by Peter J. Huber [3]. In this book we will always use variance and standard deviation as defined by Gauss. GaussiansWe are now ready to learn about [Gaussians](https://en.wikipedia.org/wiki/Gaussian_function). Let's remind ourselves of the motivation for this chapter.> We desire a unimodal, continuous way to represent probabilities that models how the real world works, and that is computationally efficient to calculate.Let's look at a graph of a Gaussian distribution to get a sense of what we are talking about. ###Code from filterpy.stats import plot_gaussian_pdf plt.figure() plot_gaussian_pdf(mean=1.8, variance=0.1414**2, xlabel='Student Height', ylabel='pdf'); ###Output _____no_output_____ ###Markdown This curve is a [*probability density function*](https://en.wikipedia.org/wiki/Probability_density_function) or *pdf* for short. It shows the relative likelihood for the random variable to take on a value. In the chart above, a student is somewhat more likely to have a height near 1.8 m than 1.7 m, and far more likely to have a height of 1.9 m vs 1.4 m.> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the Supporting_Notebooks folder. You can read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb) [1].This may be recognizable to you as a 'bell curve'. This curve is ubiquitous because under real world conditions many observations are distributed in such a manner. I will not use the term 'bell curve' to refer to a Gaussian because many probability distributions have a similar bell curve shape. Non-mathematical sources might not be as precise, so be judicious in what you conclude when you see the term used without definition.This curve is not unique to heights — a vast amount of natural phenomena exhibits this sort of distribution, including the sensors that we use in filtering problems. As we will see, it also has all the attributes that we are looking for — it represents a unimodal belief or value as a probability, it is continuous, and it is computationally efficient. 
We will soon discover that it also has other desirable qualities which we may not realize we desire.To further motivate you, recall the shapes of the probability distributions in the *Discrete Bayes* chapter: ###Code import kf_book.book_plots as book_plots belief = [ 0.,0., 0., 0.1, 0.15, 0.5, 0.2, .15, 0, 0] book_plots.bar_plot(belief) ###Output _____no_output_____ ###Markdown They were not perfect Gaussian curves, but they were similar. We will be using Gaussians to replace the discrete probabilities used in that chapter! NomenclatureA bit of nomenclature before we continue - this chart depicts the *probability density* of a *random variable* having any value between ($-\infty..\infty)$. What does that mean? Imagine we take an infinite number of infinitely precise measurements of the speed of automobiles on a section of highway. We could then plot the results by showing the relative number of cars going past at any given speed. If the average was 120 kph, it might look like this: ###Code plot_gaussian_pdf(mean=120, variance=17**2, xlabel='speed(kph)'); ###Output _____no_output_____ ###Markdown The y-axis depicts the *probability density* — the relative amount of cars that are going the speed at the corresponding x-axis. I will explain this further in the next section.The Gaussian model is imperfect. For example, you may object that human heights or automobile speeds cannot be less than zero, let alone $-\infty$ or $\infty$. “The map is not the territory” is a common expression, and it is true for Bayesian filtering and statistics. The Gaussian distribution above models the distribution of the measured automobile speeds, but being a model it is necessarily imperfect. The difference between model and reality will come up again and again in these filters. Gaussians are used in many branches of mathematics, not because they perfectly model reality, but because they are easier to use than any other relatively accurate choice. However, even in this book Gaussians will fail to model reality, forcing us to use computationally expensive alternatives. You will hear these distributions called *Gaussian distributions* or *normal distributions*. *Gaussian* and *normal* both mean the same thing in this context, and are used interchangeably. I will use both throughout this book as different sources will use either term, and I want you to be used to seeing both. Finally, as in this paragraph, it is typical to shorten the name and talk about a *Gaussian* or *normal* — these are both typical shortcut names for the *Gaussian distribution*. Gaussian DistributionsLet's explore how Gaussians work. A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\mu$) and the variance ($\sigma^2$). It is defined as:$$ f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\big [{-\frac{(x-\mu)^2}{2\sigma^2} }\big ]$$$\exp[x]$ is notation for $e^x$. Don't be dissuaded by the equation if you haven't seen it before; you will not need to memorize or manipulate it. The computation of this function is stored in `stats.py` with the function `gaussian(x, mean, var)`.> **Optional:** Let's remind ourselves how to look at a function stored in a file by using the *%load* magic. 
If you type *%load -s gaussian stats.py* into a code cell and then press CTRL-Enter, the notebook will create a new input cell and load the function into it.```python%load -s gaussian stats.pydef gaussian(x, mean, var): """returns normal distribution for x given a gaussian with the specified mean and variance. """ return (np.exp((-0.5*(np.asarray(x)-mean)**2)/var) / math.sqrt(2*math.pi*var))```We will plot a Gaussian with a mean of 22 $(\mu=22)$, with a variance of 4 $(\sigma^2=4)$, and then discuss what this means. ###Code plot_gaussian_pdf(22, 4, mean_line=True, xlabel='$^{\circ}C$'); ###Output _____no_output_____ ###Markdown What does this curve *mean*? Assume we have a thermometer which reads 22°C. No thermometer is perfectly accurate, and so we expect that each reading will be slightly off the actual value. However, a theorem called the [*Central Limit Theorem*](https://en.wikipedia.org/wiki/Central_limit_theorem) states that if we make many measurements, the measurements will be normally distributed. When we look at this chart we can "sort of" think of it as representing the probability of the thermometer reading a particular value given the actual temperature of 22°C. Recall that a Gaussian distribution is *continuous*. Think of an infinitely long straight line - what is the probability that a point you pick randomly is at 2? Clearly 0%, as there is an infinite number of choices to choose from. The same is true for normal distributions; in the graph above the probability of being *exactly* 2°C is 0% because there are an infinite number of values the reading can take.What is this curve? It is something we call the *probability density function.* The area under the curve at any region gives you the probability of those values. So, for example, if you compute the area under the curve between 20 and 22 the resulting area will be the probability of the temperature reading being between those two temperatures. Here is another way to understand it. What is the *density* of a rock, or a sponge? It is a measure of how much mass is compacted into a given space. Rocks are dense, sponges less so. So, if you wanted to know how much a rock weighed but didn't have a scale, you could take its volume and multiply by its density. This would give you its mass. In practice density varies in most objects, so you would integrate the local density across the rock's volume.$$M = \iiint_R p(x,y,z)\, dV$$We do the same with *probability density*. If you want to know the probability of the temperature being between 20°C and 21°C you would integrate the curve above from 20 to 21. As you know the integral of a curve gives you the area under the curve. Since this is a curve of the probability density, the integral of the density is the probability. What is the probability of the temperature being exactly 22°C? Intuitively, 0. These are real numbers, and the odds of exactly 22°C vs, say, 22.00000000000017°C are infinitesimal. Mathematically, what would we get if we integrate from 22 to 22? Zero. In practice our sensors do not have infinite precision, so a reading of 22°C implies a range, such as 22 $\pm$ 0.1°C, and we can compute the probability of that range by integrating from 21.9 to 22.1.We can think of this in Bayesian terms or frequentist terms. As a Bayesian, if the thermometer reads exactly 22°C, then our belief is described by the curve - our belief that the actual (system) temperature is near 22°C is very high, and our belief that the actual temperature is near 18 is very low.
As a frequentist we would say that if we took 1 billion temperature measurements of a system at exactly 22°C, then a histogram of the measurements would look like this curve. How do you compute the probability, or area under the curve? You integrate the equation for the Gaussian $$ \int^{x_1}_{x_0} \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}{(x-\mu)^2}/\sigma^2 } dx$$I wrote `filterpy.stats.norm_cdf` which computes the integral for you. For example, we can compute ###Code from filterpy.stats import norm_cdf print('Probability of range 21.5 to 22.5 is {:.2f}%'.format( norm_cdf((21.5, 22.5), 22,4)*100)) print('Probability of range 23.5 to 24.5 is {:.2f}%'.format( norm_cdf((23.5, 24.5), 22,4)*100)) ###Output Probability of range 21.5 to 22.5 is 19.74% Probability of range 23.5 to 24.5 is 12.10% ###Markdown The mean ($\mu$) is what it sounds like — the average of all possible values, weighted by how probable each value is. Because of the symmetric shape of the curve it is also the tallest part of the curve. The thermometer reads 22°C, so that is what we used for the mean. The notation for a normal distribution for a random variable $X$ is $X \sim\ \mathcal{N}(\mu,\sigma^2)$ where $\sim$ means *distributed according to*. This means I can express the temperature reading of our thermometer as$$\text{temp} \sim \mathcal{N}(22,4)$$This is an extremely important result. Gaussians allow me to capture an infinite number of possible values with only two numbers! With the values $\mu=22$ and $\sigma^2=4$ I can compute the distribution of measurements over any range.> Some sources use $\mathcal N (\mu, \sigma)$ instead of $\mathcal N (\mu, \sigma^2)$. Either is fine, they are both conventions. You need to keep in mind which form is being used if you see a term such as $\mathcal{N}(22,4)$. In this book I always use $\mathcal N (\mu, \sigma^2)$, so $\sigma=2$, $\sigma^2=4$ for this example. The Variance and BeliefSince this is a probability density distribution it is required that the area under the curve always equals one. This should be intuitively clear — the area under the curve represents all possible outcomes, *something* happened, and the probability of *something happening* is one, so the density must sum to one. We can prove this ourselves with a bit of code. (If you are mathematically inclined, integrate the Gaussian equation from $-\infty$ to $\infty$) ###Code print(norm_cdf((-1e8, 1e8), mu=0, var=4)) ###Output 1.0 ###Markdown This leads to an important insight. If the variance is small the curve will be narrow. This is because the variance is a measure of *how much* the samples vary from the mean. To keep the area equal to 1, the curve must also be tall. On the other hand if the variance is large the curve will be wide, and thus it will also have to be short to make the area equal to 1.Let's look at that graphically: ###Code from filterpy.stats import gaussian xs = np.arange(15, 30, 0.05) plt.plot(xs, gaussian(xs, 23, 0.05), label='$\sigma^2$=0.05') plt.plot(xs, gaussian(xs, 23, 1), label='$\sigma^2$=1', ls=':') plt.plot(xs, gaussian(xs, 23, 5), label='$\sigma^2$=5', ls='--') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown What is this telling us? The Gaussian with $\sigma^2=0.05$ is very narrow. It is saying that we believe $x=23$, and that we are very sure about that. In contrast, the Gaussian with $\sigma^2=5$ also believes that $x=23$, but we are much less sure about that.
Our belief that $x=23$ is lower, and so our belief about the likely possible values for $x$ is spread out — we think it is quite likely that $x=20$ or $x=26$, for example. $\sigma^2=0.05$ has almost completely eliminated $22$ or $24$ as possible values, whereas $\sigma^2=5$ considers them nearly as likely as $23$.If we think back to the thermometer, we can consider these three curves as representing the readings from three different thermometers. The curve for $\sigma^2=0.05$ represents a very accurate thermometer, and the curve for $\sigma^2=5$ represents a fairly inaccurate one. Note the very powerful property the Gaussian distribution affords us — we can entirely represent both the reading and the error of a thermometer with only two numbers — the mean and the variance.An equivalent formulation for a Gaussian is $\mathcal{N}(\mu,1/\tau)$ where $\mu$ is the *mean* and $\tau$ the *precision*. $1/\tau = \sigma^2$; it is the reciprocal of the variance. While we do not use this formulation in this book, it underscores that the variance is a measure of how precise our data is. A small variance yields large precision — our measurement is very precise. Conversely, a large variance yields low precision — our belief is spread out across a large area. You should become comfortable with thinking about Gaussians in these equivalent forms. In Bayesian terms Gaussians reflect our *belief* about a measurement, they express the *precision* of the measurement, and they express how much *variance* there is in the measurements. These are all different ways of stating the same fact.I'm getting ahead of myself, but in the next chapters we will use Gaussians to express our belief in things like the estimated position of the object we are tracking, or the accuracy of the sensors we are using. The 68-95-99.7 RuleIt is worth spending a few words on standard deviation now. The standard deviation is a measure of how much variation from the mean exists. For Gaussian distributions, 68% of all the data falls within one standard deviation ($\pm1\sigma$) of the mean, 95% falls within two standard deviations ($\pm2\sigma$), and 99.7% within three ($\pm3\sigma$). This is often called the [68-95-99.7 rule](https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule). If you were told that the average test score in a class was 71 with a standard deviation of 9.4, you could conclude that 95% of the students received a score between 52.2 and 89.8 if the distribution is normal (that is calculated with $71 \pm (2 * 9.4)$). Finally, these are not arbitrary numbers. If the Gaussian for our position is $\mu=22$ meters, then the standard deviation also has units meters. Thus $\sigma=0.2$ implies that 68% of the measurements range from 21.8 to 22.2 meters. Variance is the standard deviation squared, thus $\sigma^2 = .04$ meters$^2$.The following graph depicts the relationship between the standard deviation and the normal distribution. ###Code from kf_book.gaussian_internal import display_stddev_plot display_stddev_plot() ###Output _____no_output_____ ###Markdown Interactive GaussiansFor those that are reading this in a Jupyter Notebook, here is an interactive version of the Gaussian plots. Use the sliders to modify $\mu$ and $\sigma^2$. Adjusting $\mu$ will move the graph to the left and right because you are adjusting the mean, and adjusting $\sigma^2$ will make the bell curve thicker and thinner.
###Code import math from ipywidgets import interact, interactive, fixed set_figsize(y=3) def plt_g(mu,variance): plt.figure() xs = np.arange(2, 8, 0.1) ys = gaussian(xs, mu, variance) plt.plot(xs, ys) plt.ylim((0, 1)) interact (plt_g, mu=(0., 10), variance = (.2, 1.)); ###Output _____no_output_____ ###Markdown Finally, if you are reading this online, here is an animation of a Gaussian. First, the mean is shifted to the right. Then the mean is centered at $\mu=5$ and the variance is modified. Computational Properties of GaussiansA remarkable property of Gaussians is that the sum of two independent Gaussians is another Gaussian! The product is not Gaussian, but proportional to a Gaussian.The discrete Bayes filter works by multiplying and adding arbitrary probability distributions. The Kalman filter uses Gaussians instead of arbitrary distributions, but the rest of the algorithm remains the same. This means we will need to multiply and add Gaussians. The Gaussian is a nonlinear function. Typically, if you multiply a nonlinear equation with itself you end up with a different type of equation. For example, the shape of `sin(x)sin(x)` is very different from `sin(x)`. But the result of multiplying two Gaussians is yet another Gaussian. This is a key reason why Kalman filters are computationally feasible. Said another way, Kalman filters use Gaussians *because* they are computationally nice. The product of two independent Gaussians is given by:$$\begin{aligned}\mu &=\frac{\sigma_1^2\mu_2 + \sigma_2^2\mu_1}{\sigma_1^2+\sigma_2^2}\\\sigma^2 &=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2} \end{aligned}$$The sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$The remainder of this section is optional. I will derive the equations for the sum and product of two Gaussians. You will not need to understand this material to understand the rest of the book, so long as you accept the results. Product of GaussiansYou can find this result by multiplying the equation for two Gaussians together and combining terms. The algebra gets messy. I will derive it using Bayes theorem. We can state the problem as: let the prior be $N(\bar\mu, \bar\sigma^2)$, and measurement be $z \propto N(z, \sigma_z^2)$. What is the posterior x given the measurement z?Write the posterior as $P(x \mid z)$. 
Now we can use Bayes Theorem to state$$P(x \mid z) = \frac{P(z \mid x)P(x)}{P(z)}$$$P(z)$ is a normalizing constant, so we can create a proportionality$$P(x \mid z) \propto P(z|x)P(x)$$Now we substitute in the equations for the Gaussians, which are$$P(z \mid x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]$$$$P(x) = \frac{1}{\sqrt{2\pi\bar\sigma^2}}\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]$$We can drop the leading terms, as they are constants, giving us$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}\Big]\exp \Big[-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big]\\&\propto \exp \Big[-\frac{(z-x)^2}{2\sigma_z^2}-\frac{(x-\bar\mu)^2}{2\bar\sigma^2}\Big] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z-x)^2+\sigma_z^2(x-\bar\mu)^2]\Big]\end{aligned}$$Now we multiply out the squared terms and group in terms of the posterior $x$.$$\begin{aligned}P(x \mid z) &\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[\bar\sigma^2(z^2 -2xz + x^2) + \sigma_z^2(x^2 - 2x\bar\mu+\bar\mu^2)]\Big ] \\&\propto \exp \Big[-\frac{1}{2\sigma_z^2\bar\sigma^2}[x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z) + (\bar\sigma^2z^2+\sigma_z^2\bar\mu^2)]\Big ]\end{aligned}$$The last parentheses do not contain the posterior $x$, so that term can be treated as a constant and discarded.$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2(\bar\sigma^2+\sigma_z^2)-2x(\sigma_z^2\bar\mu + \bar\sigma^2z)}{\sigma_z^2\bar\sigma^2}\Big ]$$Divide numerator and denominator by $\bar\sigma^2+\sigma_z^2$ to get$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{x^2-2x(\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$Proportionality allows us to create or delete constants at will, so we can factor this into$$P(x \mid z) \propto \exp \Big[-\frac{1}{2}\frac{(x-\frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2})^2}{\frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}}\Big ]$$A Gaussian is$$N(\mu,\, \sigma^2) \propto \exp\Big [-\frac{1}{2}\frac{(x - \mu)^2}{\sigma^2}\Big ]$$So we can see that $P(x \mid z)$ has a mean of$$\mu_\mathtt{posterior} = \frac{\sigma_z^2\bar\mu + \bar\sigma^2z}{\bar\sigma^2+\sigma_z^2}$$and a variance of$$\sigma^2_\mathtt{posterior} = \frac{\sigma_z^2\bar\sigma^2}{\bar\sigma^2+\sigma_z^2}$$I've dropped the constants, and so the result is not a normal, but proportional to one. Bayes theorem normalizes with the $P(z)$ divisor, ensuring that the result is normal. We normalize in the update step of our filters, ensuring the filter estimate is Gaussian.$$\mathcal N_1 = \| \mathcal N_2\cdot \mathcal N_3\|$$ Sum of GaussiansThe sum of two Gaussians is given by$$\begin{gathered}\mu = \mu_1 + \mu_2 \\\sigma^2 = \sigma^2_1 + \sigma^2_2\end{gathered}$$There are several proofs for this. I will use convolution since we used convolution in the previous chapter for the histograms of probabilities. To find the density function of the sum of two Gaussian random variables we convolve their density functions. They are nonlinear, continuous functions, so we need to compute the convolution with an integral. If the random variables $p$ and $z$ (e.g. prior and measurement) are independent we can compute this with$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$This is the equation for a convolution.
Now we just do some math:$p(x) = \int\limits_{-\infty}^\infty f_p(x-z)f_z(z)\, dz$$= \int\limits_{-\infty}^\infty \frac{1}{\sqrt{2\pi}\sigma_p}\exp\left[-\frac{(x - z - \mu_p)^2}{2\sigma^2_p}\right]\frac{1}{\sqrt{2\pi}\sigma_z}\exp\left[-\frac{(z - \mu_z)^2}{2\sigma^2_z}\right] \, dz$$= \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(z - \frac{\sigma_p^2\mu_z + \sigma_z^2(x-\mu_p)}{\sigma_p^2+\sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dz$$= \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right] \int\limits_{-\infty}^\infty\frac{1}{\sqrt{2\pi}\frac{\sigma_p\sigma_z}{\sqrt{\sigma_p^2 + \sigma_z^2}}} \exp\left[ -\frac{\left(z - \frac{\sigma_p^2\mu_z + \sigma_z^2(x-\mu_p)}{\sigma_p^2+\sigma_z^2}\right)^2}{2\left(\frac{\sigma_p\sigma_z}{\sqrt{\sigma_z^2+\sigma_p^2}}\right)^2}\right] \, dz$The expression inside the integral is a normal distribution in $z$. The integral of a normal distribution is one, hence the integral is one. This gives us$$p(x) = \frac{1}{\sqrt{2\pi}\sqrt{\sigma_p^2 + \sigma_z^2}} \exp\left[ -\frac{(x - (\mu_p + \mu_z))^2}{2(\sigma_z^2+\sigma_p^2)}\right]$$This is in the form of a normal, where$$\begin{gathered}\mu_x = \mu_p + \mu_z \\\sigma_x^2 = \sigma_z^2+\sigma_p^2\, \square\end{gathered}$$ Computing Probabilities with scipy.statsIn this chapter I used code from [FilterPy](https://github.com/rlabbe/filterpy) to compute and plot Gaussians. I did that to give you a chance to look at the code and see how these functions are implemented. However, Python comes with "batteries included" as the saying goes, and it comes with a wide range of statistics functions in the module `scipy.stats`. So let's walk through how to use scipy.stats to compute statistics and probabilities.The `scipy.stats` module contains a number of objects which you can use to compute attributes of various probability distributions. The full documentation for this module is here: http://docs.scipy.org/doc/scipy/reference/stats.html. We will focus on the norm variable, which implements the normal distribution. Let's look at some code that uses `scipy.stats.norm` to compute a Gaussian, and compare its value to the value returned by the `gaussian()` function from FilterPy. ###Code from scipy.stats import norm import filterpy.stats print(norm(2, 3).pdf(1.5)) print(filterpy.stats.gaussian(x=1.5, mean=2, var=3*3)) ###Output 0.131146572034 0.131146572034 ###Markdown The call `norm(2, 3)` creates what scipy calls a 'frozen' distribution - it creates and returns an object with a mean of 2 and a standard deviation of 3. You can then use this object multiple times to get the probability density of various values, like so: ###Code n23 = norm(2, 3) print('pdf of 1.5 is %.4f' % n23.pdf(1.5)) print('pdf of 2.5 is also %.4f' % n23.pdf(2.5)) print('pdf of 2 is %.4f' % n23.pdf(2)) ###Output pdf of 1.5 is 0.1311 pdf of 2.5 is also 0.1311 pdf of 2 is 0.1330 ###Markdown The documentation for [scipy.stats.norm](http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html#scipy.stats.norm) [2] lists many other functions. For example, we can generate $n$ samples from the distribution with the `rvs()` function.
###Code np.set_printoptions(precision=3, linewidth=50) print(n23.rvs(size=15)) ###Output [-1.615 5.798 -1.458 -2.189 4.282 0.664 5.282 0.212 -1.687 5.052 -2.256 2.516 0.403 0.966 4.009] ###Markdown We can get the [*cumulative distribution function (CDF)*](https://en.wikipedia.org/wiki/Cumulative_distribution_function), which is the probability that a randomly drawn value from the distribution is less than or equal to $x$. ###Code # probability that a random value is less than the mean 2 print(n23.cdf(2)) ###Output 0.5 ###Markdown We can get various properties of the distribution: ###Code print('variance is', n23.var()) print('standard deviation is', n23.std()) print('mean is', n23.mean()) ###Output variance is 9.0 standard deviation is 3.0 mean is 2.0 ###Markdown Fat TailsEarlier I mentioned the *central limit theorem*, which states that under certain conditions the arithmetic sum of any independent random variable will be normally distributed, regardless of how the random variables are distributed. This is important to us because nature is full of distributions which are not normal, but when we apply the central limit theorem over large populations we end up with normal distributions. However, a key part of the proof is “under certain conditions”. These conditions often do not hold for the physical world. The resulting distributions are called *fat tailed*. Tails is a colloquial term for the far left and right side parts of the curve where the probability density is close to zero.Let's consider a trivial example. We think of things like test scores as being normally distributed. If you have ever had a professor “grade on a curve” you have been subject to this assumption. But of course test scores cannot follow a normal distribution. This is because the distribution assigns a nonzero probability distribution for *any* value, no matter how far from the mean. So, for example, say your mean is 90 and the standard deviation is 13. The normal distribution assumes that there is a large chance of somebody getting a 90, and a small chance of somebody getting a 40. However, it also implies that there is a tiny chance of somebody getting a grade of -10, or 150. It assigns an infinitesimal chance of getting a score of $-10^{300}$ or $10^{32986}$. The tails of a Gaussian distribution are infinitely long.But for a test we know this is not true. Ignoring extra credit, you cannot get less than 0, or more than 100. Let's plot this range of values using a normal distribution. ###Code xs = np.arange(10, 100, 0.05) ys = [gaussian(x, 90, 30) for x in xs] plt.plot(xs, ys, label='var=0.2') plt.xlim(0, 120) plt.ylim(-0.02, 0.09); ###Output _____no_output_____ ###Markdown The area under the curve cannot equal 1, so it is not a probability distribution. What actually happens is that more students than predicted by a normal distribution get scores nearer the upper end of the range (for example), and that tail becomes “fat”. Also, the test is probably not able to perfectly distinguish minute differences in skill in the students, so the distribution to the left of the mean is also probably a bit bunched up in places. The resulting distribution is called a [*fat tail distribution*](https://en.wikipedia.org/wiki/Fat-tailed_distribution). Sensors measure the world. The errors in a sensor's measurements are rarely truly Gaussian. It is far too early to be talking about the difficulties that this presents to the Kalman filter designer. 
It is worth keeping in the back of your mind the fact that the Kalman filter math is based on an idealized model of the world. For now I will present a bit of code that I will be using later in the book to form fat tail distributions to simulate various processes and sensors. This distribution is called the [*Student's $t$-distribution*](https://en.wikipedia.org/wiki/Student%27s_t-distribution). Let's say I want to model a sensor that has some white noise in the output. For simplicity, let's say the signal is a constant 10, and the standard deviation of the noise is 2. We can use the function `numpy.random.randn()` to get a random number with a mean of 0 and a standard deviation of 1. I can simulate this with: ###Code from numpy.random import randn def sense(): return 10 + randn()*2 ###Output _____no_output_____ ###Markdown Let's plot that signal and see what it looks like. ###Code zs = [sense() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____ ###Markdown That looks like I would expect. The signal is centered around 10. A standard deviation of 2 means that 68% of the measurements will be within $\pm$ 2 of 10, and 99% will be within $\pm$ 6 of 10, and that looks like what is happening. Now let's look at a fat tailed distribution generated with the Student's $t$-distribution. I will not go into the math, but just give you the source code for it and then plot a distribution using it. ###Code import random import math def rand_student_t(df, mu=0, std=1): """return random number distributed by Student's t distribution with `df` degrees of freedom with the specified mean and standard deviation. """ x = random.gauss(0, std) y = 2.0*random.gammavariate(0.5*df, 2.0) return x / (math.sqrt(y / df)) + mu def sense_t(): return 10 + rand_student_t(7)*2 zs = [sense_t() for i in range(5000)] plt.plot(zs, lw=1); ###Output _____no_output_____
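###Markdown To put a number on the difference between the two noise sources, the short check below (an addition to this text, reusing the `sense()` and `sense_t()` functions defined above) measures the sample standard deviation of each signal and the fraction of samples that land more than three standard deviations from the sample mean. For Gaussian noise that fraction should be roughly 0.3%; the fat-tailed Student's $t$ noise will typically produce noticeably more of these outliers, even though the two signals look broadly similar when plotted.

```python
import numpy as np

# Draw the same number of samples from the Gaussian and Student's t "sensors"
gaussian_zs = np.array([sense() for _ in range(5000)])
student_zs = np.array([sense_t() for _ in range(5000)])

for name, zs in (('Gaussian', gaussian_zs), ("Student's t", student_zs)):
    std = zs.std()
    # fraction of samples more than 3 sample standard deviations from the mean
    frac_outliers = np.mean(np.abs(zs - zs.mean()) > 3 * std)
    print('{:12s} std={:.2f}  fraction beyond 3 std={:.4f}'.format(name, std, frac_outliers))
```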
docs/tutorials/nb_shap_feature_elimination.ipynb
###Markdown ShapRFECV - Recursive Feature Elimination using SHAP importance[![open in colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ing-bank/probatus/blob/master/docs/tutorials/nb_shap_feature_elimination.ipynb)Recursive Feature Elimination allows you to efficiently reduce the number of features in your dataset, without losing the predictive power of the model. `probatus` implements the following feature elimination routine for **tree-based models**: While any features are left, iterate: 1. (Optional) Tune hyperparameters, in case `GridSearchCV` or `RandomSearchCV` are provided as estimators, 2. Calculate SHAP feature importance using Cross-Validation, 3. Remove the `step` lowest importance features. (A simplified sketch of this loop is shown at the end of this introduction.)The functionality is similar to [RFECV](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html), yet it removes the lowest importance features based on SHAP feature importance. It also supports the use of [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) and [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) passed as a `clf`, thanks to which you can perform hyperparameter optimization at each step of the search, tuning the model for each feature set. Lastly, it supports categorical features (`object` and `category` dtype) and missing values in the data, as long as the model supports them. The main advantages of using this routine are:- It uses a tree-based model to detect the complex relations between features and the target.- It uses SHAP importance, which is one of the most reliable ways to estimate feature importance. Unlike many other techniques, it works with missing values and categorical variables.- Supports the use of [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) and [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) in order to optimize hyperparameters at each iteration. This way you can assess if the removal of a given feature reduces the predictive power, or simply requires additional tuning of the model.- You can also provide a list of features that should not be eliminated, e.g. in case of prior knowledge.The disadvantages are:- Removing the lowest [SHAP](https://shap.readthedocs.io/en/latest/) importance feature does not always translate to removing the feature with the lowest impact on the model's performance. SHAP importance illustrates how strongly a given feature affects the output of the model, while disregarding correctness of this prediction.- Currently, the functionality only supports tree-based binary classifiers; in the future the scope might be extended.
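To make the routine above concrete, here is a simplified, illustrative sketch of the elimination loop. It is **not** the probatus implementation — `ShapRFECV` additionally handles hyperparameter search, fractional `step` values, reporting, and more — it only shows the idea of "cross-validate, aggregate SHAP importance, drop the `step` least important features". It assumes the `shap` package, LightGBM, scikit-learn, and a pandas DataFrame `X` with numeric columns plus a NumPy array `y`; the function name `naive_shap_rfe` is made up for this example.

```python
import numpy as np
import shap
import lightgbm
from sklearn.model_selection import KFold

def naive_shap_rfe(X, y, step=1, min_features=5):
    """Toy SHAP-based recursive feature elimination, for illustration only."""
    features = list(X.columns)
    while len(features) > min_features:
        importance = np.zeros(len(features))
        # Step 2: calculate SHAP feature importance using cross-validation
        for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
            model = lightgbm.LGBMClassifier(n_estimators=50)
            model.fit(X.iloc[train_idx][features], y[train_idx])
            sv = shap.TreeExplainer(model).shap_values(X.iloc[val_idx][features])
            # Binary classifiers may return a list of per-class arrays or a 3D array
            sv = np.asarray(sv[-1] if isinstance(sv, list) else sv)
            if sv.ndim == 3:
                sv = sv[..., -1]
            importance += np.abs(sv).mean(axis=0)
        # Step 3: remove the `step` features with the lowest aggregated importance
        n_drop = min(step, len(features) - min_features)
        drop = set(np.argsort(importance)[:n_drop])
        features = [f for i, f in enumerate(features) if i not in drop]
    return features
```

`ShapRFECV` follows the same skeleton, and step 1 (the optional hyperparameter tuning) happens when you pass a search object such as `RandomizedSearchCV` as `clf`, as shown later in this tutorial.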
Setup the datasetIn order to use the functionality, let's set up an example dataset with:- 17 numerical features- 1 categorical feature- 1 static feature- 1 feature with missing values ###Code from probatus.feature_elimination import ShapRFECV from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split import numpy as np import pandas as pd import lightgbm from sklearn.model_selection import RandomizedSearchCV feature_names = ['f1_categorical', 'f2_missing', 'f3_static', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15', 'f16', 'f17', 'f18', 'f19', 'f20'] # Prepare two samples X, y = make_classification(n_samples=1000, class_sep=0.05, n_informative=6, n_features=20, random_state=0, n_redundant=10, n_clusters_per_class=1) X = pd.DataFrame(X, columns=feature_names) X['f1_categorical'] = X['f1_categorical'].apply(lambda x: str(np.round(x*10))) X['f2_missing'] = X['f2_missing'].apply(lambda x: x if np.random.rand()<0.8 else np.nan) X['f3_static'] = 0 #First 5 rows of first 5 columns X[feature_names[:5]].head() ###Output _____no_output_____ ###Markdown Set up the model and model tuningYou need to set up the model that you would like to use in the feature elimination. `probatus` requires a **tree-based binary classifier** in order to speed up the computation of SHAP feature importance at each step. We recommend using [LGBMClassifier](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMClassifier.html), which by default handles missing values and categorical features.The example below applies randomized search in order to optimize the hyperparameters of the model at each iteration of the search. ###Code clf = lightgbm.LGBMClassifier(max_depth=5, class_weight='balanced') param_grid = { 'n_estimators': [5, 7, 10], 'num_leaves': [3, 5, 7, 10], } search = RandomizedSearchCV(clf, param_grid) ###Output _____no_output_____ ###Markdown Apply ShapRFECVNow let's apply the [ShapRFECV](https://ing-bank.github.io/probatus/api/feature_elimination.html). ###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3) report = shap_elimination.fit_compute(X, y) ###Output _____no_output_____ ###Markdown At the end of the process, you can investigate the results for each iteration. ###Code # Show the number of features, the feature set, and the mean validation metric for each iteration report[['num_features', 'features_set', 'val_metric_mean']] ###Output _____no_output_____ ###Markdown Once the process is completed, you can visualize the results. Let's investigate the performance plot. In this case, the Validation AUC score has a peak at 11 features. ###Code performance_plot = shap_elimination.plot() ###Output _____no_output_____ ###Markdown Let's see the final feature set: ###Code shap_elimination.get_reduced_features_set(num_features=11) ###Output _____no_output_____ ###Markdown You can also provide a list of features that should not be eliminated. Say, based on your prior knowledge, you know that the features `f10`, `f15` and `f19` are important and should not be eliminated. This can be done by providing a list of columns to the `columns_to_keep` parameter in the fit function.
###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3 ,min_features_to_select=4) report = shap_elimination.fit_compute(X, y,columns_to_keep=['f10','f15','f19']) performance_plot = shap_elimination.plot() ###Output _____no_output_____ ###Markdown Let's see the final feature set: ###Code shap_elimination.get_reduced_features_set(num_features=4) ###Output _____no_output_____ ###Markdown ShapRFECV vs RFECV In this section we will compare the performance of the model trained on the features selected using the probatus [ShapRFECV](https://ing-bank.github.io/probatus/api/feature_elimination.html) and the [sklearn RFECV](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html).In order to compare them let's first prepare a dataset, and a model that will be applied: ###Code from probatus.feature_elimination import ShapRFECV import numpy as np import pandas as pd import lightgbm from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split, cross_val_score from sklearn.feature_selection import RFECV import matplotlib.pyplot as plt # Prepare train and test data: X, y = make_classification(n_samples=10000, class_sep=0.1, n_informative=40, n_features=50, random_state=0, n_clusters_per_class=10) X = pd.DataFrame(X) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42) # Set up the model: clf = lightgbm.LGBMClassifier(n_estimators=10, num_leaves=7) ###Output _____no_output_____ ###Markdown Now, we can run ShapRFECV and RFECV with the same parameters, to extract the optimal feature sets: ###Code # Run RFECV and ShapRFECV with the same parameters rfe = RFECV(clf, step=1, cv=10, scoring='roc_auc', n_jobs=3).fit(X_train, y_train) shap_elimination = ShapRFECV(clf=clf, step=1, cv=10, scoring='roc_auc', n_jobs=3) shap_report = shap_elimination.fit_compute(X_train, y_train) # Compare the CV Validation AUC for different number of features in each method. ax = pd.DataFrame({'RFECV Validation AUC': list(reversed(rfe.grid_scores_)), 'ShapRFECV Validation AUC': shap_report['val_metric_mean'].values.tolist()}, index=shap_report['num_features'].values.tolist()).plot(ylim=(0.5, 0.6), rot=10, title='Comparison of RFECV and ShapRFECV', figsize=(10,5)) ax.set_ylabel("Model Performance") ax.set_xlabel("Number of features") ax.invert_xaxis() plt.show() ###Output _____no_output_____ ###Markdown The plot above presents the averaged CV Validation AUC of model performance for each round of the RFE process in both ShapRFECV and RFECV. The optimal number of features is 21 for the former, and 13 for the latter.Now we will compare the performance of the model trained on:- All 50 available features (baseline),- 13 features selected by RFECV (final),- 21 features selected by ShapRFECV (final),- 13 feature selected by ShapRFECV (baseline). 
###Code n_features_shap = 21 n_features_rfecv = rfe.n_features_ # Calculate the AUC for the models with different feature sets test_auc_full = clf.fit(X_train, y_train).score(X_test, y_test) val_auc_full = np.mean(cross_val_score(clf, X_train, y_train, cv=10)) rfe_features_set = X_train.columns[rfe.support_] test_auc_rfe = clf.fit(X_train[rfe_features_set], y_train).score(X_test[rfe_features_set], y_test) val_auc_rfe = rfe.grid_scores_[n_features_rfecv] shap_feature_set = X_train.columns[shap_elimination.get_reduced_features_set(n_features_shap)] test_auc_shap = clf.fit(X_train[shap_feature_set], y_train).score(X_test[shap_feature_set], y_test) val_auc_shap = shap_report[shap_report.num_features == n_features_shap]['val_metric_mean'].values[0] shap_feature_set_size_rfe = X_train.columns[shap_elimination.get_reduced_features_set(n_features_rfecv)] test_auc_shap_size_rfe = clf.fit(X_train[shap_feature_set_size_rfe], y_train).score(X_test[shap_feature_set_size_rfe], y_test) val_auc_shap_size_rfe = shap_report[shap_report.num_features == n_features_rfecv]['val_metric_mean'].values[0] # Plot Test and Validation Performance variants = ('All 50 features', f'RFECV {n_features_rfecv} features', f'ShapRFECV {n_features_shap} features', f'ShapRFECV {n_features_rfecv} features') results_test = [test_auc_full, test_auc_rfe, test_auc_shap, test_auc_shap_size_rfe] results_val = [val_auc_full, val_auc_rfe, val_auc_shap, val_auc_shap_size_rfe] ax = pd.DataFrame({'CV Validation AUC': results_val, 'Test AUC': results_test}, index=variants).plot.bar(ylim=(0.5, 0.60), rot=10, title='Comparison of RFECV and ShapRFECV', figsize=(10,5)) ax.set_ylabel("Model Performance") plt.show() ###Output _____no_output_____ ###Markdown Recursive Feature Elimination using SHAP importance and CVBackwards Recursive Feature Elimination allows to efficiently reduce the number of features in your dataset, without losing the predictive power of the model. `probatus` implements the following feature elimination routine for **tree-based models**: While any features left, iterate: 1. (Optional) Tune hyperparameters, in case `GridSearchCV` or `RandomSearchCV` are provided as estimators, 2. Calculate SHAP feature importance using Cross-Validation, 3. Remove `step` lowest importance features.The functionality is similar to [RFECV](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html), yet, it removes the lowest importance features based on SHAP features importance. It also supports the use of [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) and [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) passed as a `clf`, thanks to which you can perform hyperparameter optimization at each step of the search.hyperparameters of the model at each round, to tune the model for each features set. Lastly, it supports categorical features (`object` and `category` dtype) and missing values in the data, as long as the model supports them. The main advantages of using this routine are:- The approach uses a tree-based model to detect the complex relations between features and the target.- Uses SHAP importance, which is one of the most reliable ways to estimate features importance. 
Unlike many other techniques, it works with missing values and categorical variables.- Allows to us [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) and [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) in order to optimize hyperparameters at each iteration. This way you can assess if the removal of a given feature reduces the predictive power, or simply requires additional tuning of the model.The disadvantages are:- One needs to manually select how many features to keep at the end of the routine, based on how the performance of the model changes between rounds.- Removing lowest [SHAP](https://shap.readthedocs.io/en/latest/) importance feature does not always translate to choosing the feature with lowest impact on model's performance. Shap importance illustrates how strongly a given feature affects the output of the model, while disregarding correctness of this prediction.- Currently, the functionality only supports tree-based binary classifiers, in the future the scope might be extended. Setup the datasetIn order to use the functionality, let's set up an example dataset with:- numerical features- 1 categorical feature- 1 static feature- 1 feature with missing values ###Code from probatus.feature_elimination import ShapRFECV from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split import numpy as np import pandas as pd import lightgbm from sklearn.model_selection import RandomizedSearchCV feature_names = ['f1_categorical', 'f2_missing', 'f3_static', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15', 'f16', 'f17', 'f18', 'f19', 'f20'] # Prepare two samples X, y = make_classification(n_samples=1000, class_sep=0.05, n_informative=6, n_features=20, random_state=0, n_redundant=10, n_clusters_per_class=1) X = pd.DataFrame(X, columns=feature_names) X['f1_categorical'] = X['f1_categorical'].apply(lambda x: str(np.round(x*10))) X['f2_missing'] = X['f2_missing'].apply(lambda x: x if np.random.rand()<0.8 else np.nan) X['f3_static'] = 0 X.head() X.dtypes.head() ###Output _____no_output_____ ###Markdown Set up the model and model tuningYou need to set up the model that you would like to use in the feature elimination. `probatus` requires a **tree-based binary classifier** in order to speed up the computation of SHAP feature importance at each step. We recommend using [LGBMClassifier](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMClassifier.html), which by default handles missing values and categorical features.The example below applies randomized search in order to optimize the hyperparameters of the model at each iteration of the search. ###Code clf = lightgbm.LGBMClassifier(max_depth=5, class_weight='balanced') param_grid = { 'n_estimators': [5, 7, 10], 'num_leaves': [3, 5, 7, 10], } search = RandomizedSearchCV(clf, param_grid) ###Output _____no_output_____ ###Markdown Apply ShapRFECVNow let's apply the method. ###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3) report = shap_elimination.fit_compute(X, y) ###Output Removing static features ['f3_static']. The following variables contain missing values ['f2_missing']. Make sure to imputemissing or apply a model that handles them automatically. Changing dtype of ['f1_categorical'] from "object" to "category". Treating it as categorical variable. 
Make sure that the model handles categorical variables, or encode them first. ###Markdown At the end of the process, you can investigate the results for each iteration. ###Code report.head() ###Output _____no_output_____ ###Markdown Once the process is completed, you can visualize the results. Let's investigate the performance plot. In this case, the Validation AUC score has a peak at 9 features. ###Code performance_plot = shap_elimination.plot() ###Output _____no_output_____ ###Markdown Let's see the final feature set: ###Code shap_elimination.get_reduced_features_set(num_features=9) ###Output _____no_output_____ ###Markdown ShapRFECV - Recursive Feature Elimination using SHAP importance[![open in colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ing-bank/probatus/blob/master/docs/tutorials/nb_shap_feature_elimination.ipynb)Recursive Feature Elimination allows to efficiently reduce the number of features in your dataset, without losing the predictive power of the model. `probatus` implements the following feature elimination routine for **tree-based & linear models**: While any features left, iterate: 1. (Optional) Tune hyperparameters, in case sklearn compatible search CV e.g. `GridSearchCV` or `RandomizedSearchCV` or `BayesSearchCV`are passed as clf, 2. Calculate SHAP feature importance using Cross-Validation, 3. Remove `step` lowest importance features.The functionality is similar to [RFECV](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html), yet, it removes the lowest importance features, based on SHAP features importance. It also supports the use of any hyperparameter search schema that is consistent with sklearn API e.g. [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html), [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) and [BayesSearchCV](https://scikit-optimize.github.io/stable/modules/generated/skopt.BayesSearchCV.htmlskopt.BayesSearchCV) passed as a `clf`, thanks to which you can perform hyperparameter optimization at each step of the search.hyperparameters of the model at each round, to tune the model for each features set. Lastly, it supports categorical features (`object` and `category` dtype) and missing values in the data, as long as the model supports them. The main advantages of using this routine are:- It uses a tree-based or a linear model to detect the complex relations between features and the target.- It uses SHAP importance, which is one of the most reliable ways to estimate features importance. Unlike many other techniques, it works with missing values and categorical variables.- Supports the use of sklearn compatible hyperparameter search schemas e.g. [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html), [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) and [BayesSearchCV](https://scikit-optimize.github.io/stable/modules/generated/skopt.BayesSearchCV.htmlskopt.BayesSearchCV), in order to optimize hyperparameters at each iteration. This way you can assess if the removal of a given feature reduces the predictive power, or simply requires additional tuning of the model.- You can also provide a list of features, that should not be eliminated. 
E.g incase of prior knowledge.The disadvantages are:- Removing lowest [SHAP](https://shap.readthedocs.io/en/latest/) importance feature does not always translate to choosing the feature with lowest impact on model's performance. Shap importance illustrates how strongly a given feature affects the output of the model, while disregarding correctness of this prediction.- Currently, the functionality only supports tree-based & linear binary classifiers, in the future the scope might be extended. Setup the datasetIn order to use the functionality, let's set up an example dataset with:- 17 numerical features- 1 categorical feature- 1 static feature- 1 static feature- 1 feature with missing values ###Code %%capture !pip install probatus !pip install lightgbm from probatus.feature_elimination import ShapRFECV from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split import numpy as np import pandas as pd import lightgbm from sklearn.model_selection import RandomizedSearchCV feature_names = ['f1_categorical', 'f2_missing', 'f3_static', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15', 'f16', 'f17', 'f18', 'f19', 'f20'] # Prepare two samples X, y = make_classification(n_samples=1000, class_sep=0.05, n_informative=6, n_features=20, random_state=0, n_redundant=10, n_clusters_per_class=1) X = pd.DataFrame(X, columns=feature_names) X['f1_categorical'] = X['f1_categorical'].apply(lambda x: str(np.round(x*10))) X['f2_missing'] = X['f2_missing'].apply(lambda x: x if np.random.rand()<0.8 else np.nan) X['f3_static'] = 0 #First 5 rows of first 5 columns X[feature_names[:5]].head() ###Output _____no_output_____ ###Markdown Set up the model and model tuningYou need to set up the model that you would like to use in the feature elimination. `probatus` requires a **tree-based or linear binary classifier** in order to speed up the computation of SHAP feature importance at each step. We recommend using [LGBMClassifier](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMClassifier.html), which by default handles missing values and categorical features.The example below applies randomized search in order to optimize the hyperparameters of the model at each iteration of the search. ###Code clf = lightgbm.LGBMClassifier(max_depth=5, class_weight='balanced') param_grid = { 'n_estimators': [5, 7, 10], 'num_leaves': [3, 5, 7, 10], } search = RandomizedSearchCV(clf, param_grid) ###Output _____no_output_____ ###Markdown Apply ShapRFECVNow let's apply the [ShapRFECV](https://ing-bank.github.io/probatus/api/feature_elimination.html). ###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3) report = shap_elimination.fit_compute(X, y) ###Output _____no_output_____ ###Markdown At the end of the process, you can investigate the results for each iteration. ###Code #First 5 rows of first 5 columns report[['num_features', 'features_set', 'val_metric_mean']] ###Output _____no_output_____ ###Markdown Once the process is completed, you can visualize the results. Let's investigate the performance plot. In this case, the Validation AUC score has a peak at 11 features. ###Code performance_plot = shap_elimination.plot() ###Output _____no_output_____ ###Markdown Let's see the final feature set: ###Code shap_elimination.get_reduced_features_set(num_features=11) ###Output _____no_output_____ ###Markdown You can also provide a list of features that sholud not be eliminated. 
Say based on your prior knowledge you know that features `f10,f19,f15` are important and sholud not be eliminated.This can be done by providing a list of columns to `columns_to_keep` parameter in the fit function. ###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3 ,min_features_to_select=4) report = shap_elimination.fit_compute(X, y,columns_to_keep=['f10','f15','f19']) performance_plot = shap_elimination.plot() ###Output _____no_output_____ ###Markdown Let's see the final feature set: ###Code shap_elimination.get_reduced_features_set(num_features=4) ###Output _____no_output_____ ###Markdown ShapRFECV vs RFECV In this section we will compare the performance of the model trained on the features selected using the probatus [ShapRFECV](https://ing-bank.github.io/probatus/api/feature_elimination.html) and the [sklearn RFECV](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html).In order to compare them let's first prepare a dataset, and a model that will be applied: ###Code from probatus.feature_elimination import ShapRFECV import numpy as np import pandas as pd import lightgbm from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split, cross_val_score from sklearn.feature_selection import RFECV import matplotlib.pyplot as plt # Prepare train and test data: X, y = make_classification(n_samples=10000, class_sep=0.1, n_informative=40, n_features=50, random_state=0, n_clusters_per_class=10) X = pd.DataFrame(X) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42) # Set up the model: clf = lightgbm.LGBMClassifier(n_estimators=10, num_leaves=7) ###Output _____no_output_____ ###Markdown Now, we can run ShapRFECV and RFECV with the same parameters, to extract the optimal feature sets: ###Code # Run RFECV and ShapRFECV with the same parameters rfe = RFECV(clf, step=1, cv=20, scoring='roc_auc', n_jobs=3).fit(X_train, y_train) shap_elimination = ShapRFECV(clf=clf, step=1, cv=20, scoring='roc_auc', n_jobs=3) shap_report = shap_elimination.fit_compute(X_train, y_train) # Compare the CV Validation AUC for different number of features in each method. ax = pd.DataFrame({'RFECV Validation AUC': list(reversed(rfe.grid_scores_)), 'ShapRFECV Validation AUC': shap_report['val_metric_mean'].values.tolist()}, index=shap_report['num_features'].values.tolist()).plot(ylim=(0.5,0.7), rot=10, title='Comparison of RFECV and ShapRFECV', figsize=(10,5)) ax.set_ylabel("Model Performance") ax.set_xlabel("Number of features") ax.invert_xaxis() plt.show() ###Output _____no_output_____ ###Markdown The plot above presents the averaged CV Validation AUC of model performance for each round of the RFE process in both ShapRFECV and RFECV. The optimal number of features is 21 for the former, and 13 for the latter.Now we will compare the performance of the model trained on:- All 50 available features (baseline),- 13 features selected by RFECV (final),- 21 features selected by ShapRFECV (final),- 13 feature selected by ShapRFECV (baseline). 
###Code n_features_shap = 21 n_features_rfecv = rfe.n_features_ # Calculate the AUC for the models with different feature sets test_auc_full = clf.fit(X_train, y_train).score(X_test, y_test) val_auc_full = np.mean(cross_val_score(clf, X_train, y_train, cv=10)) rfe_features_set = X_train.columns[rfe.support_] test_auc_rfe = clf.fit(X_train[rfe_features_set], y_train).score(X_test[rfe_features_set], y_test) val_auc_rfe = rfe.grid_scores_[n_features_rfecv] shap_feature_set = X_train.columns[shap_elimination.get_reduced_features_set(n_features_shap)] test_auc_shap = clf.fit(X_train[shap_feature_set], y_train).score(X_test[shap_feature_set], y_test) val_auc_shap = shap_report[shap_report.num_features == n_features_shap]['val_metric_mean'].values[0] shap_feature_set_size_rfe = X_train.columns[shap_elimination.get_reduced_features_set(n_features_rfecv)] test_auc_shap_size_rfe = clf.fit(X_train[shap_feature_set_size_rfe], y_train).score(X_test[shap_feature_set_size_rfe], y_test) val_auc_shap_size_rfe = shap_report[shap_report.num_features == n_features_rfecv]['val_metric_mean'].values[0] # Plot Test and Validation Performance variants = ('All 50 features', f'RFECV {n_features_rfecv} features', f'ShapRFECV {n_features_shap} features', f'ShapRFECV {n_features_rfecv} features') results_test = [test_auc_full, test_auc_rfe, test_auc_shap, test_auc_shap_size_rfe] results_val = [val_auc_full, val_auc_rfe, val_auc_shap, val_auc_shap_size_rfe] ax = pd.DataFrame({'CV Validation AUC': results_val, 'Test AUC': results_test}, index=variants).plot.bar(ylim=(0.5,0.6), rot=10, title='Comparison of RFECV and ShapRFECV', figsize=(10,5)) plt.axhline(y=0.5) ax.set_ylabel("Model Performance") plt.show() ###Output _____no_output_____ ###Markdown ShapRFECV - Recursive Feature Elimination using SHAP importance[![open in colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ing-bank/probatus/blob/master/docs/tutorials/nb_shap_feature_elimination.ipynb)Recursive Feature Elimination allows you to efficiently reduce the number of features in your dataset, without losing the predictive power of the model. `probatus` implements the following feature elimination routine for **tree-based & linear models**: While any features left, iterate: 1. (Optional) Tune hyperparameters, in case sklearn compatible search CV e.g. `GridSearchCV` or `RandomizedSearchCV` or `BayesSearchCV`are passed as clf, 2. Calculate SHAP feature importance using Cross-Validation, 3. Remove `step` lowest importance features.The functionality is similar to [RFECV](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html), yet it removes the lowest importance features, based on SHAP features importance. It also supports the use of any hyperparameter search schema that is consistent with sklearn API e.g. [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html), [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) and [BayesSearchCV](https://scikit-optimize.github.io/stable/modules/generated/skopt.BayesSearchCV.htmlskopt.BayesSearchCV) passed as a `clf`, thanks to which you can perform hyperparameter optimization at each step of the search.hyperparameters of the model at each round, to tune the model for each features set. Lastly, it supports categorical features (`object` and `category` dtype) and missing values in the data, as long as the model supports them. 
The main advantages of using this routine are:- It uses a tree-based or a linear model to detect the complex relations between features and the target.- It uses SHAP importance, which is one of the most reliable ways to estimate features importance. Unlike many other techniques, it works with missing values and categorical variables.- Supports the use of sklearn compatible hyperparameter search schemas e.g. [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html), [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) and [BayesSearchCV](https://scikit-optimize.github.io/stable/modules/generated/skopt.BayesSearchCV.htmlskopt.BayesSearchCV), in order to optimize hyperparameters at each iteration. This way you can assess if the removal of a given feature reduces the predictive power, or simply requires additional tuning of the model.- You can also provide a list of features that should not be eliminated e.g. incase of prior knowledge.The disadvantages are:- Removing lowest [SHAP](https://shap.readthedocs.io/en/latest/) importance feature does not always translate to choosing the feature with the lowest impact on a model's performance. Shap importance illustrates how strongly a given feature affects the output of the model, while disregarding correctness of this prediction.- Currently, the functionality only supports tree-based & linear binary classifiers, in the future the scope might be extended. Setup the datasetIn order to use the functionality, let's set up an example dataset with:- 17 numerical features- 1 categorical feature- 1 static feature- 1 static feature- 1 feature with missing values ###Code %%capture !pip install probatus !pip install lightgbm from probatus.feature_elimination import ShapRFECV from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split import numpy as np import pandas as pd import lightgbm from sklearn.model_selection import RandomizedSearchCV feature_names = ['f1_categorical', 'f2_missing', 'f3_static', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15', 'f16', 'f17', 'f18', 'f19', 'f20'] # Prepare two samples X, y = make_classification(n_samples=1000, class_sep=0.05, n_informative=6, n_features=20, random_state=0, n_redundant=10, n_clusters_per_class=1) X = pd.DataFrame(X, columns=feature_names) X['f1_categorical'] = X['f1_categorical'].apply(lambda x: str(np.round(x*10))) X['f2_missing'] = X['f2_missing'].apply(lambda x: x if np.random.rand()<0.8 else np.nan) X['f3_static'] = 0 #First 5 rows of first 5 columns X[feature_names[:5]].head() ###Output _____no_output_____ ###Markdown Set up the model and model tuningYou need to set up the model that you would like to use in the feature elimination. `probatus` requires a **tree-based or linear binary classifier** in order to speed up the computation of SHAP feature importance at each step. We recommend using [LGBMClassifier](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMClassifier.html), which by default handles missing values and categorical features.The example below applies randomized search in order to optimize the hyperparameters of the model at each iteration of the search. 
###Code clf = lightgbm.LGBMClassifier(max_depth=5, class_weight='balanced') param_grid = { 'n_estimators': [5, 7, 10], 'num_leaves': [3, 5, 7, 10], } search = RandomizedSearchCV(clf, param_grid) ###Output _____no_output_____ ###Markdown Apply ShapRFECVNow let's apply the [ShapRFECV](https://ing-bank.github.io/probatus/api/feature_elimination.html). ###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3) report = shap_elimination.fit_compute(X, y) ###Output _____no_output_____ ###Markdown At the end of the process, you can investigate the results for each iteration. ###Code #First 5 rows of first 5 columns report[['num_features', 'features_set', 'val_metric_mean']] ###Output _____no_output_____ ###Markdown Once the process is completed, you can visualize the results. Let's investigate the performance plot. In this case, the Validation AUC score has a peak at 11 features. ###Code performance_plot = shap_elimination.plot() ###Output _____no_output_____ ###Markdown Let's see the final feature set: ###Code shap_elimination.get_reduced_features_set(num_features=11) ###Output _____no_output_____ ###Markdown You can also provide a list of features that should not be eliminated. Say based on your prior knowledge you know that the features `f10,f19,f15` are important and should not be eliminated. This can be done by providing a list of columns to `columns_to_keep` parameter in the `fit()` function. ###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3 ,min_features_to_select=4) report = shap_elimination.fit_compute(X, y,columns_to_keep=['f10','f15','f19']) performance_plot = shap_elimination.plot() ###Output _____no_output_____ ###Markdown Let's see the final feature set: ###Code shap_elimination.get_reduced_features_set(num_features=4) ###Output _____no_output_____ ###Markdown ShapRFECV vs RFECV In this section we will compare the performance of the model trained on the features selected using the probatus [ShapRFECV](https://ing-bank.github.io/probatus/api/feature_elimination.html) and the [sklearn RFECV](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html).In order to compare them let's first prepare a dataset, and a model that will be applied: ###Code from probatus.feature_elimination import ShapRFECV import numpy as np import pandas as pd import lightgbm from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split, cross_val_score from sklearn.feature_selection import RFECV import matplotlib.pyplot as plt # Prepare train and test data: X, y = make_classification(n_samples=10000, class_sep=0.1, n_informative=40, n_features=50, random_state=0, n_clusters_per_class=10) X = pd.DataFrame(X) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42) # Set up the model: clf = lightgbm.LGBMClassifier(n_estimators=10, num_leaves=7) ###Output _____no_output_____ ###Markdown Now, we can run ShapRFECV and RFECV with the same parameters, to extract the optimal feature sets: ###Code # Run RFECV and ShapRFECV with the same parameters rfe = RFECV(clf, step=1, cv=20, scoring='roc_auc', n_jobs=3).fit(X_train, y_train) shap_elimination = ShapRFECV(clf=clf, step=1, cv=20, scoring='roc_auc', n_jobs=3) shap_report = shap_elimination.fit_compute(X_train, y_train) # Compare the CV Validation AUC for different number of features in each method. 
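# Note: rfe.grid_scores_ is ordered from the smallest feature subset upwards,
# so it is reversed below to align with shap_report, whose num_features counts downwards.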
ax = pd.DataFrame({'RFECV Validation AUC': list(reversed(rfe.grid_scores_)), 'ShapRFECV Validation AUC': shap_report['val_metric_mean'].values.tolist()}, index=shap_report['num_features'].values.tolist()).plot(ylim=(0.5,0.7), rot=10, title='Comparison of RFECV and ShapRFECV', figsize=(10,5)) ax.set_ylabel("Model Performance") ax.set_xlabel("Number of features") ax.invert_xaxis() plt.show()
###Output _____no_output_____
###Markdown The plot above presents the averaged CV Validation AUC of model performance for each round of the RFE process in both ShapRFECV and RFECV. The optimal number of features is 21 for the former, and 13 for the latter.Now we will compare the performance of the model trained on:- All 50 available features (baseline),- 13 features selected by RFECV (final),- 21 features selected by ShapRFECV (final),- 13 features selected by ShapRFECV (baseline).
###Code n_features_shap = 21 n_features_rfecv = rfe.n_features_ # Calculate the AUC for the models with different feature sets test_auc_full = clf.fit(X_train, y_train).score(X_test, y_test) val_auc_full = np.mean(cross_val_score(clf, X_train, y_train, cv=10)) rfe_features_set = X_train.columns[rfe.support_] test_auc_rfe = clf.fit(X_train[rfe_features_set], y_train).score(X_test[rfe_features_set], y_test) val_auc_rfe = rfe.grid_scores_[n_features_rfecv] shap_feature_set = X_train.columns[shap_elimination.get_reduced_features_set(n_features_shap)] test_auc_shap = clf.fit(X_train[shap_feature_set], y_train).score(X_test[shap_feature_set], y_test) val_auc_shap = shap_report[shap_report.num_features == n_features_shap]['val_metric_mean'].values[0] shap_feature_set_size_rfe = X_train.columns[shap_elimination.get_reduced_features_set(n_features_rfecv)] test_auc_shap_size_rfe = clf.fit(X_train[shap_feature_set_size_rfe], y_train).score(X_test[shap_feature_set_size_rfe], y_test) val_auc_shap_size_rfe = shap_report[shap_report.num_features == n_features_rfecv]['val_metric_mean'].values[0] # Plot Test and Validation Performance variants = ('All 50 features', f'RFECV {n_features_rfecv} features', f'ShapRFECV {n_features_shap} features', f'ShapRFECV {n_features_rfecv} features') results_test = [test_auc_full, test_auc_rfe, test_auc_shap, test_auc_shap_size_rfe] results_val = [val_auc_full, val_auc_rfe, val_auc_shap, val_auc_shap_size_rfe] ax = pd.DataFrame({'CV Validation AUC': results_val, 'Test AUC': results_test}, index=variants).plot.bar(ylim=(0.5,0.6), rot=10, title='Comparison of RFECV and ShapRFECV', figsize=(10,5)) plt.axhline(y=0.5) ax.set_ylabel("Model Performance") plt.show()
###Output _____no_output_____
###Markdown ShapRFECV - Recursive Feature Elimination using SHAP importance[![open in colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ing-bank/probatus/blob/master/docs/tutorials/nb_shap_feature_elimination.ipynb)Recursive Feature Elimination allows you to efficiently reduce the number of features in your dataset, without losing the predictive power of the model. `probatus` implements the following feature elimination routine for **tree-based & linear models**: While any features are left, iterate: 1. (Optional) Tune hyperparameters, in case a sklearn-compatible search CV, e.g. `GridSearchCV`, `RandomizedSearchCV` or `BayesSearchCV`, is passed as `clf`, 2. Calculate SHAP feature importance using Cross-Validation, 3. 
Remove `step` lowest importance features.The functionality is similar to [RFECV](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html), yet it removes the lowest importance features based on SHAP feature importance. It also supports the use of any hyperparameter search schema that is consistent with the sklearn API, e.g. [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html), [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) and [BayesSearchCV](https://scikit-optimize.github.io/stable/modules/generated/skopt.BayesSearchCV.html#skopt.BayesSearchCV) passed as `clf`, thanks to which you can perform hyperparameter optimization at each step of the search and tune the model for each feature set. Lastly, it supports categorical features (`object` and `category` dtype) and missing values in the data, as long as the model supports them. The main advantages of using this routine are:- It uses a tree-based or a linear model to detect the complex relations between features and the target.- It uses SHAP importance, which is one of the most reliable ways to estimate feature importance. Unlike many other techniques, it works with missing values and categorical variables.- Supports the use of sklearn-compatible hyperparameter search schemas, e.g. [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html), [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) and [BayesSearchCV](https://scikit-optimize.github.io/stable/modules/generated/skopt.BayesSearchCV.html#skopt.BayesSearchCV), in order to optimize hyperparameters at each iteration. This way you can assess if the removal of a given feature reduces the predictive power, or simply requires additional tuning of the model.- You can also provide a list of features that should not be eliminated, e.g. in case of prior knowledge.The disadvantages are:- Removing the lowest [SHAP](https://shap.readthedocs.io/en/latest/) importance feature does not always translate to choosing the feature with the lowest impact on a model's performance. Shap importance illustrates how strongly a given feature affects the output of the model, while disregarding the correctness of this prediction.- Currently, the functionality only supports tree-based & linear binary classifiers; in the future the scope might be extended.- For large datasets, performing hyperparameter optimization can be very computationally expensive. For gradient boosted tree models, one alternative is to use early stopping of the training step. 
For this, see [EarlyStoppingShapRFECV](EarlyStoppingShapRFECV) Setup the datasetIn order to use the functionality, let's set up an example dataset with:- 18 numerical features- 1 static feature- 1 feature with missing values
###Code %%capture !pip install probatus !pip install lightgbm from probatus.feature_elimination import ShapRFECV from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split import numpy as np import pandas as pd import lightgbm from sklearn.model_selection import RandomizedSearchCV feature_names = ['f1', 'f2_missing', 'f3_static', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15', 'f16', 'f17', 'f18', 'f19', 'f20'] # Prepare two samples X, y = make_classification(n_samples=1000, class_sep=0.05, n_informative=6, n_features=20, random_state=0, n_redundant=10, n_clusters_per_class=1) X = pd.DataFrame(X, columns=feature_names) X['f2_missing'] = X['f2_missing'].apply(lambda x: x if np.random.rand()<0.8 else np.nan) X['f3_static'] = 0 #First 5 rows of first 5 columns X[feature_names[:5]].head()
###Output _____no_output_____
###Markdown Set up the model and model tuningYou need to set up the model that you would like to use in the feature elimination. `probatus` requires a **tree-based or linear binary classifier** in order to speed up the computation of SHAP feature importance at each step. We recommend using [LGBMClassifier](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMClassifier.html), which by default handles missing values and categorical features.The example below applies randomized search in order to optimize the hyperparameters of the model at each iteration of the search.
###Code clf = lightgbm.LGBMClassifier(max_depth=5, class_weight='balanced') param_grid = { 'n_estimators': [5, 7, 10], 'num_leaves': [3, 5, 7, 10], } search = RandomizedSearchCV(clf, param_grid)
###Output _____no_output_____
###Markdown Apply ShapRFECVNow let's apply the [ShapRFECV](https://ing-bank.github.io/probatus/api/feature_elimination.html).
###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3) report = shap_elimination.fit_compute(X, y)
###Output _____no_output_____
###Markdown At the end of the process, you can investigate the results for each iteration.
###Code # Selected columns of the report for each iteration report[['num_features', 'features_set', 'val_metric_mean']]
###Output _____no_output_____
###Markdown Once the process is completed, you can visualize the results. Let's investigate the performance plot. In this case, the Validation AUC is highest at 11 features, with another local peak at 6 features.
###Code performance_plot = shap_elimination.plot()
###Output _____no_output_____
###Markdown Let's see the final feature set:
###Code shap_elimination.get_reduced_features_set(num_features=6)
###Output _____no_output_____
###Markdown You can also provide a list of features that should not be eliminated. Say based on your prior knowledge you know that the features `f10,f19,f15` are important and should not be eliminated. This can be done by providing a list of columns to the `columns_to_keep` parameter in the `fit()` function. 
###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3 ,min_features_to_select=4) report = shap_elimination.fit_compute(X, y, columns_to_keep=['f10','f15','f19']) performance_plot = shap_elimination.plot() ###Output _____no_output_____ ###Markdown Let's see the final feature set: ###Code shap_elimination.get_reduced_features_set(num_features=4) ###Output _____no_output_____ ###Markdown EarlyStoppingShapRFECV[Early stopping](https://en.wikipedia.org/wiki/Early_stopping) is a type of regularization, common in [gradient boosted trees](https://en.wikipedia.org/wiki/Gradient_boostingGradient_tree_boosting). Supported packages are: [LightGBM](https://lightgbm.readthedocs.io/en/latest/index.html), [XGBoost](https://xgboost.readthedocs.io/en/latest/index.html) and [CatBoost](https://catboost.ai/en/docs/). It consists of measuring how well the model performs after each base learner is added to the ensemble tree, using a relevant scoring metric. If this metric does not improve after a certain number of training steps, the training can be stopped before the maximum number of base learners is reached. Early stopping is thus a way of mitigating overfitting in a relatively cheaply, without having to find the ideal regularization hyperparameters. It is particularly useful for handling large datasets, since it reduces the number of training steps which can decrease the modelling time.`EarlyStoppingShapRFECV` is a child of `ShapRFECV` with limited support for early stopping and the example below shows how to use it with LightGBM. ###Code from probatus.feature_elimination import EarlyStoppingShapRFECV clf = lightgbm.LGBMClassifier(n_estimators=200, max_depth=3) # Run feature elimination shap_elimination = EarlyStoppingShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', eval_metric='auc', early_stopping_rounds=5, n_jobs=3) report = shap_elimination.fit_compute(X, y) # Make plots performance_plot = shap_elimination.plot() # Get final feature set final_features_set = shap_elimination.get_reduced_features_set(num_features=9) ###Output _____no_output_____ ###Markdown ShapRFECV - Recursive Feature Elimination using SHAP importance[![open in colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ing-bank/probatus/blob/master/docs/tutorials/nb_shap_feature_elimination.ipynb)Recursive Feature Elimination allows you to efficiently reduce the number of features in your dataset, without losing the predictive power of the model. `probatus` implements the following feature elimination routine for **tree-based & linear models**: While any features left, iterate: 1. (Optional) Tune hyperparameters, in case sklearn compatible search CV e.g. `GridSearchCV` or `RandomizedSearchCV` or `BayesSearchCV`are passed as clf, 2. Calculate SHAP feature importance using Cross-Validation, 3. Remove `step` lowest importance features.The functionality is similar to [RFECV](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html), yet it removes the lowest importance features, based on SHAP features importance. It also supports the use of any hyperparameter search schema that is consistent with sklearn API e.g. 
[GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html), [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) and [BayesSearchCV](https://scikit-optimize.github.io/stable/modules/generated/skopt.BayesSearchCV.htmlskopt.BayesSearchCV) passed as a `clf`, thanks to which you can perform hyperparameter optimization at each step of the search.hyperparameters of the model at each round, to tune the model for each features set. Lastly, it supports categorical features (`object` and `category` dtype) and missing values in the data, as long as the model supports them. The main advantages of using this routine are:- It uses a tree-based or a linear model to detect the complex relations between features and the target.- It uses SHAP importance, which is one of the most reliable ways to estimate features importance. Unlike many other techniques, it works with missing values and categorical variables.- Supports the use of sklearn compatible hyperparameter search schemas e.g. [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html), [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) and [BayesSearchCV](https://scikit-optimize.github.io/stable/modules/generated/skopt.BayesSearchCV.htmlskopt.BayesSearchCV), in order to optimize hyperparameters at each iteration. This way you can assess if the removal of a given feature reduces the predictive power, or simply requires additional tuning of the model.- You can also provide a list of features that should not be eliminated e.g. incase of prior knowledge.The disadvantages are:- Removing lowest [SHAP](https://shap.readthedocs.io/en/latest/) importance feature does not always translate to choosing the feature with the lowest impact on a model's performance. Shap importance illustrates how strongly a given feature affects the output of the model, while disregarding correctness of this prediction.- Currently, the functionality only supports tree-based & linear binary classifiers, in the future the scope might be extended.- For large datasets, performing hyperparameter optimization can be very computationally expensive. For gradient boosted tree models, one alternative is to use early stopping of the training step. 
For this, see [EarlyStoppingShapRFECV](EarlyStoppingShapRFECV) Setup the datasetIn order to use the functionality, let's set up an example dataset with:- 18 numerical features- 1 static feature- 1 static feature- 1 feature with missing values ###Code %%capture !pip install probatus !pip install lightgbm from probatus.feature_elimination import ShapRFECV from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split import numpy as np import pandas as pd import lightgbm from sklearn.model_selection import RandomizedSearchCV feature_names = ['f1', 'f2_missing', 'f3_static', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15', 'f16', 'f17', 'f18', 'f19', 'f20'] # Prepare two samples X, y = make_classification(n_samples=1000, class_sep=0.05, n_informative=6, n_features=20, random_state=0, n_redundant=10, n_clusters_per_class=1) X = pd.DataFrame(X, columns=feature_names) X['f2_missing'] = X['f2_missing'].apply(lambda x: x if np.random.rand()<0.8 else np.nan) X['f3_static'] = 0 #First 5 rows of first 5 columns X[feature_names[:5]].head() ###Output _____no_output_____ ###Markdown Set up the model and model tuningYou need to set up the model that you would like to use in the feature elimination. `probatus` requires a **tree-based or linear binary classifier** in order to speed up the computation of SHAP feature importance at each step. We recommend using [LGBMClassifier](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMClassifier.html), which by default handles missing values and categorical features.The example below applies randomized search in order to optimize the hyperparameters of the model at each iteration of the search. ###Code clf = lightgbm.LGBMClassifier(max_depth=5, class_weight='balanced') param_grid = { 'n_estimators': [5, 7, 10], 'num_leaves': [3, 5, 7, 10], } search = RandomizedSearchCV(clf, param_grid) ###Output _____no_output_____ ###Markdown Apply ShapRFECVNow let's apply the [ShapRFECV](https://ing-bank.github.io/probatus/api/feature_elimination.html). ###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3) report = shap_elimination.fit_compute(X, y) ###Output _____no_output_____ ###Markdown At the end of the process, you can investigate the results for each iteration. ###Code #First 5 rows of first 5 columns report[['num_features', 'features_set', 'val_metric_mean']] ###Output _____no_output_____ ###Markdown Once the process is completed, you can visualize the results. Let's investigate the performance plot. In this case, the Validation AUC score has the highest Validation AUC at 11 features and a peak at 6 features. ###Code performance_plot = shap_elimination.plot() ###Output _____no_output_____ ###Markdown Let's see the final feature set: ###Code shap_elimination.get_reduced_features_set(num_features=6) ###Output _____no_output_____ ###Markdown You can also provide a list of features that should not be eliminated. Say based on your prior knowledge you know that the features `f10,f19,f15` are important and should not be eliminated. This can be done by providing a list of columns to `columns_to_keep` parameter in the `fit()` function. 
###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3 ,min_features_to_select=4) report = shap_elimination.fit_compute(X, y, columns_to_keep=['f10','f15','f19']) performance_plot = shap_elimination.plot() ###Output _____no_output_____ ###Markdown Let's see the final feature set: ###Code shap_elimination.get_reduced_features_set(num_features=4) ###Output _____no_output_____ ###Markdown EarlyStoppingShapRFECV[Early stopping](https://en.wikipedia.org/wiki/Early_stopping) is a type of regularization, common in [gradient boosted trees](https://en.wikipedia.org/wiki/Gradient_boostingGradient_tree_boosting), such as [LightGBM](https://lightgbm.readthedocs.io/en/latest/index.html) and [XGBoost](https://xgboost.readthedocs.io/en/latest/index.html). It consists of measuring how well the model performs after each base learner is added to the ensemble tree, using a relevant scoring metric. If this metric does not improve after a certain number of training steps, the training can be stopped before the maximum number of base learners is reached. Early stopping is thus a way of mitigating overfitting in a relatively cheaply, without having to find the ideal regularization hyperparameters. It is particularly useful for handling large datasets, since it reduces the number of training steps which can decrease the modelling time.`EarlyStoppingShapRFECV` is a child of `ShapRFECV` with limited support for early stopping and the example below shows how to use it with LightGBM. ###Code from probatus.feature_elimination import EarlyStoppingShapRFECV clf = lightgbm.LGBMClassifier(n_estimators=200, max_depth=3) # Run feature elimination shap_elimination = EarlyStoppingShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', eval_metric='auc', early_stopping_rounds=5, n_jobs=3) report = shap_elimination.fit_compute(X, y) # Make plots performance_plot = shap_elimination.plot() # Get final feature set final_features_set = shap_elimination.get_reduced_features_set(num_features=9) ###Output _____no_output_____ ###Markdown ShapRFECV - Recursive Feature Elimination using SHAP importance[![open in colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ing-bank/probatus/blob/master/docs/tutorials/nb_shap_feature_elimination.ipynb)Recursive Feature Elimination allows to efficiently reduce the number of features in your dataset, without losing the predictive power of the model. `probatus` implements the following feature elimination routine for **tree-based & linear models**: While any features left, iterate: 1. (Optional) Tune hyperparameters, in case `GridSearchCV` or `RandomSearchCV` are provided as estimators, 2. Calculate SHAP feature importance using Cross-Validation, 3. Remove `step` lowest importance features.The functionality is similar to [RFECV](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html), yet, it removes the lowest importance features, based on SHAP features importance. It also supports the use of [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) and [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) passed as a `clf`, thanks to which you can perform hyperparameter optimization at each step of the search.hyperparameters of the model at each round, to tune the model for each features set. 
Lastly, it supports categorical features (`object` and `category` dtype) and missing values in the data, as long as the model supports them. The main advantages of using this routine are:- It uses a tree-based or a linear model to detect the complex relations between features and the target.- It uses SHAP importance, which is one of the most reliable ways to estimate features importance. Unlike many other techniques, it works with missing values and categorical variables.- Supports the use of [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) and [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) in order to optimize hyperparameters at each iteration. This way you can assess if the removal of a given feature reduces the predictive power, or simply requires additional tuning of the model.- You can also provide a list of features, that should not be eliminated. E.g incase of prior knowledge.The disadvantages are:- Removing lowest [SHAP](https://shap.readthedocs.io/en/latest/) importance feature does not always translate to choosing the feature with lowest impact on model's performance. Shap importance illustrates how strongly a given feature affects the output of the model, while disregarding correctness of this prediction.- Currently, the functionality only supports tree-based & linear binary classifiers, in the future the scope might be extended. Setup the datasetIn order to use the functionality, let's set up an example dataset with:- 17 numerical features- 1 categorical feature- 1 static feature- 1 static feature- 1 feature with missing values ###Code %%capture !pip install probatus !pip install lightgbm from probatus.feature_elimination import ShapRFECV from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split import numpy as np import pandas as pd import lightgbm from sklearn.model_selection import RandomizedSearchCV feature_names = ['f1_categorical', 'f2_missing', 'f3_static', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15', 'f16', 'f17', 'f18', 'f19', 'f20'] # Prepare two samples X, y = make_classification(n_samples=1000, class_sep=0.05, n_informative=6, n_features=20, random_state=0, n_redundant=10, n_clusters_per_class=1) X = pd.DataFrame(X, columns=feature_names) X['f1_categorical'] = X['f1_categorical'].apply(lambda x: str(np.round(x*10))) X['f2_missing'] = X['f2_missing'].apply(lambda x: x if np.random.rand()<0.8 else np.nan) X['f3_static'] = 0 #First 5 rows of first 5 columns X[feature_names[:5]].head() ###Output _____no_output_____ ###Markdown Set up the model and model tuningYou need to set up the model that you would like to use in the feature elimination. `probatus` requires a **tree-based or linear binary classifier** in order to speed up the computation of SHAP feature importance at each step. We recommend using [LGBMClassifier](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMClassifier.html), which by default handles missing values and categorical features.The example below applies randomized search in order to optimize the hyperparameters of the model at each iteration of the search. 
###Code clf = lightgbm.LGBMClassifier(max_depth=5, class_weight='balanced') param_grid = { 'n_estimators': [5, 7, 10], 'num_leaves': [3, 5, 7, 10], } search = RandomizedSearchCV(clf, param_grid) ###Output _____no_output_____ ###Markdown Apply ShapRFECVNow let's apply the [ShapRFECV](https://ing-bank.github.io/probatus/api/feature_elimination.html). ###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3) report = shap_elimination.fit_compute(X, y) ###Output _____no_output_____ ###Markdown At the end of the process, you can investigate the results for each iteration. ###Code #First 5 rows of first 5 columns report[['num_features', 'features_set', 'val_metric_mean']] ###Output _____no_output_____ ###Markdown Once the process is completed, you can visualize the results. Let's investigate the performance plot. In this case, the Validation AUC score has a peak at 11 features. ###Code performance_plot = shap_elimination.plot() ###Output _____no_output_____ ###Markdown Let's see the final feature set: ###Code shap_elimination.get_reduced_features_set(num_features=11) ###Output _____no_output_____ ###Markdown You can also provide a list of features that sholud not be eliminated. Say based on your prior knowledge you know that features `f10,f19,f15` are important and sholud not be eliminated.This can be done by providing a list of columns to `columns_to_keep` parameter in the fit function. ###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3 ,min_features_to_select=4) report = shap_elimination.fit_compute(X, y,columns_to_keep=['f10','f15','f19']) performance_plot = shap_elimination.plot() ###Output _____no_output_____ ###Markdown Let's see the final feature set: ###Code shap_elimination.get_reduced_features_set(num_features=4) ###Output _____no_output_____ ###Markdown ShapRFECV vs RFECV In this section we will compare the performance of the model trained on the features selected using the probatus [ShapRFECV](https://ing-bank.github.io/probatus/api/feature_elimination.html) and the [sklearn RFECV](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html).In order to compare them let's first prepare a dataset, and a model that will be applied: ###Code from probatus.feature_elimination import ShapRFECV import numpy as np import pandas as pd import lightgbm from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split, cross_val_score from sklearn.feature_selection import RFECV import matplotlib.pyplot as plt # Prepare train and test data: X, y = make_classification(n_samples=10000, class_sep=0.1, n_informative=40, n_features=50, random_state=0, n_clusters_per_class=10) X = pd.DataFrame(X) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42) # Set up the model: clf = lightgbm.LGBMClassifier(n_estimators=10, num_leaves=7) ###Output _____no_output_____ ###Markdown Now, we can run ShapRFECV and RFECV with the same parameters, to extract the optimal feature sets: ###Code # Run RFECV and ShapRFECV with the same parameters rfe = RFECV(clf, step=1, cv=20, scoring='roc_auc', n_jobs=3).fit(X_train, y_train) shap_elimination = ShapRFECV(clf=clf, step=1, cv=20, scoring='roc_auc', n_jobs=3) shap_report = shap_elimination.fit_compute(X_train, y_train) # Compare the CV Validation AUC for different number of features in each method. 
ax = pd.DataFrame({'RFECV Validation AUC': list(reversed(rfe.grid_scores_)), 'ShapRFECV Validation AUC': shap_report['val_metric_mean'].values.tolist()}, index=shap_report['num_features'].values.tolist()).plot(ylim=(0.5,0.7), rot=10, title='Comparison of RFECV and ShapRFECV', figsize=(10,5)) ax.set_ylabel("Model Performance") ax.set_xlabel("Number of features") ax.invert_xaxis() plt.show() ###Output _____no_output_____ ###Markdown The plot above presents the averaged CV Validation AUC of model performance for each round of the RFE process in both ShapRFECV and RFECV. The optimal number of features is 21 for the former, and 13 for the latter.Now we will compare the performance of the model trained on:- All 50 available features (baseline),- 13 features selected by RFECV (final),- 21 features selected by ShapRFECV (final),- 13 feature selected by ShapRFECV (baseline). ###Code n_features_shap = 21 n_features_rfecv = rfe.n_features_ # Calculate the AUC for the models with different feature sets test_auc_full = clf.fit(X_train, y_train).score(X_test, y_test) val_auc_full = np.mean(cross_val_score(clf, X_train, y_train, cv=10)) rfe_features_set = X_train.columns[rfe.support_] test_auc_rfe = clf.fit(X_train[rfe_features_set], y_train).score(X_test[rfe_features_set], y_test) val_auc_rfe = rfe.grid_scores_[n_features_rfecv] shap_feature_set = X_train.columns[shap_elimination.get_reduced_features_set(n_features_shap)] test_auc_shap = clf.fit(X_train[shap_feature_set], y_train).score(X_test[shap_feature_set], y_test) val_auc_shap = shap_report[shap_report.num_features == n_features_shap]['val_metric_mean'].values[0] shap_feature_set_size_rfe = X_train.columns[shap_elimination.get_reduced_features_set(n_features_rfecv)] test_auc_shap_size_rfe = clf.fit(X_train[shap_feature_set_size_rfe], y_train).score(X_test[shap_feature_set_size_rfe], y_test) val_auc_shap_size_rfe = shap_report[shap_report.num_features == n_features_rfecv]['val_metric_mean'].values[0] # Plot Test and Validation Performance variants = ('All 50 features', f'RFECV {n_features_rfecv} features', f'ShapRFECV {n_features_shap} features', f'ShapRFECV {n_features_rfecv} features') results_test = [test_auc_full, test_auc_rfe, test_auc_shap, test_auc_shap_size_rfe] results_val = [val_auc_full, val_auc_rfe, val_auc_shap, val_auc_shap_size_rfe] ax = pd.DataFrame({'CV Validation AUC': results_val, 'Test AUC': results_test}, index=variants).plot.bar(ylim=(0.5,0.6), rot=10, title='Comparison of RFECV and ShapRFECV', figsize=(10,5)) plt.axhline(y=0.5) ax.set_ylabel("Model Performance") plt.show() ###Output _____no_output_____ ###Markdown ShapRFECV - Recursive Feature Elimination using SHAP importance[![open in colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ing-bank/probatus/blob/master/docs/tutorials/nb_shap_feature_elimination.ipynb)Recursive Feature Elimination allows to efficiently reduce the number of features in your dataset, without losing the predictive power of the model. `probatus` implements the following feature elimination routine for **tree-based & linear models**: While any features left, iterate: 1. (Optional) Tune hyperparameters, in case `GridSearchCV` or `RandomSearchCV` are provided as estimators, 2. Calculate SHAP feature importance using Cross-Validation, 3. 
Remove `step` lowest importance features.The functionality is similar to [RFECV](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html), yet, it removes the lowest importance features, based on SHAP features importance. It also supports the use of [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) and [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) passed as a `clf`, thanks to which you can perform hyperparameter optimization at each step of the search.hyperparameters of the model at each round, to tune the model for each features set. Lastly, it supports categorical features (`object` and `category` dtype) and missing values in the data, as long as the model supports them. The main advantages of using this routine are:- It uses a tree-based or a linear model to detect the complex relations between features and the target.- It uses SHAP importance, which is one of the most reliable ways to estimate features importance. Unlike many other techniques, it works with missing values and categorical variables.- Supports the use of [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) and [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) in order to optimize hyperparameters at each iteration. This way you can assess if the removal of a given feature reduces the predictive power, or simply requires additional tuning of the model.- You can also provide a list of features, that should not be eliminated. E.g incase of prior knowledge.The disadvantages are:- Removing lowest [SHAP](https://shap.readthedocs.io/en/latest/) importance feature does not always translate to choosing the feature with lowest impact on model's performance. Shap importance illustrates how strongly a given feature affects the output of the model, while disregarding correctness of this prediction.- Currently, the functionality only supports tree-based & linear binary classifiers, in the future the scope might be extended. Setup the datasetIn order to use the functionality, let's set up an example dataset with:- 17 numerical features- 1 categorical feature- 1 static feature- 1 static feature- 1 feature with missing values ###Code from probatus.feature_elimination import ShapRFECV from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split import numpy as np import pandas as pd import lightgbm from sklearn.model_selection import RandomizedSearchCV feature_names = ['f1_categorical', 'f2_missing', 'f3_static', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15', 'f16', 'f17', 'f18', 'f19', 'f20'] # Prepare two samples X, y = make_classification(n_samples=1000, class_sep=0.05, n_informative=6, n_features=20, random_state=0, n_redundant=10, n_clusters_per_class=1) X = pd.DataFrame(X, columns=feature_names) X['f1_categorical'] = X['f1_categorical'].apply(lambda x: str(np.round(x*10))) X['f2_missing'] = X['f2_missing'].apply(lambda x: x if np.random.rand()<0.8 else np.nan) X['f3_static'] = 0 #First 5 rows of first 5 columns X[feature_names[:5]].head() ###Output _____no_output_____ ###Markdown Set up the model and model tuningYou need to set up the model that you would like to use in the feature elimination. 
`probatus` requires a **tree-based or linear binary classifier** in order to speed up the computation of SHAP feature importance at each step. We recommend using [LGBMClassifier](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMClassifier.html), which by default handles missing values and categorical features. The example below applies randomized search in order to optimize the hyperparameters of the model at each iteration of the search. ###Code clf = lightgbm.LGBMClassifier(max_depth=5, class_weight='balanced') param_grid = { 'n_estimators': [5, 7, 10], 'num_leaves': [3, 5, 7, 10], } search = RandomizedSearchCV(clf, param_grid) ###Output _____no_output_____ ###Markdown Apply ShapRFECV Now let's apply the [ShapRFECV](https://ing-bank.github.io/probatus/api/feature_elimination.html). ###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3) report = shap_elimination.fit_compute(X, y) ###Output _____no_output_____ ###Markdown At the end of the process, you can investigate the results for each iteration. ###Code # Selected columns of the elimination report, one row per iteration report[['num_features', 'features_set', 'val_metric_mean']] ###Output _____no_output_____ ###Markdown Once the process is completed, you can visualize the results. Let's investigate the performance plot. In this case, the Validation AUC score has a peak at 11 features. ###Code performance_plot = shap_elimination.plot() ###Output _____no_output_____ ###Markdown Let's see the final feature set: ###Code shap_elimination.get_reduced_features_set(num_features=11) ###Output _____no_output_____ ###Markdown You can also provide a list of features that should not be eliminated. Say, based on your prior knowledge, you know that the features `f10,f19,f15` are important and should not be eliminated. This can be done by providing a list of columns to the `columns_to_keep` parameter of the fit function.
###Code shap_elimination = ShapRFECV( clf=search, step=0.2, cv=10, scoring='roc_auc', n_jobs=3 ,min_features_to_select=4) report = shap_elimination.fit_compute(X, y,columns_to_keep=['f10','f15','f19']) performance_plot = shap_elimination.plot() ###Output _____no_output_____ ###Markdown Let's see the final feature set: ###Code shap_elimination.get_reduced_features_set(num_features=4) ###Output _____no_output_____ ###Markdown ShapRFECV vs RFECV In this section we will compare the performance of the model trained on the features selected using the probatus [ShapRFECV](https://ing-bank.github.io/probatus/api/feature_elimination.html) and the [sklearn RFECV](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html).In order to compare them let's first prepare a dataset, and a model that will be applied: ###Code from probatus.feature_elimination import ShapRFECV import numpy as np import pandas as pd import lightgbm from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split, cross_val_score from sklearn.feature_selection import RFECV import matplotlib.pyplot as plt # Prepare train and test data: X, y = make_classification(n_samples=10000, class_sep=0.1, n_informative=40, n_features=50, random_state=0, n_clusters_per_class=10) X = pd.DataFrame(X) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42) # Set up the model: clf = lightgbm.LGBMClassifier(n_estimators=10, num_leaves=7) ###Output _____no_output_____ ###Markdown Now, we can run ShapRFECV and RFECV with the same parameters, to extract the optimal feature sets: ###Code # Run RFECV and ShapRFECV with the same parameters rfe = RFECV(clf, step=1, cv=20, scoring='roc_auc', n_jobs=3).fit(X_train, y_train) shap_elimination = ShapRFECV(clf=clf, step=1, cv=20, scoring='roc_auc', n_jobs=3) shap_report = shap_elimination.fit_compute(X_train, y_train) # Compare the CV Validation AUC for different number of features in each method. ax = pd.DataFrame({'RFECV Validation AUC': list(reversed(rfe.grid_scores_)), 'ShapRFECV Validation AUC': shap_report['val_metric_mean'].values.tolist()}, index=shap_report['num_features'].values.tolist()).plot(ylim=(0.5,0.7), rot=10, title='Comparison of RFECV and ShapRFECV', figsize=(10,5)) ax.set_ylabel("Model Performance") ax.set_xlabel("Number of features") ax.invert_xaxis() plt.show() ###Output _____no_output_____ ###Markdown The plot above presents the averaged CV Validation AUC of model performance for each round of the RFE process in both ShapRFECV and RFECV. The optimal number of features is 21 for the former, and 13 for the latter.Now we will compare the performance of the model trained on:- All 50 available features (baseline),- 13 features selected by RFECV (final),- 21 features selected by ShapRFECV (final),- 13 feature selected by ShapRFECV (baseline). 
###Code n_features_shap = 21 n_features_rfecv = rfe.n_features_ # Calculate the AUC for the models with different feature sets test_auc_full = clf.fit(X_train, y_train).score(X_test, y_test) val_auc_full = np.mean(cross_val_score(clf, X_train, y_train, cv=10)) rfe_features_set = X_train.columns[rfe.support_] test_auc_rfe = clf.fit(X_train[rfe_features_set], y_train).score(X_test[rfe_features_set], y_test) val_auc_rfe = rfe.grid_scores_[n_features_rfecv] shap_feature_set = X_train.columns[shap_elimination.get_reduced_features_set(n_features_shap)] test_auc_shap = clf.fit(X_train[shap_feature_set], y_train).score(X_test[shap_feature_set], y_test) val_auc_shap = shap_report[shap_report.num_features == n_features_shap]['val_metric_mean'].values[0] shap_feature_set_size_rfe = X_train.columns[shap_elimination.get_reduced_features_set(n_features_rfecv)] test_auc_shap_size_rfe = clf.fit(X_train[shap_feature_set_size_rfe], y_train).score(X_test[shap_feature_set_size_rfe], y_test) val_auc_shap_size_rfe = shap_report[shap_report.num_features == n_features_rfecv]['val_metric_mean'].values[0] # Plot Test and Validation Performance variants = ('All 50 features', f'RFECV {n_features_rfecv} features', f'ShapRFECV {n_features_shap} features', f'ShapRFECV {n_features_rfecv} features') results_test = [test_auc_full, test_auc_rfe, test_auc_shap, test_auc_shap_size_rfe] results_val = [val_auc_full, val_auc_rfe, val_auc_shap, val_auc_shap_size_rfe] ax = pd.DataFrame({'CV Validation AUC': results_val, 'Test AUC': results_test}, index=variants).plot.bar(ylim=(0.5,0.6), rot=10, title='Comparison of RFECV and ShapRFECV', figsize=(10,5)) plt.axhline(y=0.5) ax.set_ylabel("Model Performance") plt.show() ###Output _____no_output_____
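###Markdown To make the elimination routine above more concrete, here is a deliberately simplified, probatus-free sketch of the same idea: repeatedly fit the model, rank features by mean absolute SHAP value, and drop the weakest one. Unlike `ShapRFECV` it removes a single feature per round and computes importance on the full training set rather than inside each CV fold, and it uses its own toy data (`X_toy`, `y_toy`), so treat it as an illustration, not a replacement. It assumes the `shap` package is available. ###Code
import numpy as np
import pandas as pd
import lightgbm
import shap
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Toy data, independent of the frames used above
X_toy, y_toy = make_classification(n_samples=500, n_features=15, n_informative=5, random_state=0)
X_toy = pd.DataFrame(X_toy, columns=[f'f{i}' for i in range(15)])

remaining = list(X_toy.columns)
history = []
while len(remaining) > 5:
    model = lightgbm.LGBMClassifier(n_estimators=50, num_leaves=7).fit(X_toy[remaining], y_toy)
    sv = shap.TreeExplainer(model).shap_values(X_toy[remaining])
    sv = np.array(sv[1]) if isinstance(sv, list) else np.array(sv)  # some shap versions return one array per class
    if sv.ndim == 3:                                                # ...or a (samples, features, classes) array
        sv = sv[..., 1]
    importance = np.abs(sv).mean(axis=0)                            # mean |SHAP| per feature
    auc = cross_val_score(model, X_toy[remaining], y_toy, cv=5, scoring='roc_auc').mean()
    history.append((len(remaining), auc))
    remaining.pop(int(np.argmin(importance)))                       # eliminate the least important feature

print(pd.DataFrame(history, columns=['num_features', 'val_auc']))
###Output _____no_output_____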
content/post/covid_analysis_python/DS_Py_part001.ipynb
###Markdown Welcome to the Notebook Importing modules Task 1 ###Code import pandas as pd import numpy as np import plotly.express as px import matplotlib.pyplot as plt print('modules are imported') ###Output modules are imported ###Markdown Task 1.1: Loading the Dataset ###Code dataset_url = 'https://raw.githubusercontent.com/datasets/covid-19/main/data/countries-aggregated.csv' fname = 'data/countries-aggregated.csv' df = pd.read_csv(fname) df_31May21 = df[df.Date == '2020-05-31'] df_31May21.head() ###Output _____no_output_____ ###Markdown Task 1.2: let's check the dataframe ###Code df_31May21.head() df_31May21.tail() ###Output _____no_output_____ ###Markdown let's check the shape of the dataframe ###Code df_31May21.shape df.shape ###Output _____no_output_____ ###Markdown Task 2.1 : let's do some preprocessing ###Code dfconf=df[df.Confirmed>0] dfconf.head() dfconf.shape ###Output _____no_output_____ ###Markdown let's see data related to a country for example Italy ###Code dfconf[dfconf.Country=='Italy'].head(10) ###Output _____no_output_____
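###Markdown As a quick follow-up (and to put the `plotly.express` import to use), here is a small sketch that builds on the `dfconf` frame from above; it assumes the CSV is sorted by date within each country, which is the usual layout of this dataset, and the column name `NewConfirmed` is just an example. ###Code
# Italy only: derive daily new cases from the cumulative counts, then plot them
italy = dfconf[dfconf.Country == 'Italy'].copy()
italy['NewConfirmed'] = italy.Confirmed.diff().fillna(italy.Confirmed)
px.line(italy, x='Date', y='NewConfirmed', title='Italy: daily new confirmed cases')
###Output _____no_output_____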
met.no.ipynb
###Markdown Process meteorological data from https://api.met.no/ https://www.met.no/en/free-meteorological-data ###Code from bs4 import BeautifulSoup import urllib.request from datetime import datetime, timedelta import pandas as pd import matplotlib.pyplot as plt from matplotlib import dates %config InlineBackend.figure_format = 'retina' import seaborn as sns sns.set(style='white') location_url = 'https://api.met.no/weatherapi/locationforecast/1.9/?lat=63.44&lon=10.37' response = urllib.request.urlopen(location_url) data = response.read() soup = BeautifulSoup(data,'lxml') # Example wind dictionary # {'beaufort': '2', 'id': 'ff', 'mps': '3.0', 'name': 'Svak vind'} wind = {} clouds_temp = {} for forecast in soup.find_all(name='time'): if forecast.windspeed and forecast.areamaxwindspeed: wind[datetime.strptime(forecast.attrs['from'],'%Y-%m-%dT%H:%M:%SZ')] = [int(forecast.windspeed.attrs['beaufort']), float(forecast.windspeed.attrs['mps']), float(forecast.areamaxwindspeed.attrs['mps']), float(forecast.cloudiness.attrs['percent'])] elif forecast.precipitation and forecast.mintemperature: clouds_temp[datetime.strptime(forecast.attrs['from'],'%Y-%m-%dT%H:%M:%SZ')] = [float(forecast.mintemperature.attrs['value']), float(forecast.maxtemperature.attrs['value']), float(forecast.precipitation.attrs['value'])] wind_pd = pd.DataFrame.from_dict(wind, orient='index', columns=['bft', 'mps', 'maxwind', 'cloudiness']) temp_pd = pd.DataFrame.from_dict(clouds_temp, orient='index', columns=['min_temp', 'max_temp', 'precipitation']) temp_pd['mean_temp'] = np.mean([temp_pd.min_temp, temp_pd.max_temp], axis=0) def plot_data(wind_pd,temp_pd, sunset=21, sunrise=8): ''' Create plot from dataframe. Index has to be in timestamps''' sns.set(font_scale=1.1, style='white') figure = plt.figure(figsize=(20,4)) ax = figure.add_subplot(111) ax.plot(wind_pd.index,wind_pd.bft, color='k', label='Beaufort', lw=2.5,alpha=.7) ax.plot(wind_pd.index,wind_pd.mps, label='m/s', ls=':', color='b') ax.plot(wind_pd.index,wind_pd.maxwind, label='m/s', ls=':', color='blue',alpha=.2) ax.set_ylabel('Wind speed') for time in wind_pd.index: if (time.hour > sunset) or (time.hour < sunrise): ax.axvspan(time, time + timedelta(hours=1), alpha=0.1, color='grey', lw=0) ax.set_ylim(0,5.5) plt.xticks(rotation='45') ax2 = ax.twinx() ax2.plot(wind_pd.index,wind_pd.cloudiness, label='Cloudiness',color='g',lw=4,alpha=.4) ax2.tick_params(axis='y', labelcolor='g') ax2.set_ylabel('Cloudiness [%]') hfmt = dates.DateFormatter('%a %H:%M') ax.xaxis.set_major_locator(dates.HourLocator()) ax.xaxis.set_major_formatter(hfmt) ax.set_xlim(wind_pd.index[0], wind_pd.index[-1]) plt.title('Wind and cloudiness forecast (Norwegian Meteorological Institute)', size=15) sns.despine(left=True) xlims = ax.get_xlim() # Safe here to transfer to next figure ############################################################################################### figure = plt.figure(figsize=(20,4)) ax = figure.add_subplot(111) ax.fill_between(temp_pd.index, temp_pd.min_temp, temp_pd.max_temp, facecolor='r', alpha=.5) ax.plot(temp_pd.index,temp_pd.mean_temp, color='red', lw=2.5,alpha=.8) ax.set_ylabel('Temperature min max [C]') for time in temp_pd.index: if (time.hour > sunset) or (time.hour < sunrise): ax.axvspan(time, time + timedelta(hours=1), alpha=0.1, color='grey', lw=0) ax.set_ylim(0,30) plt.xticks(rotation='45') ax2 = ax.twinx() ax2.plot(temp_pd.index,temp_pd.precipitation, label='Precipitation',color='b',lw=4,alpha=.8) ax2.tick_params(axis='y', labelcolor='b') 
ax2.set_ylabel('Precipitation [mm]') ax2.set_ylim(-1,20) hfmt = dates.DateFormatter('%a %H:%M') ax.xaxis.set_major_locator(dates.HourLocator()) ax.xaxis.set_major_formatter(hfmt) ax.set_xlim(xlims[0], xlims[1]) plt.title('Temperature and precipitation forecast', size=15) sns.despine(left=True) plot_data(wind_pd.iloc[:50],temp_pd.iloc[:60]) ###Output _____no_output_____
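###Markdown The hourly frames above are keyed by timestamp, so pandas resampling gives a compact daily summary of the forecast; a small sketch (the choice of aggregations per column is just an example): ###Code
# Ensure a DatetimeIndex, then collapse the hourly forecast into one row per day
temp_pd.index = pd.to_datetime(temp_pd.index)
wind_pd.index = pd.to_datetime(wind_pd.index)
daily_temp = temp_pd.resample('D').agg({'min_temp': 'min', 'max_temp': 'max', 'precipitation': 'sum'})
daily_wind = wind_pd.resample('D').agg({'bft': 'max', 'mps': 'mean', 'cloudiness': 'mean'})
print(daily_temp.join(daily_wind))
###Output _____no_output_____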
data/data_loading.ipynb
###Markdown Load datasets to mLab* https://www.kaggle.com/c/twitter-sentiment-analysis2/discussion* https://www.kaggle.com/kazanova/sentiment140 ###Code import pymongo import pandas as pd import numpy as np import random def db_connection(collection_name): # connect to mLab DB try: with open("../credentials/mlab_credentials.txt", 'r', encoding='utf-8') as f: [name,password,url,dbname]=f.read().splitlines() db_conn = pymongo.MongoClient("mongodb://{}:{}@{}/{}".format(name,password,url,dbname)) print ("DB connected successfully!!!") except pymongo.errors.ConnectionFailure as e: print ("Could not connect to DB: %s" % e) db = db_conn[dbname] collection = db[collection_name] return collection def kaggle_id(id): return "kaggle_train_" + str(id) def sentiment140_id(id): return "sentiment140_" + str(id) def random_sentiment(sentiment): sentiments_list = [-2,-1,0,1,2] return random.choice(sentiments_list) def random_location(dummy): location = random.choice(locations_list) #location = locations_list[0] lat = random_location_lat(location) lon = random_location_lon(location) return [lat,lon] def random_location_lat(location): #location = random.choice(locations_list) #location = locations_list[0] lat = location["lat_min"] + random.random()*(location["lat_max"] - location["lat_min"]) return lat def random_location_lon(location): #location = random.choice(locations_list) #location = locations_list[0] lon = location["lon_min"] + random.random()*(location["lon_max"] - location["lon_min"]) return lon db_collection_locations = db_connection("twitter_happiness_locations") locations_list = [location for location in db_collection_locations.find()] #locations_list db_collection = db_connection("twitter_happiness_test") # uncomment to delete #result = db_collection.delete_many({}) #print(result.deleted_count, " documents deleted") data_kaggle = pd.read_csv( "source/kaggle/train.csv", encoding='latin-1', header=0, names=["id_src","class","text"] ) data_kaggle["id"] = data_kaggle["id_src"].apply(kaggle_id) data_kaggle["class"] = data_kaggle["class"].apply(random_sentiment) #data_kaggle["lat"] = data_kaggle["sentiment"].apply(random_location_lat) #data_kaggle["lon"] = data_kaggle["sentiment"].apply(random_location_lon) data_kaggle["loc"] = data_kaggle.apply(random_location, axis=1) data_kaggle["lat"] = data_kaggle["loc"].apply(lambda loc: loc[0]) data_kaggle["lon"] = data_kaggle["loc"].apply(lambda loc: loc[1]) print(data_kaggle.shape) data_kaggle.head() db_collection.insert_many(data_kaggle.to_dict('records')) print(db_collection.count()) data_sentiment140 = pd.read_csv( "source/sentiment140/training.1600000.processed.noemoticon.csv", encoding='latin-1', header=None, names=["class","id_src","date","flag","user","text"] ) data_sentiment140["id"] = data_sentiment140["id_src"].apply(kaggle_id) data_sentiment140["class"] = data_sentiment140["class"].apply(random_sentiment) #data_sentiment140["lat"] = data_sentiment140["sentiment"].apply(random_location_lat) #data_sentiment140["lon"] = data_sentiment140["sentiment"].apply(random_location_lon) data_sentiment140["loc"] = data_sentiment140.apply(random_location, axis=1) data_sentiment140["lat"] = data_sentiment140["loc"].apply(lambda loc: loc[0]) data_sentiment140["lon"] = data_sentiment140["loc"].apply(lambda loc: loc[1]) data_sentiment140 = data_sentiment140.head(100000).copy() print(data_sentiment140.shape) data_sentiment140.head() db_collection.insert_many(data_sentiment140.to_dict('records')) print(db_collection.count()) for tweet in db_collection.find()[:10]: 
print(tweet) print(db_collection.count()) db_collection_tweets = db_connection("tweets") for tweet in db_collection_tweets.find()[:]: #print(tweet) print(tweet[""]) type(db_collection_tweets.find()[0]["created_at"]) location_query = { "coordinates": { "$ne": None } } for tweet in db_collection_tweets.find(location_query): print(tweet["coordinates"]) ###Output {'type': 'Point', 'coordinates': [-73.98044169, 40.77817731]} {'type': 'Point', 'coordinates': [-87.62362195, 41.88270986]} {'type': 'Point', 'coordinates': [-95.60276, 30.01138]} {'type': 'Point', 'coordinates': [-95.61707723, 29.91684185]} {'type': 'Point', 'coordinates': [-95.3698028, 29.7604267]} {'type': 'Point', 'coordinates': [-71.09973744, 42.34392621]} {'type': 'Point', 'coordinates': [-71.06138382, 42.36177402]} {'type': 'Point', 'coordinates': [-71.03925, 42.37274]} {'type': 'Point', 'coordinates': [-71.059773, 42.358431]} {'type': 'Point', 'coordinates': [-71.0150608, 42.36576817]} {'type': 'Point', 'coordinates': [-71.067605, 42.3016305]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0554013, 42.354474]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.06133, 42.36366]} {'type': 'Point', 'coordinates': [-71.11018, 42.32147]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [151.233, -33.95]} {'type': 'Point', 'coordinates': [151.23779297, -33.86357381]} {'type': 'Point', 'coordinates': [151.23779297, -33.86357381]} {'type': 'Point', 'coordinates': [151.23939102, -33.91889961]} {'type': 'Point', 'coordinates': [151.20797, -33.86751]} {'type': 'Point', 'coordinates': [151.24082, -33.82416]} {'type': 'Point', 'coordinates': [151.20797, -33.86751]} {'type': 'Point', 'coordinates': [-75.7, 45.4167]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.78, 45.29]} {'type': 'Point', 'coordinates': [-75.7617704, 45.3504295]} {'type': 'Point', 'coordinates': [-75.7327919, 45.3529251]} {'type': 'Point', 'coordinates': [-75.7, 45.4167]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.7001615, 45.4205713]} {'type': 'Point', 'coordinates': [-75.69303805, 45.4203787]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.5047333, 45.4558019]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.8986835, 45.3088185]} {'type': 'Point', 'coordinates': [-75.7048576, 45.3458686]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.7048576, 45.3458686]} {'type': 'Point', 'coordinates': [-75.69303805, 45.4203787]} {'type': 'Point', 'coordinates': [-79.3407, 43.7166]} {'type': 'Point', 'coordinates': [-79.3831843, 43.653226]} {'type': 'Point', 'coordinates': [-79.29298, 43.69045]} {'type': 'Point', 'coordinates': [-79.3798169, 43.6467155]} 
{'type': 'Point', 'coordinates': [-79.3407, 43.7166]} {'type': 'Point', 'coordinates': [-79.3472, 43.6785]} {'type': 'Point', 'coordinates': [-79.3831843, 43.653226]} {'type': 'Point', 'coordinates': [-79.3407, 43.7166]} {'type': 'Point', 'coordinates': [-79.34239, 43.69386]} {'type': 'Point', 'coordinates': [-79.3407, 43.7166]} {'type': 'Point', 'coordinates': [-79.4185527, 43.6671714]} {'type': 'Point', 'coordinates': [-79.4185527, 43.6671714]} {'type': 'Point', 'coordinates': [-79.3813459, 43.6667003]} {'type': 'Point', 'coordinates': [-79.3813459, 43.6667003]} {'type': 'Point', 'coordinates': [-79.3813459, 43.6667003]} {'type': 'Point', 'coordinates': [-79.3813459, 43.6667003]} {'type': 'Point', 'coordinates': [-79.3407, 43.7166]} {'type': 'Point', 'coordinates': [-74.0064, 40.7142]} {'type': 'Point', 'coordinates': [-73.98058, 40.75726]} {'type': 'Point', 'coordinates': [-74.0064, 40.7142]} {'type': 'Point', 'coordinates': [-73.95, 40.802]} {'type': 'Point', 'coordinates': [-76.97481767, 38.84355805]} {'type': 'Point', 'coordinates': [-87.69144, 41.90996]} {'type': 'Point', 'coordinates': [-87.62603, 41.8860699]} {'type': 'Point', 'coordinates': [-87.6297982, 41.8781136]} {'type': 'Point', 'coordinates': [-87.6570368, 41.9998468]} {'type': 'Point', 'coordinates': [-87.6297982, 41.8781136]} {'type': 'Point', 'coordinates': [-87.64146, 41.882984]} {'type': 'Point', 'coordinates': [-87.625, 41.88095]} {'type': 'Point', 'coordinates': [-87.627227, 41.877512]} {'type': 'Point', 'coordinates': [-87.6297982, 41.8781136]} {'type': 'Point', 'coordinates': [-87.663691, 41.949318]} {'type': 'Point', 'coordinates': [-87.6297982, 41.8781136]} {'type': 'Point', 'coordinates': [-95.5564777, 29.7246822]} {'type': 'Point', 'coordinates': [-95.39346752, 29.71384233]} {'type': 'Point', 'coordinates': [-71.067605, 42.3016305]} {'type': 'Point', 'coordinates': [-71.06, 42.36]} {'type': 'Point', 'coordinates': [-71.1337112, 42.3539038]} {'type': 'Point', 'coordinates': [-71.067605, 42.3016305]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.01644033, 42.36663851]} {'type': 'Point', 'coordinates': [-71.0565, 42.3577]} {'type': 'Point', 'coordinates': [-71.0554013, 42.354474]} {'type': 'Point', 'coordinates': [-71.067605, 42.2826027]} {'type': 'Point', 'coordinates': [151.2076775, -33.8694348]} {'type': 'Point', 'coordinates': [151.24327, -33.87601]} {'type': 'Point', 'coordinates': [151.20339443, -33.86756167]} {'type': 'Point', 'coordinates': [-75.58, 45.33]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.7093, 45.39975]} {'type': 'Point', 'coordinates': [-75.7327919, 45.3529251]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.68801498, 45.41494554]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.7048576, 45.3458686]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.6824915, 45.4227425]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} 
{'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.7, 45.4167]} {'type': 'Point', 'coordinates': [-79.41368, 43.72036]} {'type': 'Point', 'coordinates': [-79.3932783, 43.68859473]} {'type': 'Point', 'coordinates': [-79.3831843, 43.653226]} {'type': 'Point', 'coordinates': [-79.36200039, 43.66011808]} {'type': 'Point', 'coordinates': [-73.567256, 45.5016889]} {'type': 'Point', 'coordinates': [-73.61037, 45.5308]} {'type': 'Point', 'coordinates': [-73.567256, 45.5016889]} {'type': 'Point', 'coordinates': [-73.7428838, 45.4924809]} {'type': 'Point', 'coordinates': [-73.5374235, 45.6515135]} {'type': 'Point', 'coordinates': [-73.58863, 45.52089]} {'type': 'Point', 'coordinates': [-73.6214182, 45.511234]} {'type': 'Point', 'coordinates': [-73.567256, 45.5016889]} {'type': 'Point', 'coordinates': [-73.567256, 45.5016889]} {'type': 'Point', 'coordinates': [-73.567256, 45.5016889]} {'type': 'Point', 'coordinates': [-73.567256, 45.5016889]} {'type': 'Point', 'coordinates': [-73.7497567, 45.4985638]} {'type': 'Point', 'coordinates': [-73.567256, 45.5016889]} {'type': 'Point', 'coordinates': [-73.5744, 45.5038]} {'type': 'Point', 'coordinates': [-73.57938474, 45.48043933]} {'type': 'Point', 'coordinates': [-73.56055507, 45.50495845]} {'type': 'Point', 'coordinates': [-73.56833703, 45.51856151]} {'type': 'Point', 'coordinates': [-73.55516, 45.4973]} {'type': 'Point', 'coordinates': [-73.567256, 45.5016889]} {'type': 'Point', 'coordinates': [-73.5744, 45.5038]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.1734035, 41.3850639]} {'type': 'Point', 'coordinates': [2.12274313, 41.38079354]} {'type': 'Point', 'coordinates': [2.16243362, 41.41232416]} {'type': 'Point', 'coordinates': [2.19827193, 41.40045791]} {'type': 'Point', 'coordinates': [2.12798063, 41.41079152]} {'type': 'Point', 'coordinates': [2.1734035, 41.3850639]} {'type': 'Point', 'coordinates': [2.1734035, 41.3850639]} {'type': 'Point', 'coordinates': [2.1734035, 41.3850639]} {'type': 'Point', 'coordinates': [2.133314, 41.384445]} {'type': 'Point', 'coordinates': [2.1734035, 41.3850639]} {'type': 'Point', 'coordinates': [2.15345471, 41.41370308]} {'type': 'Point', 'coordinates': [2.1734035, 41.3850639]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [-74.00404355, 40.71098913]} {'type': 'Point', 'coordinates': [-73.99306342, 40.7577084]} {'type': 'Point', 'coordinates': [-73.96300948, 40.766445]} {'type': 'Point', 'coordinates': [-74.0039243, 40.7499179]} {'type': 'Point', 'coordinates': [-74.0064, 40.7142]} {'type': 'Point', 'coordinates': [-74.00578, 40.71964]} {'type': 'Point', 'coordinates': [-73.98591185, 40.7577936]} {'type': 'Point', 'coordinates': [-74.0064, 40.7142]} {'type': 'Point', 'coordinates': [-73.97034766, 40.7522775]} {'type': 'Point', 'coordinates': [-77.0313, 38.91673]} {'type': 'Point', 'coordinates': [-76.9734004, 38.8614408]} {'type': 'Point', 'coordinates': [-77.0420765, 38.9228277]} {'type': 'Point', 'coordinates': [-77.0420765, 38.9228277]} {'type': 'Point', 'coordinates': [-77.0420765, 38.9228277]} {'type': 'Point', 'coordinates': [-87.67878863, 41.9615796]} {'type': 'Point', 'coordinates': [-87.6353, 41.88192]} {'type': 'Point', 'coordinates': [-87.64191073, 41.88121028]} {'type': 'Point', 'coordinates': [-87.6297982, 41.8781136]} {'type': 'Point', 'coordinates': [-87.632496, 41.883222]} {'type': 'Point', 'coordinates': [-87.6297982, 
41.8781136]} {'type': 'Point', 'coordinates': [-87.6297982, 41.8781136]} {'type': 'Point', 'coordinates': [-87.632496, 41.883222]} {'type': 'Point', 'coordinates': [-87.6359787, 41.8787003]} {'type': 'Point', 'coordinates': [-87.71635778, 41.84626066]} {'type': 'Point', 'coordinates': [-87.632496, 41.883222]} {'type': 'Point', 'coordinates': [-95.52901, 29.68837]} {'type': 'Point', 'coordinates': [-71.00018119, 42.39408989]} {'type': 'Point', 'coordinates': [-71.024414, 42.363075]} {'type': 'Point', 'coordinates': [-71.0565, 42.3577]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.06398, 42.35626]} {'type': 'Point', 'coordinates': [-71.05347, 42.3214]} {'type': 'Point', 'coordinates': [151.02, -33.86]} {'type': 'Point', 'coordinates': [151.18333333, -33.86666667]} {'type': 'Point', 'coordinates': [151.1865065, -33.8731368]} {'type': 'Point', 'coordinates': [151.20876, -33.870451]} {'type': 'Point', 'coordinates': [151.2797891, -33.76017393]} {'type': 'Point', 'coordinates': [151.208114, -33.874639]} {'type': 'Point', 'coordinates': [-75.58, 45.33]} {'type': 'Point', 'coordinates': [-75.7581845, 45.3900918]} {'type': 'Point', 'coordinates': [-75.80736458, 45.34724048]} {'type': 'Point', 'coordinates': [-75.7, 45.4167]} {'type': 'Point', 'coordinates': [-75.58, 45.33]} {'type': 'Point', 'coordinates': [-75.71881, 45.4066199]} {'type': 'Point', 'coordinates': [-75.69362686, 45.38641267]} {'type': 'Point', 'coordinates': [-79.4125137, 43.6345291]} {'type': 'Point', 'coordinates': [-79.3407, 43.7166]} {'type': 'Point', 'coordinates': [-79.4289633, 43.6375599]} {'type': 'Point', 'coordinates': [-79.3407, 43.7166]} {'type': 'Point', 'coordinates': [-79.3831843, 43.653226]} {'type': 'Point', 'coordinates': [-79.37777, 43.6574]} {'type': 'Point', 'coordinates': [-79.3407, 43.7166]} {'type': 'Point', 'coordinates': [-79.4942, 43.6128]} {'type': 'Point', 'coordinates': [-79.3407, 43.7166]} {'type': 'Point', 'coordinates': [-73.5744, 45.5038]} {'type': 'Point', 'coordinates': [-73.76316, 45.48747]} {'type': 'Point', 'coordinates': [-73.57643863, 45.49237025]} {'type': 'Point', 'coordinates': [-73.66053, 45.50012]} {'type': 'Point', 'coordinates': [-73.57860492, 45.49685204]} {'type': 'Point', 'coordinates': [-73.6182, 45.46964]} {'type': 'Point', 'coordinates': [-73.6254739, 45.4789233]} {'type': 'Point', 'coordinates': [-73.5744, 45.5038]} {'type': 'Point', 'coordinates': [-73.56814, 45.50027]} {'type': 'Point', 'coordinates': [-73.63763883, 45.504715]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.18333, 41.3833]} {'type': 'Point', 'coordinates': [2.1734035, 41.3850639]} {'type': 'Point', 'coordinates': [2.1734035, 41.3850639]} {'type': 'Point', 'coordinates': [2.1734035, 41.3850639]} {'type': 'Point', 'coordinates': [2.15, 41.4]} {'type': 'Point', 'coordinates': [2.1734035, 41.3850639]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.17181, 41.39176]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.12265401, 
41.39560055]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.13518712, 41.38954466]} {'type': 'Point', 'coordinates': [-74.0059413, 40.7127837]} {'type': 'Point', 'coordinates': [-73.99302306, 40.75059998]} {'type': 'Point', 'coordinates': [-74.0059413, 40.7127837]} {'type': 'Point', 'coordinates': [-74.01222238, 40.70231549]} {'type': 'Point', 'coordinates': [-74.00130077, 40.71628954]} {'type': 'Point', 'coordinates': [-73.99636635, 40.75872486]} {'type': 'Point', 'coordinates': [-73.9383, 40.8508]} {'type': 'Point', 'coordinates': [-74.0064, 40.7142]} {'type': 'Point', 'coordinates': [-73.99192296, 40.75039654]} {'type': 'Point', 'coordinates': [-74.00614, 40.74389]} {'type': 'Point', 'coordinates': [-77.00530868, 38.89972008]} {'type': 'Point', 'coordinates': [-87.632496, 41.883222]} {'type': 'Point', 'coordinates': [-87.6297982, 41.8781136]} {'type': 'Point', 'coordinates': [-87.6297982, 41.8781136]} {'type': 'Point', 'coordinates': [-87.65, 41.89972222]} {'type': 'Point', 'coordinates': [-87.6248878, 41.8819021]} {'type': 'Point', 'coordinates': [-87.632496, 41.883222]} {'type': 'Point', 'coordinates': [-87.6297982, 41.8781136]} {'type': 'Point', 'coordinates': [-87.6297982, 41.8781136]} {'type': 'Point', 'coordinates': [-95.3698028, 29.7604267]} {'type': 'Point', 'coordinates': [-95.6588541, 29.7881495]} {'type': 'Point', 'coordinates': [-95.4648744, 29.7279726]} {'type': 'Point', 'coordinates': [-71.0554013, 42.354474]} {'type': 'Point', 'coordinates': [-71.0565, 42.3577]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.1151431, 42.3097365]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0736535, 42.3508018]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.0391693, 42.3357697]} {'type': 'Point', 'coordinates': [-71.0588801, 42.3600825]} {'type': 'Point', 'coordinates': [-71.08436, 42.34739]} {'type': 'Point', 'coordinates': [-71.06319444, 42.36875]} {'type': 'Point', 'coordinates': [151.08525, -33.85528]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.9127706, 45.2997419]} {'type': 'Point', 'coordinates': [-75.6957885, 45.42063]} {'type': 'Point', 'coordinates': [-75.7048576, 45.3458686]} {'type': 'Point', 'coordinates': [-75.90298521, 45.34098257]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.9127706, 45.2997419]} {'type': 'Point', 'coordinates': [-75.7048576, 45.3458686]} {'type': 'Point', 'coordinates': [-75.7048576, 45.3458686]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.7, 45.4167]} {'type': 'Point', 'coordinates': [-75.7, 45.4167]} {'type': 'Point', 'coordinates': [-75.80727974, 
45.33497665]} {'type': 'Point', 'coordinates': [-75.6971931, 45.4215296]} {'type': 'Point', 'coordinates': [-75.6128607, 45.4096358]} {'type': 'Point', 'coordinates': [-75.90298521, 45.34098257]} {'type': 'Point', 'coordinates': [-79.3407, 43.7166]} {'type': 'Point', 'coordinates': [-79.3831843, 43.653226]} {'type': 'Point', 'coordinates': [-79.3831843, 43.653226]} {'type': 'Point', 'coordinates': [-79.42459, 43.64328]} {'type': 'Point', 'coordinates': [-79.3407, 43.7166]} {'type': 'Point', 'coordinates': [-79.3407, 43.7166]} {'type': 'Point', 'coordinates': [-79.3501304, 43.65901871]} {'type': 'Point', 'coordinates': [-79.4006, 43.67596]} {'type': 'Point', 'coordinates': [-79.3831843, 43.653226]} {'type': 'Point', 'coordinates': [-73.5744, 45.5038]} {'type': 'Point', 'coordinates': [-73.567256, 45.5016889]} {'type': 'Point', 'coordinates': [-73.5632, 45.5022]} {'type': 'Point', 'coordinates': [-73.6494733, 45.531899]} {'type': 'Point', 'coordinates': [-73.7497567, 45.4985638]} {'type': 'Point', 'coordinates': [-73.7497567, 45.4985638]} {'type': 'Point', 'coordinates': [-73.567256, 45.5016889]} {'type': 'Point', 'coordinates': [-73.62822712, 45.58633929]} {'type': 'Point', 'coordinates': [-73.5744, 45.5038]} {'type': 'Point', 'coordinates': [-73.567256, 45.5016889]} {'type': 'Point', 'coordinates': [-73.55911016, 45.51559463]} {'type': 'Point', 'coordinates': [-73.5744, 45.5038]} {'type': 'Point', 'coordinates': [-73.5728669, 45.5017156]} {'type': 'Point', 'coordinates': [-73.5744, 45.5038]} {'type': 'Point', 'coordinates': [-73.5744, 45.5038]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.16455, 41.3925976]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.16536, 41.39138]} {'type': 'Point', 'coordinates': [2.17726672, 41.38242521]} {'type': 'Point', 'coordinates': [2.133314, 41.384445]} {'type': 'Point', 'coordinates': [2.133314, 41.384445]} {'type': 'Point', 'coordinates': [2.17345834, 41.38419063]} {'type': 'Point', 'coordinates': [2.17345834, 41.38419063]} {'type': 'Point', 'coordinates': [2.1681, 41.3888]} {'type': 'Point', 'coordinates': [2.16445548, 41.38306596]} {'type': 'Point', 'coordinates': [2.16445548, 41.38306596]} {'type': 'Point', 'coordinates': [2.16188294, 41.41917814]} {'type': 'Point', 'coordinates': [2.17537, 41.3929]} {'type': 'Point', 'coordinates': [2.16977071, 41.37430312]} {'type': 'Point', 'coordinates': [-73.97967924, 40.76257395]} {'type': 'Point', 'coordinates': [-74.0064, 40.7142]} {'type': 'Point', 'coordinates': [-74.0064, 40.7142]} {'type': 'Point', 'coordinates': [-73.98831993, 40.74442425]} {'type': 'Point', 'coordinates': [-73.9201, 40.86094]} {'type': 'Point', 'coordinates': [-73.98158, 40.72628]} {'type': 'Point', 'coordinates': [-73.95990307, 40.78192554]} {'type': 'Point', 'coordinates': [-77.0368707, 38.9071923]} {'type': 'Point', 'coordinates': [-77.04214, 38.90659]} {'type': 'Point', 'coordinates': [-117.93136482, 34.07063067]} {'type': 'Point', 'coordinates': [-117.9214325, 34.0705376]} {'type': 'Point', 'coordinates': [-117.8395, 34.0687]} {'type': 'Point', 'coordinates': [-87.632496, 41.883222]} {'type': 'Point', 'coordinates': [-87.632496, 41.883222]} {'type': 'Point', 'coordinates': [-87.65, 41.89972222]} {'type': 'Point', 'coordinates': [-87.60624, 41.81614]} {'type': 'Point', 'coordinates': [-73.98986, 
40.75467]} {'type': 'Point', 'coordinates': [-74.0064, 40.7142]} {'type': 'Point', 'coordinates': [-73.9675, 40.7779]} {'type': 'Point', 'coordinates': [-73.994508, 40.763186]} {'type': 'Point', 'coordinates': [-74.0064, 40.7142]} {'type': 'Point', 'coordinates': [-73.99185, 40.7315]}
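###Markdown Since the tweets above carry GeoJSON points, the same collection can also be filtered spatially; a hedged sketch (the Toronto coordinates and the 50 km radius are example values only): ###Code
# $geoWithin with $centerSphere takes [longitude, latitude] and a radius in radians (km / 6378.1)
# and, unlike $near, does not require a geospatial index on the field
near_toronto = {
    "coordinates": {
        "$geoWithin": {"$centerSphere": [[-79.38, 43.65], 50 / 6378.1]}
    }
}
for tweet in db_collection_tweets.find(near_toronto).limit(5):
    print(tweet["coordinates"])
###Output _____no_output_____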
Module-02/Strings.ipynb
###Markdown Solutions for strings exercises 1. Create a string for a variable “text”. ###Code text = "I love ST" ###Output _____no_output_____ ###Markdown 2. Find the length of “text”. ###Code len(text) ###Output _____no_output_____ ###Markdown 3. Change the “text” to all upper case letters. ###Code text.upper() ###Output _____no_output_____ ###Markdown 4. Capitalize the “text”. ###Code text.capitalize() ###Output _____no_output_____ ###Markdown 5. Create another string and set it as the variable “text02”. Add “text” with “text02” and turn it into one sentence ###Code text02 = 'so much' text + text02 text + " " + text02 + "." ###Output _____no_output_____ ###Markdown 6. Multiply “text” with 5. ###Code text * 5 ###Output _____no_output_____ ###Markdown 7. Access text[0] and text[2]. ###Code text[0] text[2] ###Output _____no_output_____
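###Markdown The concatenation in exercise 5 can also be written with an f-string, which avoids gluing the pieces together by hand: ###Code
f"{text} {text02}."
###Output _____no_output_____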
07_01.ipynb
###Markdown ###Code import numpy as np import random !git clone https://github.com/lmcanavals/acomplex.git import acomplex.graphstuff as gs def randomG(n, m): G = [] numvertices = [0]*n for _ in range(m): i = random.randint(0, n-1) numvertices[i] += 1 for i in range(n): G.append(random.sample(range(n), numvertices[i])) return G G = randomG(20, 35) G gs.adjlShow(G, directed=True, layout="dot") ###Output _____no_output_____ ###Markdown Exhaustivo 1 ###Code def exhaus(G): n = len(G) CFC = [-1]*n for x in range(n): CFC[x] = x for y in G[x]: if CFC[x] != CFC[y]: for z in range(n): if CFC[z] == CFC[y]: CFC[z] = CFC[x] return CFC ###Output _____no_output_____ ###Markdown Kosaraju ###Code def reverseGraph(G): n = len(G) Grev = [[] for _ in range(n)] for u in range(n): for v in G[u]: Grev[v].append(u) return Grev def dfs(G, s, lst, visited): stack = [[s, False]] while stack: elem = stack[-1] u, ok = elem if ok: if u not in lst: lst.append(u) stack.pop() continue elem[1] = True if visited[u]: continue visited[u] = True for v in reversed(G[u]): if not visited[v]: stack.append([v, False]) def dfs(G, u, lst, visited): visited[u] = True for v in G[u]: if not visited[v]: dfs(G, v, lst, visited) lst.append(u) def kosaraju(G): n = len(G) visited = [False]*n f = [] Grev = reverseGraph(G) # step 1 for u in range(n): # step 2 if not visited[u]: dfs(Grev, u, f, visited) visited = [False]*n # step 3 scc = [] for u in reversed(f): if not visited[u]: cc = [] dfs(G, u, cc, visited) scc.append(cc) return scc kosaraju(G) kosaraju(G) exhaus(G) ###Output _____no_output_____ ###Markdown Tiempo empíricamente ###Code %time exhaus(G) %time kosaraju(G) %timeit exhaus(G) %timeit kosaraju(G) import time def test(f, params, n): t = [0]*n for i in range(n): start = time.time() f(*params) t[i] = time.time() - start return np.median(t) * 1e6, np.min(t) * 1e6 test(exhaus, [G], 10) test(exhaus, [G], 10000) test(kosaraju, [G], 10000) Gx = randomG(1000, 1200) test(exhaus, [Gx], 100) test(kosaraju, [Gx], 100) Gy = randomG(10_000, 12_000) test(kosaraju, [Gy], 10) ###Output _____no_output_____ ###Markdown SCC counter ###Code def sccCounter(scc): counts = dict() for cc in scc: l = len(cc) if not l in counts: counts[l] = 0 counts[l] += 1 for l in counts: if l == 1: print(f"There are {counts[l]} single SCC") else: print(f"There are {counts[l]} SCC of size {l}") sccCounter(kosaraju(G)) G2 = randomG(10_000, 5000) sccCounter(kosaraju(G2)) G3 = randomG(10_000, 10_000) sccCounter(kosaraju(G3)) G4 = randomG(4_000, 10_000) sccCounter(kosaraju(G4)) def exhaus(G): n = len(G) CFC = [-1]*n for x in range(n): CFC[x] = x for y in G[x]: if CFC[x] != CFC[y]: for z in range(n): if CFC[z] == CFC[y]: CFC[z] = CFC[x] return CFC ###Output _____no_output_____
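###Markdown A quick way to gain confidence in `kosaraju` is to cross-check it against `networkx` (assumed to be installed in this environment) on a fresh random graph; both should report exactly the same strongly connected components. ###Code
import networkx as nx

# Build the same adjacency-list graph as a networkx DiGraph and compare SCCs
Gc = randomG(200, 400)
nxg = nx.DiGraph()
nxg.add_nodes_from(range(len(Gc)))
nxg.add_edges_from((u, v) for u in range(len(Gc)) for v in Gc[u])

ours = sorted(sorted(cc) for cc in kosaraju(Gc))
theirs = sorted(sorted(cc) for cc in nx.strongly_connected_components(nxg))
print(ours == theirs)  # expected: True
###Output _____no_output_____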
blog/work stage/aws.ipynb
###Markdown Lyrics Fetch ###Code artist_name = 'Ed Sheeran'.replace(' ', '%20') song = 'Perfect'.replace(' ', '%20') !curl --request GET -o lyrics_raw.json "https://api.lyrics.ovh/v1/$artist_name/$song" !jq '.lyrics' < lyrics_raw.json > lyrics.json lyrics = open("lyrics.json").read() import re true_lyric = re.sub('\s+', ' ', lyrics.replace('\\r','').replace('\\\'','\'').replace('\\n', ' ').replace('\n','').replace('"','')) import boto3 import s3fs #!pip install tscribe import tscribe import json import urllib.request s3 = boto3.client('s3') filename = 'Perfect_normal.mp3' bucket_name = 'notebook-dylan' s3.upload_file(filename, bucket_name, filename) fs = s3fs.S3FileSystem() file = fs.open('s3://{}/{}'.format(bucket_name, filename)) transcribe = boto3.client('transcribe') transcribe.start_transcription_job( TranscriptionJobName= "Perfect_normal", Media={'MediaFileUri': 's3://{}/{}'.format(bucket_name, filename)}, MediaFormat='mp3', LanguageCode='en-US' ) status = transcribe.get_transcription_job(TranscriptionJobName='Perfect_normal') url = status['TranscriptionJob']['Transcript']['TranscriptFileUri'] url data = urllib.request.urlopen(url).read().decode() # parse json object obj = json.loads(data) trans_lyric = obj['results']['transcripts'][0]['transcript'] print(trans_lyric) import numpy as np def levenshtein_ratio_and_distance(s, t, ratio_calc = False): """ levenshtein_ratio_and_distance: Calculates levenshtein distance between two strings. If ratio_calc = True, the function computes the levenshtein distance ratio of similarity between two strings For all i and j, distance[i,j] will contain the Levenshtein distance between the first i characters of s and the first j characters of t """ # Initialize matrix of zeros rows = len(s)+1 cols = len(t)+1 distance = np.zeros((rows,cols),dtype = int) # Populate matrix of zeros with the indeces of each character of both strings for i in range(1, rows): for k in range(1,cols): distance[i][0] = i distance[0][k] = k # Iterate over the matrix to compute the cost of deletions,insertions and/or substitutions for col in range(1, cols): for row in range(1, rows): if s[row-1] == t[col-1]: cost = 0 # If the characters are the same in the two strings in a given position [i,j] then the cost is 0 else: # In order to align the results with those of the Python Levenshtein package, if we choose to calculate the ratio # the cost of a substitution is 2. If we calculate just distance, then the cost of a substitution is 1. if ratio_calc == True: cost = 2 else: cost = 1 distance[row][col] = min(distance[row-1][col] + 1, # Cost of deletions distance[row][col-1] + 1, # Cost of insertions distance[row-1][col-1] + cost) # Cost of substitutions if ratio_calc == True: # Computation of the Levenshtein Distance Ratio Ratio = ((len(s)+len(t)) - distance[row][col]) / (len(s)+len(t)) return Ratio else: # print(distance) # Uncomment if you want to see the matrix showing how the algorithm computes the cost of deletions, # insertions and/or substitutions # This is the minimum number of edits needed to convert string a to string b return "The strings are {} edits away".format(distance[row][col]) #true_lyric = "Wise men say Only fools rush in But I can't help falling in love with you Shall I stay? Would it be a sin If I can't help falling in love with you? 
Like a river flows Surely to the sea Darling, so it goes Some things are meant to be Take my hand Take my whole life too For I can't help falling in love with you Like a river flows Surely to the sea Darling, so it goes Some things are meant to be Take my hand Take my whole life too For I can't help falling in love with you For I can't help falling in love with you" true_lyric = true_lyric.replace('.', '').replace(',', '').replace('?', '').replace('!', '').lower() trans_lyric = trans_lyric.replace('.', '').replace(',', '').replace('?', '').replace('!', '').lower() true_lyric trans_lyric Distance = levenshtein_ratio_and_distance(trans_lyric,true_lyric) print(Distance) Ratio = levenshtein_ratio_and_distance(trans_lyric,true_lyric,ratio_calc = True) # do not count upper/lower case as error print(Ratio) #!pip install audio_metadata import audio_metadata metadata = audio_metadata.load('Perfect_normal.mp3') metadata metadata['streaminfo']['bitrate'] metadata['streaminfo']['sample_rate'] ###Output _____no_output_____
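###Markdown Before trusting the similarity score on long lyrics, it helps to sanity-check `levenshtein_ratio_and_distance` on a pair with a well-known answer ('kitten' vs 'sitting' needs 3 edits): ###Code
print(levenshtein_ratio_and_distance('kitten', 'sitting'))                   # The strings are 3 edits away
print(levenshtein_ratio_and_distance('kitten', 'sitting', ratio_calc=True))  # ~0.615, since substitutions cost 2 in the ratio
###Output _____no_output_____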
LFPy-example-04.ipynb
###Markdown Example plot for LFPy: Hay et al. (2011) spike waveformsRun Hay et al. (2011) layer 5b pyramidal cell model, generating and plotting asingle action potential and corresponding extracellular potentials (spikes)Copyright (C) 2017 Computational Neuroscience Group, NMBU.This program is free software: you can redistribute it and/or modifyit under the terms of the GNU General Public License as published bythe Free Software Foundation, either version 3 of the License, or(at your option) any later version.This program is distributed in the hope that it will be useful,but WITHOUT ANY WARRANTY; without even the implied warranty ofMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See theGNU General Public License for more details. ###Code import numpy as np import sys from urllib.request import urlopen import ssl from warnings import warn import zipfile import os import matplotlib.pyplot as plt from matplotlib.collections import LineCollection import LFPy import neuron ###Output _____no_output_____ ###Markdown Fetch Hay et al. 2011 model files ###Code if not os.path.isfile('L5bPCmodelsEH/morphologies/cell1.asc'): #get the model files: u = urlopen('http://senselab.med.yale.edu/ModelDB/eavBinDown.asp?o=139653&a=23&mime=application/zip', context=ssl._create_unverified_context()) localFile = open('L5bPCmodelsEH.zip', 'wb') localFile.write(u.read()) localFile.close() #unzip: myzip = zipfile.ZipFile('L5bPCmodelsEH.zip', 'r') myzip.extractall('.') myzip.close() #compile mod files every time, because of incompatibility with Mainen96 files: if "win32" in sys.platform: pth = "L5bPCmodelsEH/mod/" warn("no autompile of NMODL (.mod) files on Windows.\n" + "Run mknrndll from NEURON bash in the folder L5bPCmodelsEH/mod and rerun example script") if not pth in neuron.nrn_dll_loaded: neuron.h.nrn_load_dll(pth+"nrnmech.dll") neuron.nrn_dll_loaded.append(pth) else: os.system(''' cd L5bPCmodelsEH/mod/ nrnivmodl ''') neuron.load_mechanisms('L5bPCmodelsEH/mod/') ###Output _____no_output_____ ###Markdown Simulation parameters: ###Code # define cell parameters used as input to cell-class cellParameters = { 'morphology' : 'L5bPCmodelsEH/morphologies/cell1.asc', 'templatefile' : ['L5bPCmodelsEH/models/L5PCbiophys3.hoc', 'L5bPCmodelsEH/models/L5PCtemplate.hoc'], 'templatename' : 'L5PCtemplate', 'templateargs' : 'L5bPCmodelsEH/morphologies/cell1.asc', 'passive' : False, 'nsegs_method' : None, 'dt' : 2**-6, 'tstart' : -159, 'tstop' : 10, 'v_init' : -60, 'celsius': 34, 'pt3d' : True, } # Generate the grid in xz-plane over which we calculate local field potentials X, Y, Z = np.mgrid[-4:5:1, 1:2, -4:5:1] * 20 # define parameters for extracellular recording electrode, using optional method electrodeParameters = { 'sigma' : 0.3, # extracellular conductivity 'x' : X.flatten(), # x,y,z-coordinates of contacts 'y' : Y.flatten(), 'z' : Z.flatten(), 'method' : 'root_as_point', #sphere source soma segment 'N' : np.array([[0, 1, 0]]*X.size), #surface normals 'r' : 2.5, # contact site radius 'n' : 20, # datapoints for averaging } ###Output _____no_output_____ ###Markdown Main simulation procedure, setting up extracellular electrode, cell, synapse: ###Code # delete old sections from NEURON namespace LFPy.cell.neuron.h("forall delete_section()") # Initialize cell instance, using the LFPy.Cell class cell = LFPy.TemplateCell(**cellParameters) cell.set_rotation(x=4.729, y=-3.166) # Override passive reversal potential, AP is generated for sec in cell.allseclist: for seg in sec: seg.e_pas = -59.5 # create extracellular electrode 
object for LFPs on grid electrode = LFPy.RecExtElectrode(cell=cell, **electrodeParameters) # perform NEURON simulation # Simulated results saved as attribute `data` in the RecExtElectrode instance cell.simulate(probes=[electrode]) ###Output _____no_output_____ ###Markdown Plot output ###Code def plotstuff(cell, electrode): '''plotting''' fig = plt.figure(dpi=160) ax1 = fig.add_axes([0.05, 0.1, 0.55, 0.9], frameon=False) cax = fig.add_axes([0.05, 0.115, 0.55, 0.015]) ax1.plot(electrode.x, electrode.z, 'o', markersize=1, color='k', zorder=0) #normalize to min peak LFPmin = electrode.data.min(axis=1) LFPnorm = -(electrode.data.T / LFPmin).T i = 0 zips = [] for x in LFPnorm: zips.append(list(zip(cell.tvec*1.6 + electrode.x[i] + 2, x*12 + electrode.z[i]))) i += 1 line_segments = LineCollection(zips, linewidths = (1), linestyles = 'solid', cmap='nipy_spectral', zorder=1, rasterized=False) line_segments.set_array(np.log10(-LFPmin)) ax1.add_collection(line_segments) axcb = fig.colorbar(line_segments, cax=cax, orientation='horizontal') axcb.outline.set_visible(False) xticklabels = np.array([-0.1 , -0.05 , -0.02 , -0.01 , -0.005, -0.002]) xticks = np.log10(-xticklabels) axcb.set_ticks(xticks) axcb.set_ticklabels(np.round(-10**xticks, decimals=3)) axcb.set_label('spike amplitude (mV)', va='center') ax1.plot([22, 38], [100, 100], color='k', lw = 1) ax1.text(22, 102, '10 ms') ax1.plot([60, 80], [100, 100], color='k', lw = 1) ax1.text(60, 102, '20 $\mu$m') ax1.set_xticks([]) ax1.set_yticks([]) axis = ax1.axis(ax1.axis('equal')) ax1.set_xlim(axis[0]*1.02, axis[1]*1.02) # plot morphology zips = [] for x, z in cell.get_pt3d_polygons(): zips.append(list(zip(x, z))) from matplotlib.collections import PolyCollection polycol = PolyCollection(zips, edgecolors='none', facecolors='gray', zorder=-1, rasterized=False) ax1.add_collection(polycol) ax1.text(-0.05, 0.95, 'a', horizontalalignment='center', verticalalignment='center', fontsize=16, fontweight='demibold', transform=ax1.transAxes) # plot extracellular spike in detail ind = np.where(electrode.data == electrode.data.min())[0][0] timeind = (cell.tvec >= 0) & (cell.tvec <= 10) xticks = np.arange(10) xticklabels = xticks LFPtrace = electrode.data[ind, ] vline0 = cell.tvec[cell.somav==cell.somav.max()] vline1 = cell.tvec[LFPtrace == LFPtrace.min()] vline2 = cell.tvec[LFPtrace == LFPtrace.max()] # plot asterix to link trace in (a) and (c) ax1.plot(electrode.x[ind], electrode.z[ind], '*', markersize=5, markeredgecolor='none', markerfacecolor='k') ax2 = fig.add_axes([0.75, 0.6, 0.2, 0.35], frameon=True) ax2.plot(cell.tvec[timeind], cell.somav[timeind], lw=1, color='k', clip_on=False) ax2.vlines(vline0, cell.somav.min(), cell.somav.max(), 'k', 'dashed', lw=0.25) ax2.vlines(vline1, cell.somav.min(), cell.somav.max(), 'k', 'dashdot', lw=0.25) ax2.vlines(vline2, cell.somav.min(), cell.somav.max(), 'k', 'dotted', lw=0.25) ax2.set_xticks(xticks) ax2.set_xticklabels(xticks) ax2.axis(ax2.axis('tight')) ax2.set_ylabel(r'$V_\mathrm{soma}(t)$ (mV)') for loc, spine in ax2.spines.items(): if loc in ['right', 'top']: spine.set_color('none') ax2.xaxis.set_ticks_position('bottom') ax2.yaxis.set_ticks_position('left') ax2.set_title('somatic potential', va='center') ax2.text(-0.3, 1.0, 'b', horizontalalignment='center', verticalalignment='center', fontsize=16, fontweight='demibold', transform=ax2.transAxes) ax3 = fig.add_axes([0.75, 0.1, 0.2, 0.35], frameon=True) ax3.plot(cell.tvec[timeind], LFPtrace[timeind], lw=1, color='k', clip_on=False) ax3.plot(0.5, 0, '*', markersize=5, 
markeredgecolor='none', markerfacecolor='k') ax3.vlines(vline0, LFPtrace.min(), LFPtrace.max(), 'k', 'dashed', lw=0.25) ax3.vlines(vline1, LFPtrace.min(), LFPtrace.max(), 'k', 'dashdot', lw=0.25) ax3.vlines(vline2, LFPtrace.min(), LFPtrace.max(), 'k', 'dotted', lw=0.25) ax3.set_xticks(xticks) ax3.set_xticklabels(xticks) ax3.axis(ax3.axis('tight')) for loc, spine in ax3.spines.items(): if loc in ['right', 'top']: spine.set_color('none') ax3.xaxis.set_ticks_position('bottom') ax3.yaxis.set_ticks_position('left') ax3.set_xlabel(r'$t$ (ms)', va='center') ax3.set_ylabel(r'$\Phi(\mathbf{r},t)$ (mV)') ax3.set_title('extracellular spike', va='center') ax3.text(-0.3, 1.0, 'c', horizontalalignment='center', verticalalignment='center', fontsize=16, fontweight='demibold', transform=ax3.transAxes) return fig # Plotting of simulation results: fig = plotstuff(cell, electrode) # Optional: save image # fig.savefig('LFPy-example-4.pdf') ###Output _____no_output_____
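###Markdown For orientation, the extracellular potential computed by `RecExtElectrode` is, in the simplest point-source picture, a sum over the transmembrane currents $I_n(t)$ of the $n$ compartments, each weighted by its distance to the contact: $$\phi(\mathbf{r}, t) = \frac{1}{4 \pi \sigma} \sum_n \frac{I_n(t)}{|\mathbf{r} - \mathbf{r}_n|}$$ with $\sigma$ the extracellular conductivity set in `electrodeParameters` (0.3 S/m above). The `root_as_point` and line-source methods used by LFPy refine how each compartment contributes, but the $1/r$ fall-off and the $1/\sigma$ scaling are the intuition behind the decaying spike amplitudes plotted on the grid.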
notebooks/2018-6-27-Virtebi_Algorithm_and_Markov_Chain_Part_1.ipynb
###Markdown ---layout: posttitle: Virtebi Algorithm and Markov Chain - Part 1--- What is The Problem? To find the **most probable** sequence of states given an observation (or you can call it a result). This **most probable** sequence of states is also called the **Virtebi Path**, sounds cool eh?. Hidden Markov Model and Markov ChainWait wait wait wait... now there is also [Markov Chain](https://en.wikipedia.org/wiki/Markov_chain) instead of [Hidden Markov Model](https://en.wikipedia.org/wiki/Hidden_Markov_model) ?! Yes! looking at definitions at wikipedia, we could summarize that:- Hidden Markov Model: has states that did not directly visible to observer- Markov Chain : More general term, or you can call this is the parent for Hidden Markov Model.n.b: throughout this post, I will use *Hidden Markov Model* and *Markov Chain* interchangeably. The Components of Markov ChainA Markov Chain, in the simplest form, is essentially a [graph](https://en.wikipedia.org/wiki/Graph_(abstract_data_type)) that has:1. Initial States2. Transistion Probabilities3. Emission Probabilities A Toy Example: Gandalf's Hidden Markov Model vs BalrogIn the lord of the ring, first movie/book (The Fellowship of the Ring), there is a wizard named Gandalf fighting A demon named Balrog In fighting Balrog, Gandalf could have 3 possible actions: **magic**, **defend**, and **run**. How do Gandalf decides which action to take? It depends on Balrog states! say Balrog can have 3 possible states: **Far_Attack**, **Move_To_Bridge**, **Run**. Says, you were Frodo, and watching Gandalf's fight, you can observe what actions did he took, but you want to know what state that Gandalf saw so that he has taken those action sequence. Assuming we know the components of Hidden Markov Model ###Code # I will use numpy array to shorten the code import numpy as np obs = ('defend', 'defend', 'magic') possible_actions = ('magic', 'defend', 'run') states = ('Far_Attack', 'Move_To_Bridge', 'Run') start_p = {'Far_Attack': 0.5, 'Move_To_Bridge': 0.4, 'Run': 0.1} trans_p = { 'Far_Attack': {'Far_Attack': 0.4, 'Move_To_Bridge': 0.55, 'Run': 0.05}, 'Move_To_Bridge': {'Far_Attack': 0.9, 'Move_To_Bridge': 0.05, 'Run': 0.05}, 'Run': {'Far_Attack': 0.05, 'Move_To_Bridge': 0.05, 'Run': 0.9}, } emit_p = { 'Far_Attack' : {'magic': 0.05, 'defend': 0.9, 'run': 0.05}, 'Move_To_Bridge' : {'magic': 0.5, 'defend': 0.4, 'run': 0.1}, 'Run' : {'magic': 0.1, 'defend': 0, 'run': 0.9} } import graphviz as gz import pygraphviz as pgv from IPython.display import Image def draw(dot): return Image(pgv.AGraph(dot).draw(format='png', prog='dot')) graph = gz.Digraph() graph.node('START', 'START', shape='doublecircle', color='blue') for state in states: graph.node(state, state) for initial_transision in start_p: weight = start_p[initial_transision] graph.edge('START', initial_transision, label='{}'.format(weight), weight='{}'.format(weight), penwidth='{}'.format(max(weight, 0.3) * 2)) for transision_state_from in trans_p: transision = trans_p[transision_state_from] for transision_state_to in transision: weight = transision[transision_state_to] graph.edge(transision_state_from, transision_state_to, label='{}'.format(weight), weight='{}'.format(weight), penwidth='{}'.format(max(weight, 0.3) * 2)) print('Markov Chain Representation of Gandalf') draw(graph.source) graph = gz.Digraph() graph.node('START', 'START', shape='doublecircle', color='blue') for state in states: graph.node(state, state) for action in possible_actions: graph.node(action, action, shape='rectangle', color='red') 
for initial_transision in start_p: weight = start_p[initial_transision] graph.edge('START', initial_transision, label='{}'.format(weight), weight='{}'.format(weight), penwidth='{}'.format(max(weight, 0.3) * 2)) for transision_state_from in trans_p: transision = trans_p[transision_state_from] for transision_state_to in transision: weight = transision[transision_state_to] graph.edge(transision_state_from, transision_state_to, label='{}'.format(weight), weight='{}'.format(weight), penwidth='{}'.format(max(weight, 0.3) * 2)) for emission_state_from in emit_p: emission = emit_p[emission_state_from] for action in emission: weight = emission[action] graph.edge(emission_state_from, action, label='{}'.format(weight), weight='{}'.format(weight), arrowsize='{}'.format(weight)) print('Full Markov Chain Representation of Gandalf') draw(graph.source) ###Output Full Markov Chain Representation of Gandalf ###Markdown Naive Approach to Answer This ProblemSo how to answer "What was Gandalf sees?" when he takes action "defend, defend, magic" ? we need to find the most probable sequence! so how do we do it? Well we can try to generate all the combination and count the probabilities of that sequence happening, given the observation / action taken in this case...$$P (state\_sequence | action\_sequence) = (initial\_probability * emission_{1}) * \prod\limits_{i=2}^{I}{transition_{i-1 -> i} * emission_{i}}$$$i$ is the sequence indexFor example: our observation is: **defend, defend, magic**then, the probability of Balrog in state "Far_Attack", "Far_Attack", "Far_Attack" is :$$ (0.5 * 0.9) * (0.4 * 0.9) * (0.4 * 0.05) = 0.00324 $$So we draw a table that looks like:| Seq1 | Seq2 | Seq3 | Probability ||------|------|------|-------------|| Far_Attack | Far_Attack | Far_Attack | 0.00324 || Far_Attack | Far_Attack | Run | 0,00081 |And it's goes on until all the sequence possibility is exhausted, but lets generate the naive algorithm to do that! ###Code def generate_combination(element_space, n): """ helper function to generate sequence combination from element space for n sequence """ if n <= 1: return element_space else: combination = list() for el1 in element_space: x = list() for el2 in generate_combination( element_space, n - 1): # flatten the list by # appending el1 into the first element of el2 if isinstance(el2, list): x = [el1] + el2 else: x = [el1, el2] combination.append(x) return combination def naive(state_space, initial_probabilities, observations, transition_matrix, emission_matrix): """ Find the most probable sequence in naive manner. 
Compute all the probabilities for all sequences then find the sequence that has maximum probability of happening """ # generate sequences of state all_possible_state_seq = generate_combination( state_space, len(observations)) # calculate each sequence probabilities, given the observation: all_seq_prob = list() for possible_state_seq in all_possible_state_seq: p = 1 for state_i in range(len(possible_state_seq)): current_state = possible_state_seq[state_i] current_observation = observations[state_i] # use initiate state probability # if it is the first sequence: # otherwise use transition, given previous state if state_i == 0: p1 = initial_probabilities[current_state] else: p1 = transition_matrix[ prev_state ][current_state] # find the P(state|observation) #, example: P(Healthy|cold) p2 = emission_matrix[ current_state ][current_observation] prev_state = current_state # calculate product of # P(state|prev_state) * P(state|observation) p *= p1 * p2 all_seq_prob.append(p) max_index = np.argmax(all_seq_prob) return (all_possible_state_seq[max_index], all_seq_prob[max_index]) seq, p = naive(states, start_p, obs, trans_p, emit_p) print('The most probable state of Balrog, when Gandalf has taken the action ("defend, defend, magic") is:') print('{} with probability of {}'.format(seq, p)) %%timeit naive(states, start_p, obs, trans_p, emit_p) ###Output 68.4 µs ± 415 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) ###Markdown A Smarter Move: Virtebi Algorithm ###Code def virtebi(states, initial_probabilities, observations, transition_matrix, emission_matrix): V = [{}] # fill in the initial probability for state in states: V[0][state] = { 'prob': initial_probabilities[state] * emission_matrix[state][observations[0]], 'prev_state': None } # now instead of re-calculating the probabilities, we could use max of previous value for i in range(1, len(obs)): V.append({}) for state in states: max_prev_prob = 0 for prev_state in states: prev_prob = V[i-1][prev_state]['prob'] * transition_matrix[prev_state][state] if max_prev_prob < prev_prob: max_prev_prob = prev_prob max_prev_state = prev_state V[i][state] = { 'prob': max_prev_prob * emission_matrix[state][obs[i]], 'prev_state': max_prev_state } # after we have constructed the markov chain, # we can then find the most probable sequence of states through backtrack max_val = None max_prob = 0 max_state = None for st in V[-1]: val = V[-1][st] if max_prob < val['prob']: max_state = st max_val = val max_prob = max_val['prob'] # do the backtrack max_sequence = list() prev_st = max_val['prev_state'] max_sequence.append(prev_st) max_sequence.append(max_state) for i in range(len(obs) - 2, 0, -1): prev_st = V[i][prev_st]['prev_state'] max_sequence.insert(0, prev_st) return (max_sequence, max_prob, V) max_seq, p_seq, markov_chain = virtebi(states, start_p, obs, trans_p, emit_p) print('The most probable state of Balrog, when Gandalf has taken the action ("defend, defend, magic") is:') print('{} with probability of {}'.format(max_seq, p_seq)) ###Output The most probable state of Balrog, when Gandalf has taken the action ("defend, defend, magic") is: ['Far_Attack', 'Far_Attack', 'Move_To_Bridge'] with probability of 0.04455000000000001 ###Markdown Wow! it produces the same result! Now I will visualize how this model works and which path that it takes. 
###Code graph = gz.Digraph() graph.attr(rankdir='LR') for i in range(len(markov_chain)): for state_key in markov_chain[i]: graph.node( '{}_{}'.format(state_key, i), '{} \nP={:.3f}'.format(state_key, markov_chain[i][state_key]['prob']), ) for i in range(1, len(markov_chain)): for state_key in markov_chain[i]: for prev_state in states: color = 'black' if max_seq[i] == state_key and max_seq[i-1] == prev_state: color = 'red' graph.edge( '{}_{}'.format(prev_state, i-1), '{}_{}'.format(state_key, i), color=color ) draw(graph.source) ###Output _____no_output_____ ###Markdown In the figure above, the red arrow is the sequence that the model took. Determining the sequence is actually a *backtrack* process: we start from the final state with the maximum likelihood/probability and move back through the states that led to it. And now, last but not least, I should show you the improvement over the **naive** algorithm ###Code %%timeit max_seq, p_seq, markov_chain = virtebi(states, start_p, obs, trans_p, emit_p) ###Output 9.44 µs ± 108 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
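###Markdown Why is it so much faster? The naive approach enumerates every possible state sequence, so its cost grows exponentially with the number of observations: roughly $O(S^{T})$ sequence evaluations for $S$ states and $T$ observations. The Viterbi algorithm only keeps, for each time step and each state, the single best path that reaches that state, so its cost is $O(T \cdot S^{2})$. With only 3 states and 3 observations the gap is already visible in the timings above (~68 µs vs ~9 µs), and it widens dramatically as the observation sequence gets longer.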
tutorials/robot-marbles-part-5/robot-marbles-part-5.ipynb
###Markdown cadCAD Tutorials: The Robot and the Marbles Part 5 - NetworksTo expand upon our previous examples, we will introduce the concept of using a graph network object that is updated during each state update. The ability to essential embed a graph 'database' into a state is a game changer for scalability, allowing increased complexity with multiple agents or components is represented, easily updated. Below, building upon our previous examples, we will represent the Robots and Marbles example with n boxes, and a variable number of marbles. Behavior and Mechanisms:* A network of robotic arms is capable of taking a marble from one of their boxes and dropping it into the other one. * Each robotic arm in the network only controls two boxes and they act by moving a marble from one box to the other.* Each robotic arm is programmed to take one marble at a time from the box containing the most significant number of marbles and drop it in the other box. It repeats that process until the boxes contain an equal number of marbles.* For our analysis of this system, suppose we are only interested in monitoring the number of marbles in only their two boxes. ###Code from cadCAD.engine import ExecutionMode, ExecutionContext, Executor from cadCAD.configuration import Configuration import networkx as nx import matplotlib.pyplot as plt import numpy as np import pandas as pd #from copy import deepcopy %matplotlib inline # define global variables T = 25 #iterations in our simulation boxes=5 #number of boxes in our network m= 2 #for barabasi graph type number of edges is (n-2)*m ###Output _____no_output_____ ###Markdown We create a [Barabási–Albert](https://en.wikipedia.org/wiki/Barab%C3%A1si%E2%80%93Albert_model) graph and then fill the 5 boxes with between 1 and 10 balls. You can create as many different nodes or types of nodes as needed ###Code # create graph object with the number of boxes as nodes network = nx.barabasi_albert_graph(boxes, m) # add balls to box nodes for node in network.nodes: network.nodes[node]['balls'] = np.random.randint(1,10) ###Output _____no_output_____ ###Markdown Now we will plot the network of boxes and with their labels showing how many balls are in each box. ###Code # plot of boxes and balls nx.draw_kamada_kawai(network,labels=nx.get_node_attributes(network,'balls')) # we initialize the cadCAD state as a network object initial_conditions = {'network':network} #Behavior: node by edge dimensional operator #input the states of the boxes output the deltas along the edges # We specify the robotic networks logic in a Policy/Behavior Function # unlike previous examples our policy controls a vector valued action, defined over the edges of our network def robotic_network(params, step, sL, s): network = s['network'] delta_balls = {} for e in network.edges: src = e[0] dst = e[1] #transfer one ball across the edge in the direction of more balls to less delta_balls[e] = np.sign(network.nodes[src]['balls']-network.nodes[dst]['balls']) return({'delta': delta_balls}) #mechanism: edge by node dimensional operator #input the deltas along the edges and update the boxes # We make the state update functions less "intelligent", # ie. 
they simply add the number of marbles specified in _input # (which, per the policy function definition, may be negative) def update_network(params, step, sL, s, _input): network = s['network'] #deepcopy(s['network']) delta_balls = _input['delta'] for e in network.edges: move_ball = delta_balls[e] src = e[0] dst = e[1] if (network.nodes[src]['balls'] >= move_ball) and (network.nodes[dst]['balls'] >= -move_ball): network.nodes[src]['balls'] = network.nodes[src]['balls']-move_ball network.nodes[dst]['balls'] = network.nodes[dst]['balls']+move_ball return ('network', network) # wire up the mechanisms and states partial_state_update_blocks = [ { 'policies': { # The following policy functions will be evaluated and their returns will be passed to the state update functions 'action': robotic_network }, 'variables': { # The following state variables will be updated simultaneously 'network': update_network } } ] # Settings of general simulation parameters, unrelated to the system itself # `T` is a range with the number of discrete units of time the simulation will run for; # `N` is the number of times the simulation will be run (Monte Carlo runs) simulation_parameters = { 'T': range(T), 'N': 1, 'M': {} } # The configurations above are then packaged into a `Configuration` object config = Configuration(initial_state=initial_conditions, #dict containing variable names and initial values partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions sim_config=simulation_parameters #dict containing simulation parameters ) # Run the simulations exec_mode = ExecutionMode() exec_context = ExecutionContext(exec_mode.single_proc) executor = Executor(exec_context, [config]) # Pass the configuration object inside an array raw_result, tensor = executor.execute() # The `execute()` method returns a tuple; its first elements contains the raw results df = pd.DataFrame(raw_result) ###Output single_proc: [<cadCAD.configuration.Configuration object at 0x7f736c140940>] [<cadCAD.configuration.Configuration object at 0x7f736c140940>] ###Markdown We create some helper functions to extract the networkx graph object from the Pandas dataframe and plot it. ###Code #NetworkX helper functions def get_nodes(g): return [node for node in g.nodes if g.nodes[node]] def pad(vec, length,fill=True): if fill: padded = np.zeros(length,) else: padded = np.empty(length,) padded[:] = np.nan for i in range(len(vec)): padded[i]= vec[i] return padded def make2D(key, data, fill=False): maxL = data[key].apply(len).max() newkey = 'padded_'+key data[newkey] = data[key].apply(lambda x: pad(x,maxL,fill)) reshaped = np.array([a for a in data[newkey].values]) return reshaped ###Output _____no_output_____ ###Markdown Using our helper function get_nodes() we pull out the boxes ball quantity and save it to a new dataframe column. ###Code df['Balls'] = df.network.apply(lambda g: np.array([g.nodes[j]['balls'] for j in get_nodes(g)])) ###Output _____no_output_____ ###Markdown Next we will plot the number of balls in each box over the simulation time period. We can see an oscillation occurs never reaching an equilibrium due to the uneven nature of the boxes and balls. 
###Code plt.plot(df.timestep,make2D('Balls', df)) plt.title('Number of balls in boxes over simulation period') plt.ylabel('Qty') plt.xlabel('Iteration') plt.legend(['Box #'+str(node) for node in range(boxes)], ncol = 2) ###Output _____no_output_____ ###Markdown cadCAD Tutorials: The Robot and the Marbles Part 5 - NetworksTo expand upon our previous examples, we will introduce the concept of using a graph network object that is updated during each state update. The ability to essential embed a graph 'database' into a state is a game changer for scalability, allowing increased complexity with multiple agents or components is represented, easily updated. Below, building upon our previous examples, we will represent the Robots and Marbles example with n boxes, and a variable number of marbles. Behavior and Mechanisms:* A network of robotic arms is capable of taking a marble from their one of their boxes and dropping it into the other one. * Each robotic arm in the network only controls two boxes and they act by moving a marble from one box to the other.* Each robotic arm is programmed to take one marble at a time from the box containing the most significant number of marbles and drop it in the other box. It repeats that process until the boxes contain an equal number of marbles.* For our analysis of this system, suppose we are only interested in monitoring the number of marbles in only their two boxes. ###Code from cadCAD.engine import ExecutionMode, ExecutionContext, Executor from cadCAD.configuration.utils import config_sim from cadCAD.configuration import Experiment from cadCAD import configs import networkx as nx import matplotlib.pyplot as plt import numpy as np import pandas as pd #from copy import deepcopy %matplotlib inline # define global variables T = 25 #iterations in our simulation boxes=5 #number of boxes in our network m= 2 #for barabasi graph type number of edges is (n-2)*m ###Output _____no_output_____ ###Markdown We create a [Barabási–Albert](https://en.wikipedia.org/wiki/Barab%C3%A1si%E2%80%93Albert_model) graph and then fill the 5 boxes with between 1 and 10 balls. You can create as many different nodes or types of nodes as needed ###Code # create graph object with the number of boxes as nodes network = nx.barabasi_albert_graph(boxes, m) # add balls to box nodes for node in network.nodes: network.nodes[node]['balls'] = np.random.randint(1,10) ###Output _____no_output_____ ###Markdown Now we will plot the network of boxes and with their labels showing how many balls are in each box. ###Code # plot of boxes and balls nx.draw_kamada_kawai(network,labels=nx.get_node_attributes(network,'balls')) # we initialize the cadCAD state as a network object genesis_states = {'network':network} #Behavior: node by edge dimensional operator #input the states of the boxes output the deltas along the edges # We specify the robotic networks logic in a Policy/Behavior Function # unlike previous examples our policy controls a vector valued action, defined over the edges of our network def robotic_network(params, step, sH, s): network = s['network'] delta_balls = {} for e in network.edges: src = e[0] dst = e[1] #transfer one ball across the edge in the direction of more balls to less delta_balls[e] = np.sign(network.nodes[src]['balls']-network.nodes[dst]['balls']) return({'delta': delta_balls}) #mechanism: edge by node dimensional operator #input the deltas along the edges and update the boxes # We make the state update functions less "intelligent", # ie. 
they simply add the number of marbles specified in _input # (which, per the policy function definition, may be negative) def update_network(params, step, sH, s, _input): network = s['network'] #deepcopy(s['network']) delta_balls = _input['delta'] for e in network.edges: move_ball = delta_balls[e] src = e[0] dst = e[1] if (network.nodes[src]['balls'] >= move_ball) and (network.nodes[dst]['balls'] >= -move_ball): network.nodes[src]['balls'] = network.nodes[src]['balls']-move_ball network.nodes[dst]['balls'] = network.nodes[dst]['balls']+move_ball return ('network', network) # wire up the mechanisms and states partial_state_update_blocks = [ { 'policies': { # The following policy functions will be evaluated and their returns will be passed to the state update functions 'action': robotic_network }, 'variables': { # The following state variables will be updated simultaneously 'network': update_network } } ] # Settings of general simulation parameters, unrelated to the system itself # `T` is a range with the number of discrete units of time the simulation will run for; # `N` is the number of times the simulation will be run (Monte Carlo runs) sim_config_dict = { 'T': range(T), 'N': 1, #'M': {} } del configs[:] exp = Experiment() c = config_sim(sim_config_dict) # The configurations above are then packaged into a `Configuration` object exp.append_configs(initial_state=genesis_states, #dict containing variable names and initial values partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions sim_configs=c #preprocessed dictionaries containing simulation parameters ) # Run the simulations exec_mode = ExecutionMode() local_mode_ctx = ExecutionContext(exec_mode.local_mode) executor = Executor(local_mode_ctx, configs) # Pass the configuration object inside an array raw_result, tensor, sessions = executor.execute() # The `execute()` method returns a tuple; its first elements contains the raw results df = pd.DataFrame(raw_result) ###Output _____no_output_____ ###Markdown We create some helper functions to extract the networkx graph object from the Pandas dataframe and plot it. ###Code #NetworkX helper functions def get_nodes(g): return [node for node in g.nodes if g.nodes[node]] def pad(vec, length,fill=True): if fill: padded = np.zeros(length,) else: padded = np.empty(length,) padded[:] = np.nan for i in range(len(vec)): padded[i]= vec[i] return padded def make2D(key, data, fill=False): maxL = data[key].apply(len).max() newkey = 'padded_'+key data[newkey] = data[key].apply(lambda x: pad(x,maxL,fill)) reshaped = np.array([a for a in data[newkey].values]) return reshaped ###Output _____no_output_____ ###Markdown Using our helper function get_nodes() we pull out the boxes ball quantity and save it to a new dataframe column. ###Code df['Balls'] = df.network.apply(lambda g: np.array([g.nodes[j]['balls'] for j in get_nodes(g)])) ###Output _____no_output_____ ###Markdown Next we will plot the number of balls in each box over the simulation time period. We can see an oscillation occurs never reaching an equilibrium due to the uneven nature of the boxes and balls. 
###Code plt.plot(df.timestep,make2D('Balls', df)) plt.title('Number of balls in boxes over simulation period') plt.ylabel('Qty') plt.xlabel('Iteration') plt.legend(['Box #'+str(node) for node in range(boxes)], ncol = 2) ###Output _____no_output_____ ###Markdown cadCAD Tutorials: The Robot and the Marbles Part 5 - NetworksTo expand upon our previous examples, we will introduce the concept of using a graph network object that is updated during each state update. The ability to essential embed a graph 'database' into a state is a game changer for scalability, allowing increased complexity with multiple agents or components is represented, easily updated. Below, building upon our previous examples, we will represent the Robots and Marbles example with n boxes, and a variable number of marbles. Behavior and Mechanisms:* A network of robotic arms is capable of taking a marble from their one of their boxes and dropping it into the other one. * Each robotic arm in the network only controls two boxes and they act by moving a marble from one box to the other.* Each robotic arm is programmed to take one marble at a time from the box containing the most significant number of marbles and drop it in the other box. It repeats that process until the boxes contain an equal number of marbles.* For our analysis of this system, suppose we are only interested in monitoring the number of marbles in only their two boxes. ###Code from cadCAD.engine import ExecutionMode, ExecutionContext, Executor from cadCAD.configuration import Configuration import networkx as nx import matplotlib.pyplot as plt import numpy as np import pandas as pd #from copy import deepcopy %matplotlib inline # define global variables T = 25 #iterations in our simulation boxes=5 #number of boxes in our network m= 2 #for barabasi graph type number of edges is (n-2)*m ###Output _____no_output_____ ###Markdown We create a [Barabási–Albert](https://en.wikipedia.org/wiki/Barab%C3%A1si%E2%80%93Albert_model) graph and then fill the 5 boxes with between 1 and 10 balls. You can create as many different nodes or types of nodes as needed ###Code # create graph object with the number of boxes as nodes network = nx.barabasi_albert_graph(boxes, m) # add balls to box nodes for node in network.nodes: network.nodes[node]['balls'] = np.random.randint(1,10) ###Output _____no_output_____ ###Markdown Now we will plot the network of boxes and with their labels showing how many balls are in each box. ###Code # plot of boxes and balls nx.draw_kamada_kawai(network,labels=nx.get_node_attributes(network,'balls')) # we initialize the cadCAD state as a network object initial_conditions = {'network':network} #Behavior: node by edge dimensional operator #input the states of the boxes output the deltas along the edges # We specify the robotic networks logic in a Policy/Behavior Function # unlike previous examples our policy controls a vector valued action, defined over the edges of our network def robotic_network(params, step, sL, s): network = s['network'] delta_balls = {} for e in network.edges: src = e[0] dst = e[1] #transfer one ball across the edge in the direction of more balls to less delta_balls[e] = np.sign(network.nodes[src]['balls']-network.nodes[dst]['balls']) return({'delta': delta_balls}) #mechanism: edge by node dimensional operator #input the deltas along the edges and update the boxes # We make the state update functions less "intelligent", # ie. 
they simply add the number of marbles specified in _input # (which, per the policy function definition, may be negative) def update_network(params, step, sL, s, _input): network = s['network'] #deepcopy(s['network']) delta_balls = _input['delta'] for e in network.edges: move_ball = delta_balls[e] src = e[0] dst = e[1] if (network.nodes[src]['balls'] >= move_ball) and (network.nodes[dst]['balls'] >= -move_ball): network.nodes[src]['balls'] = network.nodes[src]['balls']-move_ball network.nodes[dst]['balls'] = network.nodes[dst]['balls']+move_ball return ('network', network) # wire up the mechanisms and states partial_state_update_blocks = [ { 'policies': { # The following policy functions will be evaluated and their returns will be passed to the state update functions 'action': robotic_network }, 'variables': { # The following state variables will be updated simultaneously 'network': update_network } } ] # Settings of general simulation parameters, unrelated to the system itself # `T` is a range with the number of discrete units of time the simulation will run for; # `N` is the number of times the simulation will be run (Monte Carlo runs) simulation_parameters = { 'T': range(T), 'N': 1, 'M': {} } # The configurations above are then packaged into a `Configuration` object config = Configuration(initial_state=initial_conditions, #dict containing variable names and initial values partial_state_update_blocks=partial_state_update_blocks, #dict containing state update functions sim_config=simulation_parameters #dict containing simulation parameters ) # Run the simulations exec_mode = ExecutionMode() exec_context = ExecutionContext(exec_mode.single_proc) executor = Executor(exec_context, [config]) # Pass the configuration object inside an array raw_result, tensor = executor.execute() # The `execute()` method returns a tuple; its first elements contains the raw results df = pd.DataFrame(raw_result) ###Output single_proc: [<cadCAD.configuration.Configuration object at 0x7f736c140940>] [<cadCAD.configuration.Configuration object at 0x7f736c140940>] ###Markdown We create some helper functions to extract the networkx graph object from the Pandas dataframe and plot it. ###Code #NetworkX helper functions def get_nodes(g): return [node for node in g.nodes if g.nodes[node]] def pad(vec, length,fill=True): if fill: padded = np.zeros(length,) else: padded = np.empty(length,) padded[:] = np.nan for i in range(len(vec)): padded[i]= vec[i] return padded def make2D(key, data, fill=False): maxL = data[key].apply(len).max() newkey = 'padded_'+key data[newkey] = data[key].apply(lambda x: pad(x,maxL,fill)) reshaped = np.array([a for a in data[newkey].values]) return reshaped ###Output _____no_output_____ ###Markdown Using our helper function get_nodes() we pull out the boxes ball quantity and save it to a new dataframe column. ###Code df['Balls'] = df.network.apply(lambda g: np.array([g.nodes[j]['balls'] for j in get_nodes(g)])) ###Output _____no_output_____ ###Markdown Next we will plot the number of balls in each box over the simulation time period. We can see an oscillation occurs never reaching an equilibrium due to the uneven nature of the boxes and balls. ###Code plt.plot(df.timestep,make2D('Balls', df)) plt.title('Number of balls in boxes over simulation period') plt.ylabel('Qty') plt.xlabel('Iteration') plt.legend(['Box #'+str(node) for node in range(boxes)], ncol = 2) ###Output _____no_output_____
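###Markdown One implementation detail worth calling out: `update_network` mutates the networkx graph in place (the `deepcopy` import at the top is commented out). Depending on the cadCAD version, state objects can be stored by reference in the raw results, in which case every timestep row may end up pointing at the same, final graph and the plot above would look flat. If you run into that, a defensive copy inside the state update function is an easy fix — a minimal sketch, mirroring the same function signature used in this notebook:
###Code
from copy import deepcopy

def update_network_copied(params, step, sL, s, _input):
    # copy the graph so each timestep stored in the results keeps its own snapshot
    network = deepcopy(s['network'])
    delta_balls = _input['delta']
    for e in network.edges:
        move_ball = delta_balls[e]
        src, dst = e
        # only move a marble if neither box would go negative
        if (network.nodes[src]['balls'] >= move_ball) and (network.nodes[dst]['balls'] >= -move_ball):
            network.nodes[src]['balls'] -= move_ball
            network.nodes[dst]['balls'] += move_ball
    return ('network', network)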
notebooks/AlongshoreTransporter.ipynb
###Markdown The Alongshore Transporter ClassAlongshoreTransporter is a stand-alone module in BRIE for diffusing sediment along a straight (non-complex) coast. The formulations are detailed in Neinhuis and Lorenzo-Trueba, 2019 [1] and Nienhuis et al., 2015 [2], but stem primarily from the alongshore transport model of Ashton and Murray, 2006 [3]. This notebook provides additional documentation for understanding the functions that comprise AlongshoreTransporter. We first provide AlongshoreTransporter with an arbitrary coastline and then explain the implementation of AlongshoreTransporter within BRIE. **Figure 1. Orientation of the coastline for the AlongshoreTransporter class (Figure from NLT19).** InitializationAlongshoreTransporter must be initialized with a shoreline array; the remaining variables are optional. If a wave distribution is not provided, a uniform distribution is applied from -90 to 90 degrees. Here, we are going to provide the AM06 wave distribution (`ashton`), which divides [0, 1] into quartiles that are uniform distributions defined by the shape parameters $a$ (the asymmetry) and $h$ (high fraction). ###Code import numpy as np import matplotlib.pyplot as plt import scipy.sparse import scipy.constants import sys sys.path.append("..") from brie.alongshore_transporter import AlongshoreTransporter, calc_shoreline_angles, calc_coast_diffusivity, _build_matrix from brie.waves import ashton, WaveAngleGenerator shoreline_x = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 10, 10, 10, 10, 10] # shoreline array waves = ashton(a=0.5, h=0, loc=-np.pi/2, scale=np.pi) # wave distribution from -90 to 90 degrees # waves = WaveAngleGenerator(asymmetry=0.5, high_fraction=0) # our old wave_angle_generator class # initialize AlongshoreTransporter transporter = AlongshoreTransporter(shoreline_x, wave_distribution=waves, alongshore_section_length=100, time_step=1) transporter.update() # advance one time step # plot shoreline change after 1 model years plt.plot(np.arange(0, np.size(shoreline_x)*100, 100), shoreline_x) # initial shoreline plt.plot(np.arange(0, np.size(shoreline_x)*100, 100), transporter.shoreline_x) plt.xlabel('alongshore y (m)') plt.ylabel('cross-shore x (m)') ###Output _____no_output_____ ###Markdown **Figure 2. An example shoreline change after one model year using AlongshoreTransporter for a symmetric wave climate**Once we initialize *AlongshoreTranporter*, we advance the model by one timestep using the `update` function. This example, while ugly, illustrates the periodic boundary conditions used in calculating shoreline diffusivity, explained in more detail below. Wave distribution and shoreline anglesThe incoming offshore wave angle $\phi$ shown in Figure 1 is measured counter-clockwise from the positive x-axis (to the wave front). Hence, in this offshore-looking orientation, a wave angle of -90 degrees corresponds to waves approaching from the right and 90 degrees from the left. For a wave asymmetry of 0.8 and highness of 0.2, we are defining a distribution where 80% of the waves approach from the left looking offshore and 20% of the waves approach at angles higher than 45 degrees (from shore normal). This is a little bit of a mind game because looking at the array above, values $>$0 degrees are on the right-side of the array, but keep telling yourself, these waves are coming from the left! 
###Code # an example wave angle array angle_array, step = np.linspace(-89.5, 89.5, 5, retstep=True) # not including the boundaries # angle_array, step = np.linspace(-90, 90, 5, retstep=True) print('example wave angle array =', angle_array) # and corresponding pdf print('wave pdf =', waves.pdf(np.deg2rad(angle_array)) * np.deg2rad(step)) ###Output example wave angle array = [-89.5 -44.75 0. 44.75 89.5 ] wave pdf = [0. 0.49722222 0.49722222 0.49722222 0. ] ###Markdown Upon initialization of *AlongshoreTransporter* (i.e., within the `__init__` function), we use the function `calc_shoreline_angles` to calculate the coastline angles of the series of (equally spaced) coastline positions with respect to the *x*-axis ($\theta$ in Figure 1), and from the perspective of looking offshore (from the shoreline). Angles at the first and last points are calculated using wrap-around boundaries. Here we provide another example, a delta ($dy=$100 m), for easy interpretation. ###Code # calculate shoreline angles shoreline_x = [0, 0, 0, 100, 100, 100, 200, 200, 200, 300, 300, 300, 200, 200, 200, 100, 100, 100, 0, 0, 0] dy = 100 # m shoreline_angles = calc_shoreline_angles(shoreline_x, spacing=dy) print('shoreline angles =', np.rad2deg(shoreline_angles)) ###Output shoreline angles = [ 0. 0. 45. 0. 0. 45. 0. 0. 45. 0. 0. -45. 0. 0. -45. 0. 0. -45. 0. 0. 0.] ###Markdown The `update` function Now lets look under the hood at the functions that are called within `update`. Sediment is diffused along the shoreline using a nonlinear diffusion formulation given by AM06 (Eq. 37-39 in NLT19), which is convolved with the wave climate pdf (the normalized angular distribution of wave energy) to get a *wave-climate averaged shoreline diffusivity* for every alongshore location. Erm, what? Lets break it down. Diffusivity $D$ depends on the relative wave angle $\phi_0-\theta$: from AM06, we know that diffisuvity "decreases from a maximum for waves approaching directly onshore, passes through zero at the angle maximizing alongshore sediment transport ($Qs$), and becomes negative for even more oblique wave angles". In *AlongshoreTransporter*, diffusivity is calculated in the function `calc_coast_diffusivity`. For a symmetric wave climate and a straight shoreline (i.e., $\theta=0$), $Qs$ is maximized for waves approaching at $\pm$45 degrees and maximum diffusivity occurs at 0 degrees (Figure 3a). As shown in Figure 3b-c, for assymetric wave climates, maximum diffusivity shifts toward positive (negative) relative wave angles when waves come predominantly from the left (right). Offshore wave climates that include high angle waves ($>\pm 45$ degrees) are less diffusive; this stems from wave refraction, which for waves approaching at a high angle leads to a reduction in wave height as the wave crests becomes stretched. Note that here, the default wave height, period, and berm elevation are used ($H_0$=1 m, $T_0$=10 sec, berm_ele = 2 m). 
###Code n_bins = 181 # The number of bins used for the wave resolution: if 181, symmetrical about zero, spaced by 1 deg rel_wave_angles, step = np.linspace(-90, 90, n_bins, retstep=True) # here, one degree of resolution per bin fig, axs = plt.subplots(1, 3, figsize=(20, 5), sharey=True) for ax in axs.flat: ax.set(xlabel='$\phi_0 - \Theta$') axs[0].set(ylabel='Diffusivity ($m^2/s$)') def plot_diffusivity(iAx, assymetry, highness): wave_dist = ashton(a=assymetry, h=highness, loc=-np.pi/2, scale=np.pi) coast_diff, coast_diff_phi0_theta = calc_coast_diffusivity(wave_dist.pdf, shoreline_angles, n_bins=n_bins) axs[iAx].plot(rel_wave_angles, coast_diff_phi0_theta) # plot diffusivity as a function of relative wave angle return wave_dist, coast_diff, coast_diff_phi0_theta # symmetric wave climate waves_sym, coast_diff_sym, coast_diff_phi0_theta_sym = plot_diffusivity(iAx=0, assymetry=0.5, highness=0) waves_sym_high, coast_diff_sym_high, coast_diff_phi0_theta_sym_high = plot_diffusivity(iAx=0, assymetry=0.5, highness=0.2) axs[0].legend(["symmetric wave climate", "symmetric wave climate + high angle waves"]) # asymmetric wave climate (80% coming from the LEFT) waves_asym_left, coast_diff_asym_left, coast_diff_phi0_theta_asym_left = plot_diffusivity(iAx=1, assymetry=0.2, highness=0) waves_asym_high_left, coast_diff_asym_high_left, coast_diff_phi0_theta_asym_high_left = plot_diffusivity(iAx=1, assymetry=0.2, highness=0.2) axs[1].legend(['asymmetry = 80% waves from LEFT', 'asymmetric wave climate + high angle waves']) # asymmetric wave climate (80% coming from the RIGHT) waves_asym_right, coast_diff_asym_right, coast_diff_phi0_theta_asym_right = plot_diffusivity(iAx=2, assymetry=0.8, highness=0) waves_asym_high_right, coast_diff_asym_high_right, coast_diff_phi0_theta_asym_high_right = plot_diffusivity(iAx=2, assymetry=0.8, highness=0.2) axs[2].legend(['asymmetry = 80% waves from RIGHT', 'asymmetric wave climate + high angle waves']) ###Output _____no_output_____ ###Markdown **Figure 3. Shoreline diffusivity for a) a symmetric wave climate and b-c) asymmetric wave climates.**What does this mean in terms of shoreline change? If we evaluate $D(-\theta)$ for the delta shoreline above (i.e., at the relative wave angles), we can see the direction of coastal diffusion at each alongshore grid cell. Here, we make the diffusion term non-dimensional by multiplying by the time step and alongshore grid discretization (i.e., $\beta$ in Eq.41 of NLT19). ###Code dt = 1 # year dy = 100 # m print('relative wave angles =', np.rad2deg(-shoreline_angles)) print('shoreline diffusivity (a=0.5) = ', coast_diff_sym * dt / (2.0 * dy ** 2)) print('shoreline diffusivity (a=0.2, LEFT) = ', coast_diff_asym_left * dt / (2.0 * dy ** 2)) print('shoreline diffusivity (a=0.8, RIGHT) = ', coast_diff_asym_right * dt / (2.0 * dy ** 2)) ###Output relative wave angles = [ -0. -0. -45. -0. -0. -45. -0. -0. -45. -0. -0. 45. -0. -0. 45. -0. -0. 45. -0. -0. -0.] 
shoreline diffusivity (a=0.5) = [ 8.59636619 8.59636619 0.11044074 8.59636619 8.59636619 0.11044074 8.59636619 8.59636619 0.11044074 8.59636619 8.59636619 -0.05049086 8.59636619 8.59636619 -0.05049086 8.59636619 8.59636619 -0.05049086 8.59636619 8.59636619 8.59636619] shoreline diffusivity (a=0.2, LEFT) = [ 8.70186201 8.70186201 -5.08661035 8.70186201 8.70186201 -5.08661035 8.70186201 8.70186201 -5.08661035 8.70186201 8.70186201 5.03212755 8.70186201 8.70186201 5.03212755 8.70186201 8.70186201 5.03212755 8.70186201 8.70186201 8.70186201] shoreline diffusivity (a=0.8, RIGHT) = [ 8.49087037 8.49087037 5.30749183 8.49087037 8.49087037 5.30749183 8.49087037 8.49087037 5.30749183 8.49087037 8.49087037 -5.13310928 8.49087037 8.49087037 -5.13310928 8.49087037 8.49087037 -5.13310928 8.49087037 8.49087037 8.49087037] ###Markdown Now that we know the diffusivity at each point, we have all of the variables needed to solve the shoreline diffusion equation (Equation 40 in NLT19). We solve this linear diffusion equation by inverting a nearly tridiagonal matrix, which is created using the `_build_matrix` function. Note that all of the functions we've utilized so far (`calc_coast_diffusivity`, `calc_shoreline_angles`) are called within `_build_matrix`. Figure 3 shows an example of shoreline change for a more realistic shoreline position -- a small delta -- for the wave climate described above, for a 100-m alongshore grid discretization and 1-yr time-step (after one model year). ###Code shoreline_x_pyramid_orig = [0, 0, 0, 100, 100, 100, 200, 200, 200, 300, 300, 300, 200, 200, 200, 100, 100, 100, 0, 0, 0] fig, axs = plt.subplots(1, 2, figsize=(10, 5), sharey=True) dy = 100 ny = np.size(shoreline_x_pyramid_orig) grid = np.arange(0, ny*dy, dy) for ax in axs.flat: ax.set(xlabel='alongshore y (m)') axs[0].set(ylabel='cross-shore x (m)') axs[0].plot(grid, shoreline_x_pyramid_orig) axs[1].plot(grid, shoreline_x_pyramid_orig) def plot_pyramid(iAx, wave_dist): mat, rhs, r_ipl = _build_matrix(shoreline_x_pyramid_orig, wave_dist, dy=100, dt=1.0, dx_dt=0) shoreline_x_pyramid = scipy.sparse.linalg.spsolve(mat, rhs) # invert matrix axs[iAx].plot(grid, shoreline_x_pyramid) # new shoreline # print(r_ipl) plot_pyramid(iAx=0, wave_dist=waves_sym) # symmetric wave climate, no high angle waves plot_pyramid(iAx=1, wave_dist=waves_sym_high) # symmetric wave climate, high angle waves plot_pyramid(iAx=0, wave_dist=waves_asym_left) # asymmetric wave climate (80% from LEFT), no high angle waves plot_pyramid(iAx=1, wave_dist=waves_asym_high_left) # asymmetric wave climate (80% from LEFT), high angle waves plot_pyramid(iAx=0, wave_dist=waves_asym_right) # asymmetric wave climate (80% from RIGHT), no high angle waves plot_pyramid(iAx=1, wave_dist=waves_asym_high_right) # asymmetric wave climate (80% from RIGHT), high angle waves axs[0].legend(["original shoreline", "symmetric wave climate", "asymmetry = 80% waves from LEFT", 'asymmetry = 80% waves from RIGHT'], loc='lower left') axs[1].legend(["original shoreline", "symmetric + high angle waves", 'asymmetry LEFT + high angle waves', "asymmetry RIGHT + high angle waves"], loc='lower left') ###Output _____no_output_____ ###Markdown **Figure 4. 
Comparison of shoreline change after one model year for a symmetric and asymmetric wave climate with a) no high angle waves and b) where 20% of the waves come from a high angle ($\pm45$ degrees).** As you can see in Figure 4a, because we convolved the wave climate pdf (the normalized angular distribution of wave energy) to get a wave-climate averaged shoreline diffusivity for every alongshore location, the difference in shoreline change due to wave asymmetry (for a given wave height and period) is subtle, with slightly more diffusion of the delta by the asymmetric wave climate. ###Code shoreline_x_pyramid_orig = [0, 0, 0, -100, -100, -100, -200, -200, -200, -300, -300, -300, -200, -200, -200, -100, -100, -100, 0, 0, 0] fig, axs = plt.subplots(1, 2, figsize=(10, 5)) dy = 100 ny = np.size(shoreline_x_pyramid_orig) grid = np.arange(0, ny*dy, dy) for ax in axs.flat: ax.set(xlabel='alongshore y (m)') axs[0].set(ylabel='cross-shore x (m)') axs[0].plot(grid, shoreline_x_pyramid_orig) axs[1].plot(grid, shoreline_x_pyramid_orig) def plot_pyramid(iAx, wave_dist): mat, rhs, r_ipl = _build_matrix(shoreline_x_pyramid_orig, wave_dist, dy=100, dt=1.0, dx_dt=0) shoreline_x_pyramid = scipy.sparse.linalg.spsolve(mat, rhs) # invert matrix axs[iAx].plot(grid, shoreline_x_pyramid) # new shoreline plot_pyramid(iAx=0, wave_dist=waves_sym) # symmetric wave climate, no high angle waves plot_pyramid(iAx=1, wave_dist=waves_sym_high) # symmetric wave climate, high angle waves plot_pyramid(iAx=0, wave_dist=waves_asym_left) # asymmetric wave climate (80% from LEFT), no high angle waves plot_pyramid(iAx=1, wave_dist=waves_asym_high_left) # asymmetric wave climate (80% from LEFT), high angle waves plot_pyramid(iAx=0, wave_dist=waves_asym_right) # asymmetric wave climate (80% from RIGHT), no high angle waves plot_pyramid(iAx=1, wave_dist=waves_asym_high_right) # asymmetric wave climate (80% from RIGHT), high angle waves axs[0].legend(["original", "a=0.5, h=0", "a=0.2 (LEFT), h=0", "a=0.8 (RIGHT), h=0"]) axs[1].legend(["original", "a=0.5, h=0.2", "a=0.2 (LEFT), h=0.2", "a=0.8 (RIGHT), h=0.2"]) ###Output _____no_output_____ ###Markdown Now, because I don't really know which way is up here, I'm going to run the first case for 5 years ###Code shoreline_x_pyramid_orig = [0, 0, 0, 100, 100, 100, 200, 200, 200, 300, 300, 300, 200, 200, 200, 100, 100, 100, 0, 0, 0] nt = 5 # 5 year simulation fig, axs = plt.subplots(1, 2, figsize=(10, 5)) dy = 100 ny = np.size(shoreline_x_pyramid_orig) grid = np.arange(0, ny*dy, dy) for ax in axs.flat: ax.set(xlabel='alongshore y (m)') axs[0].set(ylabel='cross-shore x (m)') axs[0].plot(grid, shoreline_x_pyramid_orig) axs[1].plot(grid, shoreline_x_pyramid_orig) def loop_transporter(iAx, wave_dist, nt): transporter = AlongshoreTransporter(shoreline_x_pyramid_orig, wave_distribution=wave_dist, alongshore_section_length=dy, time_step=1) for year in range(nt): transporter.update() # advance one time step axs[iAx].plot(grid, transporter.shoreline_x) # new shoreline after loop loop_transporter(iAx=0, wave_dist=waves_sym, nt=nt) # symmetric wave climate, no high angle waves loop_transporter(iAx=1, wave_dist=waves_sym_high, nt=nt) # symmetric wave climate, high angle waves loop_transporter(iAx=0, wave_dist=waves_asym_left, nt=nt) # asymmetric wave climate (80% from LEFT), no high angle waves loop_transporter(iAx=1, wave_dist=waves_asym_high_left, nt=nt) # asymmetric wave climate (80% from LEFT), high angle waves loop_transporter(iAx=0, wave_dist=waves_asym_right, nt=nt) # asymmetric wave climate (80% from 
RIGHT), no high angle waves loop_transporter(iAx=1, wave_dist=waves_asym_high_right, nt=nt) # asymmetric wave climate (80% from RIGHT), high angle waves axs[0].legend(["original", "a=0.5, h=0", "a=0.2 (LEFT), h=0", "a=0.8 (RIGHT), h=0"]) axs[1].legend(["original", "a=0.5, h=0.2", "a=0.2 (LEFT), h=0.2", "a=0.8 (RIGHT), h=0.2"]) ###Output _____no_output_____
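###Markdown To make the "nearly tridiagonal matrix" comment above more concrete, here is a minimal, self-contained sketch of one implicit (backward-Euler) step of 1-D diffusion with wrap-around boundaries and a *constant* diffusivity. This only illustrates the matrix-inversion idea — `_build_matrix` uses the wave-climate-averaged, spatially varying diffusivity computed above, so the numbers below (in particular the arbitrary `D=100.0`) are placeholders, not BRIE's actual coefficients.
###Code
import numpy as np
import scipy.sparse
import scipy.sparse.linalg

def implicit_diffusion_step(x, D, dy, dt):
    """One backward-Euler step of dx/dt = D * d2x/dy2 with periodic boundaries."""
    n = len(x)
    r = D * dt / dy**2
    # tridiagonal part: (1 + 2r) on the main diagonal, -r on the two off-diagonals
    mat = scipy.sparse.diags([1 + 2 * r, -r, -r], [0, -1, 1], shape=(n, n), format='lil')
    # the periodic boundary adds the two corner terms that make it only "nearly" tridiagonal
    mat[0, n - 1] = -r
    mat[n - 1, 0] = -r
    return scipy.sparse.linalg.spsolve(mat.tocsr(), np.asarray(x, dtype=float))

x_smoothed = implicit_diffusion_step(shoreline_x_pyramid_orig, D=100.0, dy=100.0, dt=1.0)
x_smoothed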
nlp-getting-started-tutorial.ipynb
###Markdown NLP TutorialNLP - or *Natural Language Processing* - is shorthand for a wide array of techniques designed to help machines learn from text. Natural Language Processing powers everything from chatbots to search engines, and is used in diverse tasks like sentiment analysis and machine translation.In this tutorial we'll look at this competition's dataset, use a simple technique to process it, build a machine learning model, and submit predictions for a score! ###Code # import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) from sklearn import feature_extraction, linear_model, model_selection #, preprocessing train_df = pd.read_csv("resources/kaggle/input/nlp-getting-started/train.csv") test_df = pd.read_csv("resources/kaggle/input/nlp-getting-started/test.csv") ###Output _____no_output_____ ###Markdown A quick look at our dataLet's look at our data... first, an example of what is NOT a disaster tweet. ###Code train_df[train_df["target"] == 0]["text"].values[1] ###Output _____no_output_____ ###Markdown And one that is: ###Code train_df[train_df["target"] == 1]["text"].values[1] ###Output _____no_output_____ ###Markdown Building vectorsThe theory behind the model we'll build in this notebook is pretty simple: the words contained in each tweet are a good indicator of whether they're about a real disaster or not (this is not entirely correct, but it's a great place to start).We'll use scikit-learn's `CountVectorizer` to count the words in each tweet and turn them into data our machine learning model can process.Note: a `vector` is, in this context, a set of numbers that a machine learning model can work with. We'll look at one in just a second. ###Code count_vectorizer = feature_extraction.text.CountVectorizer() ## let's get counts for the first 5 tweets in the data example_train_vectors = count_vectorizer.fit_transform(train_df["text"][0:5]) ## we use .todense() here because these vectors are "sparse" (only non-zero elements are kept to save space) print(example_train_vectors[0].todense().shape) print(example_train_vectors[0].todense()) ###Output (1, 54) [[0 0 0 1 1 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 1 0 1 0]] ###Markdown The above tells us that:1. There are 54 unique words (or "tokens") in the first five tweets.2. The first tweet contains only some of those unique tokens - all of the non-zero counts above are the tokens that DO exist in the first tweet.Now let's create vectors for all of our tweets. ###Code train_vectors = count_vectorizer.fit_transform(train_df["text"]) ## note that we're NOT using .fit_transform() here. Using just .transform() makes sure # that the tokens in the train vectors are the only ones mapped to the test vectors - # i.e. that the train and test vectors use the same set of tokens. test_vectors = count_vectorizer.transform(test_df["text"]) ###Output _____no_output_____ ###Markdown Our modelAs we mentioned above, we think the words contained in each tweet are a good indicator of whether they're about a real disaster or not. The presence of particular word (or set of words) in a tweet might link directly to whether or not that tweet is real.What we're assuming here is a _linear_ connection. So let's build a linear model and see! ###Code ## Our vectors are really big, so we want to push our model's weights ## toward 0 without completely discounting different words - ridge regression ## is a good way to do this. 
clf = linear_model.RidgeClassifier() ###Output _____no_output_____ ###Markdown Let's test our model and see how well it does on the training data. For this we'll use `cross-validation` - where we train on a portion of the known data, then validate it with the rest. If we do this several times (with different portions) we can get a good idea for how a particular model or method performs.The metric for this competition is F1, so let's use that here. ###Code scores = model_selection.cross_val_score(clf, train_vectors, train_df["target"], cv=3, scoring="f1") scores ###Output _____no_output_____ ###Markdown The above scores aren't terrible! It looks like our assumption will score roughly 0.65 on the leaderboard. There are lots of ways to potentially improve on this (TFIDF, LSA, LSTM / RNNs, the list is long!) - give any of them a shot!In the meantime, let's do predictions on our training set and build a submission for the competition. ###Code clf.fit(train_vectors, train_df["target"]) sample_submission = pd.read_csv("/kaggle/input/nlp-getting-started/sample_submission.csv") sample_submission["target"] = clf.predict(test_vectors) sample_submission.head() sample_submission.to_csv("submission.csv", index=False) ###Output _____no_output_____ ###Markdown NLP TutorialNLP - or *Natural Language Processing* - is shorthand for a wide array of techniques designed to help machines learn from text. Natural Language Processing powers everything from chatbots to search engines, and is used in diverse tasks like sentiment analysis and machine translation.In this tutorial we'll look at this competition's dataset, use a simple technique to process it, build a machine learning model, and submit predictions for a score! ###Code import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) from sklearn import feature_extraction, linear_model, model_selection, preprocessing train_df = pd.read_csv("./data/train.csv") test_df = pd.read_csv("./data/test.csv") ###Output _____no_output_____ ###Markdown A quick look at our dataLet's look at our data... first, an example of what is NOT a disaster tweet. ###Code train_df[train_df["target"] == 0]["text"].values[1] ###Output _____no_output_____ ###Markdown And one that is: ###Code train_df[train_df["target"] == 1]["text"].values[1] ###Output _____no_output_____ ###Markdown Buildheadg vectorsThe theory behind the model we'll build in this notebook is pretty simple: the words contained in each tweet are a good indicator of whether they're about a real disaster or not (this is not entirely correct, but it's a great place to start).We'll use scikit-learn's `CountVectorizer` to count the words in each tweet and turn them into data our machine learning model can process.Note: a `vector` is, in this context, a set of numbers that a machine learning model can work with. We'll look at one in just a second. ###Code count_vectorizer = feature_extraction.text.CountVectorizer() ## let's get counts for the first 5 tweets in the data example_train_vectors = count_vectorizer.fit_transform(train_df["text"][0:5]) ## we use .todense() here because these vectors are "sparse" (only non-zero elements are kept to save space) print(example_train_vectors[0].todense().shape) print(example_train_vectors[0].todense()) ###Output (1, 54) [[0 0 0 1 1 1 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 1 0 1 0]] ###Markdown The above tells us that:1. There are 54 unique words (or "tokens") in the first five tweets.2. 
The first tweet contains only some of those unique tokens - all of the non-zero counts above are the tokens that DO exist in the first tweet.Now let's create vectors for all of our tweets. ###Code train_vectors = count_vectorizer.fit_transform(train_df["text"]) ## note that we're NOT using .fit_transform() here. Using just .transform() makes sure # that the tokens in the train vectors are the only ones mapped to the test vectors - # i.e. that the train and test vectors use the same set of tokens. test_vectors = count_vectorizer.transform(test_df["text"]) ###Output _____no_output_____ ###Markdown Our modelAs we mentioned above, we think the words contained in each tweet are a good indicator of whether they're about a real disaster or not. The presence of particular word (or set of words) in a tweet might link directly to whether or not that tweet is real.What we're assuming here is a _linear_ connection. So let's build a linear model and see! ###Code ## Our vectors are really big, so we want to push our model's weights ## toward 0 without completely discounting different words - ridge regression ## is a good way to do this. clf = linear_model.RidgeClassifier() ###Output _____no_output_____ ###Markdown Let's test our model and see how well it does on the training data. For this we'll use `cross-validation` - where we train on a portion of the known data, then validate it with the rest. If we do this several times (with different portions) we can get a good idea for how a particular model or method performs.The metric for this competition is F1, so let's use that here. ###Code scores = model_selection.cross_val_score(clf, train_vectors, train_df["target"], cv=3, scoring="f1") scores ###Output _____no_output_____ ###Markdown The above scores aren't terrible! It looks like our assumption will score roughly 0.65 on the leaderboard. There are lots of ways to potentially improve on this (TFIDF, LSA, LSTM / RNNs, the list is long!) - give any of them a shot!In the meantime, let's do predictions on our training set and build a submission for the competition. ###Code clf.fit(train_vectors, train_df["target"]) sample_submission = pd.read_csv("./sample_submission.csv") sample_submission["target"] = clf.predict(test_vectors) sample_submission.head() sample_submission.to_csv("submission.csv", index=False) ###Output _____no_output_____ ###Markdown Now, in the viewer, you can submit the above file to the competition! Good luck! Pretrained Word Embeddings TF Hub ###Code # Inports import tensorflow as tf import tensorflow_hub as hub import pandas as pd import re import seaborn as sns # from google.colab import files from IPython import display import logging logging.getLogger('googleapiclient.discovery_cache').setLevel(logging.ERROR) train_df['tar']= train_df.target # tenforflow don't work with target as a column name train_df_ = train_df.drop(labels=['id', 'keyword', 'location', 'target'], axis=1) test_df_ = test_df.drop(labels=['id', 'keyword', 'location'], axis=1) test_df_.head() train_df_.head() train_df_.dtypes # Training input on the whole training set with no limit on training epochs. train_input_fn = tf.estimator.inputs.pandas_input_fn( train_df_, train_df_["tar"], num_epochs=None, shuffle=True) # .head(10) # Prediction on the whole training set. predict_train_input_fn = tf.estimator.inputs.pandas_input_fn( train_df_, train_df_["tar"], shuffle=False) # Prediction on the test set. 
predict_test_input_fn = tf.estimator.inputs.pandas_input_fn( test_df_, shuffle=False) embedded_text_feature_column = hub.text_embedding_column( key="text", module_spec="https://tfhub.dev/google/Wiki-words-250-with-normalization/1", trainable=True) # adding trainable to (True) estimator = tf.estimator.DNNClassifier( hidden_units=[512, 511, 512], feature_columns=[embedded_text_feature_column], n_classes=2, optimizer=tf.train.AdamOptimizer(learning_rate=0.003)) estimator.train(input_fn=train_input_fn, steps=100); train_eval_result = estimator.evaluate(input_fn=predict_train_input_fn) print(f"Training set accuracy: {train_eval_result['accuracy']*100:.1f} %") ###Output _____no_output_____
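###Markdown As mentioned earlier, TF-IDF is one of the simplest upgrades over raw word counts. Here is a minimal sketch that swaps scikit-learn's `TfidfVectorizer` in as a drop-in replacement for `CountVectorizer`, reusing the same dataframes and the same ridge classifier / F1 cross-validation as above (this is a starting point, not a tuned model, so treat the score as a baseline to beat):
###Code
from sklearn import feature_extraction, linear_model, model_selection

tfidf_vectorizer = feature_extraction.text.TfidfVectorizer()
train_tfidf = tfidf_vectorizer.fit_transform(train_df["text"])
# use .transform() so train and test share the same vocabulary, as with CountVectorizer above
test_tfidf = tfidf_vectorizer.transform(test_df["text"])

clf_tfidf = linear_model.RidgeClassifier()
model_selection.cross_val_score(clf_tfidf, train_tfidf, train_df["target"], cv=3, scoring="f1")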
stylegan3_blending_public.ipynb
###Markdown Blending of stylegan3 and stylegan2 models with stylegan3 codebase.Made by [Alex Spirin](https://twitter.com/devdef)The idea of layer blending was inspired by [Justin Pinkney](https://twitter.com/Buntworthy) and his [stylegan blending example](https://github.com/justinpinkney/toonify/blob/master/StyleGAN-blending-example.ipynb)If you like what I'm doing - check out my [Patreon page](https://www.patreon.com/sxela) ###Code #@title This colab is distributed under the MIT license """MIT License Copyright (c) 2021 Alex Spirin Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.""" !pip install ninja %cd /content/ !git clone https://github.com/NVlabs/stylegan3 %cd /content/stylegan3/ from google.colab import drive drive.mount('/content/drive/') #common functions import pickle, torch, PIL, copy, cv2, math import numpy as np def get_model(path): with open(path, 'rb') as f: _G = pickle.load(f)['G_ema'].cuda() return _G #tensor to PIL image def t2i(t): return PIL.Image.fromarray((t*127.5+127).clamp(0,255)[0].permute(1,2,0).cpu().numpy().astype('uint8')) #stack an array of PIL images horizontally def add_imgs(images): widths, heights = zip(*(i.size for i in images)) total_width = sum(widths) max_height = max(heights) new_im = PIL.Image.new('RGB', (total_width, max_height)) x_offset = 0 for im in images: new_im.paste(im, (x_offset,0)) x_offset += im.size[0] return new_im def apply_mask(matrix, mask, fill_value): masked = np.ma.array(matrix, mask=mask, fill_value=fill_value) return masked.filled() def apply_threshold(matrix, low_value, high_value): low_mask = matrix < low_value matrix = apply_mask(matrix, low_mask, low_value) high_mask = matrix > high_value matrix = apply_mask(matrix, high_mask, high_value) return matrix # A simple color correction script to brighten overly dark images def simplest_cb(img, percent): assert img.shape[2] == 3 assert percent > 0 and percent < 100 half_percent = percent / 200.0 channels = cv2.split(img) out_channels = [] for channel in channels: assert len(channel.shape) == 2 # find the low and high precentile values (based on the input percentile) height, width = channel.shape vec_size = width * height flat = channel.reshape(vec_size) assert len(flat.shape) == 1 flat = np.sort(flat) n_cols = flat.shape[0] low_val = flat[math.floor(n_cols * half_percent)-1] high_val = flat[math.ceil( n_cols * (1.0 - half_percent))-1] # saturate below the low percentile and above the high percentile thresholded = apply_threshold(channel, low_val, high_val) # scale the channel normalized = cv2.normalize(thresholded, 
thresholded.copy(), 0, 255, cv2.NORM_MINMAX) out_channels.append(normalized) return cv2.merge(out_channels) def normalize(inf, thresh): img = np.array(inf) out_img = simplest_cb(img, thresh) return PIL.Image.fromarray(out_img) ###Output _____no_output_____ ###Markdown Stylegan3 model blending ###Code #Download pretrained checkpoint init_model = 'stylegan3-t-ffhqu-256x256.pkl' !wget -P /content https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/{init_model} net_raw = f'/content/{init_model}' G = get_model(net_raw) #specify fine-tuned checkpoint and load both networks net_tuned = '/content/drive/MyDrive/00000-stylegan3-t-ffhq-256x256-gpus1-batch16-gamma6.6/network-snapshot-000160.pkl' %cd /content/stylegan3/ G_new = copy.deepcopy(G) G_tuned = get_model(net_tuned) # Blend based on layer number thresold. All layers before the threshold are taken from the init gen, all layers after the threshold - from the fine-tuned gen. # If the layer name is bigger than the threshold, we take the fine-tuned gen layers, else init gen layers. def doBlend(blend_thresh=7): newDictSynt = G_tuned.synthesis.state_dict().copy() GSyntKeys = G.synthesis.state_dict().keys() for key in GSyntKeys: if key[:1]!='L': continue if int(key.split('_')[0][1:]) > blend_thresh: l = 1 else: l = 0 if 'affine' in key: l = 0 newDictSynt[key] = newDictSynt[key]*l + G.synthesis.state_dict()[key]*(1-l) G_new.synthesis.load_state_dict(newDictSynt) doBlend() #Blend using mask. Number of layers in stylegan3 depends on config and not on gen resolution, as it was with stylegan2. # blend = [0,0,0,0,0,0,0,0.2,0.5,0.7,0.9,1,1,1,1] # blend = [0,0,0,0,0,0,0,0.2,0.5,0.7,0.8,.8,.8,.8,.8] blend = [0,0,0,0,0,0.2,0.2,0.2,0.5,0.7,0.8,.8,.8,.8,1] # blend = [0]*7+[0.8]*(15-7) # Not blending affine layers gives us colors closer to the original gen, without affecting the geometry much. def doBlend(): newDictSynt = G_tuned.synthesis.state_dict().copy() GSyntKeys = G.synthesis.state_dict().keys() for key in GSyntKeys: if key[:1]!='L': continue l = blend[int(key.split('_')[0][1:])] if 'affine' in key: l = 0 newDictSynt[key] = newDictSynt[key]*l + G.synthesis.state_dict()[key]*(1-l) G_new.synthesis.load_state_dict(newDictSynt) doBlend() # Pick a seed and blend threshold or blend mask. The image will be saved in /content/ folder. 
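# note: blend[i] is the weight given to the fine-tuned generator for synthesis layer Li
# (0 = keep the original layer, 1 = fully fine-tuned); roughly, low-numbered layers control
# coarse geometry and high-numbered layers control fine texture and colour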
# Init one of blend functions from above (I suggest the latter) seed = 5 blend_thresh = 13 blend = [0,0,0,0,0,0.2,0.2,0.2,0.5,0.7,0.8,.8,.8,.8,1] psi = 0.5 bl_str = ('_').join([str(o) for o in blend]) net = net_tuned.split('/')[-1] rnd = np.random.RandomState(seed) z = torch.tensor(rnd.randn(1,G.z_dim)).cuda() doBlend() w = G.mapping(z, None, truncation_psi=psi, truncation_cutoff=8) im1 = G.synthesis(w, noise_mode='const', force_fp32=True) im3 = G_new.synthesis(w, noise_mode='const', force_fp32=True) im1 = t2i(im1) im3 = t2i(im3) im = add_imgs([im3,im1]) im.save(f'/content/m{net}_psi{psi}_b{bl_str}_s{seed}.jpg'); im #Generate image pairs from tqdm.notebook import trange import os out_dir = '/content/out/' images = 100 os.makedirs(out_dir, exist_ok=True) bl_str = ('_').join([str(o) for o in blend]) net = net_tuned.split('/')[-1] blend_thresh = 7 blend = [0,0,0,0,0,0.2,0.2,0.2,0.5,0.7,0.8,.8,.8,.8,1] psi = 0.5 doBlend() for i in trange(images): seed =i rnd = np.random.RandomState(seed) z = torch.tensor(rnd.randn(1,G.z_dim)).cuda() w1 = G.mapping(z, None, truncation_psi=psi, truncation_cutoff=8) im1 = G.synthesis(w1, noise_mode='const', force_fp32=True) im2 = G_tuned.synthesis(w1, noise_mode='const', force_fp32=True) im3 = G_new.synthesis(w1, noise_mode='const', force_fp32=True) im1 = t2i(im1) im3 = t2i(im3) im = add_imgs([im3,im1]) im.save(f'{out_dir}/b{bl_str}_s{seed}_m{net}.jpg') #pack results !zip -qq b{bl_str}_m{net}.zip {out_dir}/* ###Output _____no_output_____ ###Markdown Stylegan 2 model blending (stylegan3 codebase) ###Code #Download pretrained checkpoint init_model = 'stylegan2-ffhq-256x256.pkl' !wget https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan2/versions/1/files/{init_model} net_raw = '/content/stylegan3/stylegan2-ffhq-256x256.pkl' G = get_model(net_raw) #specify fine-tuned checkpoint and load both networks net_tuned = '/content/drive/MyDrive/00002-stylegan2-notoned_noblur-256x256-gpus1-batch16-gamma6.6/network-snapshot-000160.pkl' %cd /content/stylegan3/ G_new = copy.deepcopy(G) G_tuned = get_model(net_tuned) #Blend using mask. Number of layers in stylegan3 depends on config and not on gen resolution, as it was with stylegan2. #If you're using a 512 or 1024 stylegan2 model, just set everything above 128 to 1 and tune later. blend = { '4':0, '8':0, '16':0, '32':0, '64':0.5, '128':1, '256':0.7, } #main def doBlend(): newDictSynt = G_tuned.synthesis.state_dict().copy() GSyntKeys = G.synthesis.state_dict().keys() for key in GSyntKeys: if key[:1]!='b': continue if 'conv'in key: l = blend[key.split('.')[0][1:]] newDictSynt[key] = newDictSynt[key]*l + G.synthesis.state_dict()[key]*(1-l) G_new.synthesis.load_state_dict(newDictSynt) doBlend() # Pick a seed and blend mask. The image will be saved in /content/ folder. 
seed = 0 blend = { '4':0, '8':0, '16':0, '32':0, '64':0.5, '128':1, '256':0.7, } psi = 0.5 bl_str = ('_').join([str(blend[o]) for o in blend]) net = net_tuned.split('/')[-1] rnd = np.random.RandomState(seed) z = torch.tensor(rnd.randn(1,G.z_dim)).cuda() doBlend() w = G.mapping(z, None, truncation_psi=psi, truncation_cutoff=8) im1 = G.synthesis(w, noise_mode='const', force_fp32=True) im3 = G_new.synthesis(w, noise_mode='const', force_fp32=True) im1 = normalize(t2i(im1), 0.005) im3 = normalize(t2i(im3),0.005) im = add_imgs([im3,im1]) im.save(f'/content/m{net}_sg2_norm_b{bl_str}_s{seed}.jpg'); im ###Output _____no_output_____ ###Markdown Test modelsTest all checkpoints in a folder ###Code import glob checkpoint_folder = '/content/drive/MyDrive/deep_learning/arcane/training-runs/00002-stylegan2-notoned_noblur-256x256-gpus1-batch16-gamma6.6' models = glob.glob(f'{checkpoint_folder}/**/*.pkl', recursive=True) from tqdm.notebook import trange import os psi=0.5 blend = { '4':0, '8':0, '16':0, '32':0, '64':0.5, '128':1, '256':0.7, } #main bl_str = ('_').join([str(blend[o]) for o in blend]) net = net_tuned.split('/')[-1] doBlend() for m in models: G_tuned = get_model(m) doBlend() out_dir = f"/content/m{m.split('/')[-1]}_sg2_norm_b{bl_str}" os.makedirs(out_dir, exist_ok=1) for i in trange(100): seed =i rnd = np.random.RandomState(seed) z = torch.tensor(rnd.randn(1,G.z_dim)).cuda() w1 = G.mapping(z, None, truncation_psi=psi, truncation_cutoff=8) im1 = G.synthesis(w1, noise_mode='const', force_fp32=True) im3 = G_new.synthesis(w1, noise_mode='const', force_fp32=True) im1 = t2i(im1) im3 = normalize(t2i(im3),0.005) im = add_imgs([im3,im1]) im.save(f'{out_dir}/m{net}_sg2_norm_b{bl_str}_s{seed}.jpg'); im #archive results !zip -r /content/tests_sg2_norm_b{bl_str}.zip /content/mnetwork-snapshot-000*.pkl_sg2_norm_b{bl_str} ###Output _____no_output_____ ###Markdown Make dataset ###Code from tqdm.notebook import trange import os images = 2000 blend = { '4':0, '8':0, '16':0, '32':0, '64':0.5, '128':1, '256':0.7, } #main psi = 0.5 bl_str = ('_').join([str(blend[o]) for o in blend]) out_dir = f'/content/ds_m{net}_sg2_norm_b{bl_str}' dir_a = f'{out_dir}/trainA' dir_b = f'{out_dir}/trainB' os.makedirs(out_dir, exist_ok=1) os.makedirs(dir_a, exist_ok=1) os.makedirs(dir_b, exist_ok=1) net = net_tuned.split('/')[-1] doBlend() for i in trange(images): seed =i rnd = np.random.RandomState(seed) z = torch.tensor(rnd.randn(1,G.z_dim)).cuda() w = G.mapping(z, None, truncation_psi=psi, truncation_cutoff=8) im1 = G.synthesis(w, noise_mode='const', force_fp32=True) im3 = G_new.synthesis(w, noise_mode='const', force_fp32=True) im1 = t2i(im1) im3 = normalize(t2i(im3),0.005) im1.save(f'{dir_a}/{seed}.jpg') im3.save(f'{dir_b}/{seed}.jpg') !zip -r /content/{out_dir}.zip /content/{out_dir}/* ###Output _____no_output_____ ###Markdown Upscale ###Code %cd /content/ !git clone https://github.com/sberbank-ai/Real-ESRGAN %cd /content/Real-ESRGAN !pip install -r requirements.txt # download model weights # x2 !gdown https://drive.google.com/uc?id=1pG2S3sYvSaO0V0B8QPOl1RapPHpUGOaV -O weights/RealESRGAN_x2.pth from realesrgan import RealESRGAN from PIL import Image import numpy as np import torch device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print('device:', device) model = RealESRGAN(device, scale=2) model.load_weights('weights/RealESRGAN_x2.pth') def proc_img(path_to_image, result_image_path): image = Image.open(path_to_image).convert('RGB') sr_image = model.predict(np.array(image)) sr_image.save(result_image_path) 
from glob import glob from tqdm.notebook import tqdm input_folder = '/content/ds_mnetwork-snapshot-000160.pkl_sg2_norm_b0_0_0_0_0.5_1_0.7' output_folder = f'/content/{input_folder.split('/')[-1]}_upscaled' imgs = glob(f'/{input_folder}/**/*', recursive=True) for im in tqdm(imgs): proc_img(im,f'{output_folder}/{im.split('/')[-1]}') !zip -qq /content/{output_folder}.zip /content/{output_folder}/**/* ###Output _____no_output_____
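###Markdown The three doBlend variants above all do the same thing: linearly mix the two synthesis state dicts layer by layer, while keeping the affine weights from the original generator so the colours stay close to it. Below is a reusable sketch of that idea written against the stylegan3 'L<idx>_...' parameter naming used above; it is an illustrative refactor added here, not code from the original notebook, and it assumes both generators were built from the same config. ###Code
import copy

def blend_generators(G_base, G_tuned, layer_weight, keep_base_affine=True):
    """Return a copy of G_base whose synthesis weights are a per-layer linear mix:
    weight 0 keeps G_base, weight 1 takes G_tuned. layer_weight maps the integer
    after the leading 'L' in stylegan3 parameter names to a float in [0, 1]."""
    G_out = copy.deepcopy(G_base)
    base_sd = G_base.synthesis.state_dict()
    tuned_sd = G_tuned.synthesis.state_dict()
    mixed = dict(tuned_sd)
    for key in base_sd.keys():
        if not key.startswith('L'):
            continue  # skip the input/Fourier-feature parameters, as doBlend does
        w = layer_weight.get(int(key.split('_')[0][1:]), 0.0)
        if keep_base_affine and 'affine' in key:
            w = 0.0  # keep the base generator's affine weights so colours stay close to it
        mixed[key] = tuned_sd[key] * w + base_sd[key] * (1.0 - w)
    G_out.synthesis.load_state_dict(mixed)
    return G_out

# e.g. with the mask from above: G_new = blend_generators(G, G_tuned, dict(enumerate(blend)))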
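###Markdown Note that the batch-upscaling cell above nests single quotes inside f-strings (a SyntaxError on Python versions before 3.12, which is what Colab shipped at the time) and never creates output_folder before proc_img tries to save into it. A minimal corrected sketch of that loop is below; it assumes proc_img and the same input_folder from the cells above. ###Code
import os
from glob import glob
from tqdm.notebook import tqdm

# assumes proc_img() and the RealESRGAN model from the cells above are already defined
input_folder = '/content/ds_mnetwork-snapshot-000160.pkl_sg2_norm_b0_0_0_0_0.5_1_0.7'
out_name = input_folder.split('/')[-1] + '_upscaled'   # build the name outside the f-string to avoid nested quotes
output_folder = f'/content/{out_name}'
os.makedirs(output_folder, exist_ok=True)              # proc_img writes here, so the folder has to exist

imgs = glob(f'{input_folder}/**/*', recursive=True)    # no stray leading '/'
for im in tqdm(imgs):
    proc_img(im, f'{output_folder}/{os.path.basename(im)}')

# afterwards, archive with e.g.:  !zip -qq {output_folder}.zip {output_folder}/*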
tsne_notebook.ipynb
###Markdown Additional settings ###Code OXFORD_PATH = os.path.join('..','data','oxbuild_images_zipped','oxbuild_images') PARIS_PATH = os.path.join('..','data','paris_zipped','paris') PARIS_EMBEDS = os.path.join('..','outputs','paris_embed') OXFORD_EMBEDS = os.path.join('..','outputs','oxford_embed') WHICH_DATASET = 'oxford' # 'oxford' or 'paris' OUTPUT_PATH = '../outputs/tsne_graphs/{}'.format(WHICH_DATASET) PCA_LEVEL = 256 ITERATIONS = 5000 OXFORD_PATH_TXT = os.path.join(OXFORD_PATH, 'gt_files_170407') PARIS_PATH_TXT = os.path.join(PARIS_PATH, 'paris_120310') PARIS_LABELS = ['defense','eiffel','invalides','louvre','moulinrouge','museedorsay','notredame','pantheon','pompidou','sacrecoeur','triomphe'] OXFORD_LABELS = ['all_souls','ashmolean','balliol','bodleian','christ_church','cornmarket','hertford','keble','magdalen','pitt_rivers','radcliffe_camera'] NUMBER_OF_LABELS = 11 # keep this as the same as the number of types of labels in the set. Both paris and oxford have 11 different types IMAGES_PER_LABEL = 5 blacklist = ["paris_louvre_000136.jpg", "paris_louvre_000146.jpg", "paris_moulinrouge_000422.jpg", "paris_museedorsay_001059.jpg", "paris_notredame_000188.jpg", "paris_pantheon_000284.jpg", "paris_pantheon_000960.jpg", "paris_pantheon_000974.jpg", "paris_pompidou_000195.jpg", "paris_pompidou_000196.jpg", "paris_pompidou_000201.jpg", "paris_pompidou_000467.jpg", "paris_pompidou_000640.jpg", "paris_sacrecoeur_000299.jpg", "paris_sacrecoeur_000330.jpg", "paris_sacrecoeur_000353.jpg", "paris_triomphe_000662.jpg", "paris_triomphe_000833.jpg", "paris_triomphe_000863.jpg", "paris_triomphe_000867.jpg",] current_time = datetime.datetime.now().strftime("%Y%m%d-%H%M%S") if WHICH_DATASET == 'paris': label_list = PARIS_LABELS labels_dictionary, junk_dictionary = emb.images_with_labels(PARIS_PATH_TXT, PARIS_LABELS) path2txt = PARIS_PATH_TXT embed_path = PARIS_EMBEDS elif WHICH_DATASET == 'oxford': label_list = OXFORD_LABELS labels_dictionary, junk_dictionary = emb.images_with_labels(OXFORD_PATH_TXT, OXFORD_LABELS) path2txt = OXFORD_PATH_TXT embed_path = OXFORD_EMBEDS else: raise ValueError('WHICH_DATASET has an invalid string.') ###Output _____no_output_____ ###Markdown Apply PCA ###Code def apply_pca(output_dim, embedding_path, whitening = True): PCA_obj = PCA(output_dim, whiten = whitening) big_list = [] name_list = [] for dirname, _, filenames in os.walk(embedding_path): for filename in filenames: npy_path = os.path.join(dirname, filename) embedding = np.load(npy_path) big_list.append(embedding) filename = filename.replace('.npy','.jpg') name_list.append(filename) big_list = tf.convert_to_tensor(big_list) result = PCA_obj.fit_transform(big_list) result = tf.convert_to_tensor(result) return result, name_list def images_with_labels_for_tsne(path2txt, label_list, filetype='jpg'): path2txt = path2txt#.decode() rev_label_dict = {} for i in range(len(label_list)): label = label_list[i]#.decode() query_name = label+'_1' query = query_name+'_query.txt' good = query_name+'_good.txt' ok = query_name+'_ok.txt' junk = query_name+'_junk.txt' ### get the query image path from the _query.txt with open(os.path.join(path2txt,query)) as file: contents = file.read().split(' ') query_image_name = contents[0].replace('oxc1_','')+"."+filetype #remove the oxc1_ part which exists in the oxford query txt list_of_good = pipe.generate_img_list(path2txt, good) list_of_ok = pipe.generate_img_list(path2txt, ok) list_of_junk = pipe.generate_img_list(path2txt, junk) tmp_list = list_of_good + list_of_ok for img_name in tmp_list: 
rev_label_dict[img_name] = i return rev_label_dict def create_label_list_for_tsne(name_list, rev_label_dict): label_list = [] for i in name_list: try: label_list.append(rev_label_dict[i]) except: label_list.append(11) return label_list ###Output _____no_output_____ ###Markdown Perform t-SNE ###Code # Perform pca result, name_list = apply_pca(PCA_LEVEL,embed_path,whitening=False) # Get labels, and delete junk images reverse_dictionary = images_with_labels_for_tsne(path2txt, label_list) label_index_list = np.asarray(create_label_list_for_tsne(name_list, reverse_dictionary)) junk_indices = np.where(label_index_list == 11) label_index_list_2 = np.delete(label_index_list, junk_indices, 0) result_2 = np.delete(np.asarray(result), junk_indices, 0) # perform TSNE tsne = TSNE(verbose=1, n_iter=ITERATIONS) tsne_results = tsne.fit_transform(result_2) x = np.asarray(tsne_results[:, 0]) y = np.asarray(tsne_results[:, 1]) # plot plt.figure(figsize=(16,10)) plt.scatter(x, y, c=label_index_list_2, cmap=plt.cm.get_cmap("jet", len(label_list))) plt.colorbar(ticks=range(len(label_list))) plt.title('{} t-SNE graph'.format(WHICH_DATASET)) plt.savefig('{}/{}_{}_{}'.format(OUTPUT_PATH,current_time,PCA_LEVEL,ITERATIONS)) plt.show() ###Output _____no_output_____
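###Markdown For reference, here is a minimal self-contained sketch of the same PCA → t-SNE → scatter pipeline on synthetic blobs, so the expected array shapes are easy to check without the Oxford/Paris embeddings; the blob data, the 64-dimensional PCA and the colour map are placeholders rather than settings from the experiments above. Note that newer scikit-learn releases rename TSNE's n_iter argument to max_iter. ###Code
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# synthetic stand-in for the stacked .npy embeddings: 300 vectors, 512-d, 3 classes
embeddings, labels = make_blobs(n_samples=300, n_features=512, centers=3, random_state=0)

reduced = PCA(n_components=64, whiten=False).fit_transform(embeddings)  # same role as apply_pca above
coords = TSNE(n_components=2, verbose=1).fit_transform(reduced)         # shape (300, 2)

plt.figure(figsize=(8, 5))
plt.scatter(coords[:, 0], coords[:, 1], c=labels, cmap='jet')
plt.colorbar(ticks=range(3))
plt.title('toy t-SNE graph')
plt.show()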
Lesson06/Exercise02.ipynb
###Markdown Exercise 2: Generate the Feature Importance of the Target Variable and Carry Out EDA ###Code import numpy as np import pandas as pd import seaborn as sns import time import re import os import matplotlib.pyplot as plt sns.set(style="ticks") # read the downloaded input data (marketing data) df = pd.read_csv('https://raw.githubusercontent.com/TrainingByPackt/Big-Data-Analysis-with-Python/master/Lesson07/Dataset/bank.csv', sep=';') df['y'].replace(['yes','no'],[1,0],inplace=True) df['default'].replace(['yes','no'],[1,0],inplace=True) df['housing'].replace(['yes','no'],[1,0],inplace=True) df['loan'].replace(['yes','no'],[1,0],inplace=True) corr_df = df.corr() sns.heatmap(corr_df, xticklabels=corr_df.columns.values, yticklabels=corr_df.columns.values, annot = True, annot_kws={'size':12}) heat_map=plt.gcf(); heat_map.set_size_inches(10,5) plt.xticks(fontsize=10); plt.yticks(fontsize=10); plt.show() pip install boruta --upgrade # import DecisionTreeClassifier from sklearn and # BorutaPy from boruta import numpy as np from sklearn.ensemble import RandomForestClassifier from boruta import BorutaPy import boruta # transform all categorical data types to integers (hot-encoding) for col_name in df.columns: if(df[col_name].dtype == 'object'): df[col_name]= df[col_name].astype('category') df[col_name] = df[col_name].cat.codes # generate separate dataframes for IVs and DV (target variable) X = df.drop(['y'], axis=1).values Y = df['y'].values # build RandomForestClassifier, Boruta models and # related parameter rfc = RandomForestClassifier(n_estimators=200, n_jobs=4, class_weight='balanced', max_depth=6) boruta_selector = BorutaPy(rfc, n_estimators='auto', verbose=2) n_train = len(X) # fit Boruta algorithm boruta_selector.fit(X, Y) # check ranking of features feature_df = pd.DataFrame(df.drop(['y'], axis=1).columns.tolist(), columns=['features']) feature_df['rank']=boruta_selector.ranking_ feature_df = feature_df.sort_values('rank', ascending=True).reset_index(drop=True) sns.barplot(x='rank',y='features',data=feature_df) feature_df ###Output _____no_output_____
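###Markdown A smaller, self-contained sketch of the same Boruta workflow on synthetic data is shown below; it can be handy for checking the BorutaPy API (support_ and ranking_) without downloading the bank-marketing csv. The make_classification data and all hyperparameters here are toy choices, not values from the exercise, and depending on your numpy version you may need the upgraded boruta installed above. ###Code
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy

# toy stand-in for the bank-marketing matrix: 5 informative columns out of 20
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           n_redundant=2, random_state=42)

rf = RandomForestClassifier(n_estimators=200, n_jobs=-1,
                            class_weight='balanced', max_depth=6)
selector = BorutaPy(rf, n_estimators='auto', verbose=0, random_state=42)
selector.fit(X, y)   # BorutaPy expects numpy arrays, hence the .values calls in the cell above

print('confirmed feature indices:', np.where(selector.support_)[0])
print('rankings:', selector.ranking_)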
09 count the num of digits in a num.ipynb
###Markdown All the IPython Notebooks in this example series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/90_Python_Examples)** ###Code # Here is source code of the Python Program to ... # The program output is also shown below. n=int(input("Enter number:")) count=0 while(n>0): count=count+1 n=n//10 print("The number of digits in the number are:",count) ''' >>Output/Runtime Test Cases Case 1: Enter number:123 The number of digits in the number are: 3 Case 2: Enter number:1892 The number of digits in the number are: 4 ''' ###Output Enter number:96587245 The number of digits in the number are: 8
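###Markdown The loop above reports 0 digits for the input 0 and for negative numbers, since the while condition is never entered in those cases. A small variant that covers them is sketched below as an addition for illustration; it is not part of the original example. ###Code
# Variant of the loop above that also covers 0 and negative inputs
n = int(input("Enter number:"))
n = abs(n)
count = 1 if n == 0 else 0
while n > 0:
    count = count + 1
    n = n // 10
print("The number of digits in the number are:", count)

# equivalent one-liner for comparison:
# print(len(str(abs(int(input("Enter number:"))))))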
keras_triplet_loss.ipynb
###Markdown Loading and pre-processing the MNIST dataset ###Code (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train = np.expand_dims(x_train,-1) x_test = np.expand_dims(x_test,-1) assert(x_train.shape[1:] == (28,28,1)) assert(x_test.shape[1:] == (28,28,1)) x_train = x_train.astype('float32') x_test = x_test.astype('float32') x_train /= 255 x_test /= 255 assert(y_train.dtype == np.uint8) assert(y_test.dtype == np.uint8) ###Output _____no_output_____ ###Markdown Building a simple Convolutional Neural Network with Triplet Loss ###Code batch_size = 64 epochs = 20 learning_rate = 1e-4 embedding_size = 64 # default in Tensorflow K.set_image_data_format("channels_last") def keras_batch_hard_triplet_loss(labels, y_pred): # As omoindrot's loss functions expects the labels to have shape (batch_size,), labels are flattaned. # Before flattening, they have shape (batch_size,1). labels = K.flatten(labels) return batch_hard_triplet_loss(labels, y_pred, margin = 0.5) model = Sequential() model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28,28,1))) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Flatten()) # Last layer is not activated. model.add(Dense(embedding_size, activation='linear')) # Normalizing the created embeddings model.add(Lambda(lambda x: K.l2_normalize(x,axis=1))) model.compile(loss=keras_batch_hard_triplet_loss, optimizer=keras.optimizers.Adam(learning_rate)) print(model.summary()) ###Output Model: "sequential_2" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d_3 (Conv2D) (None, 26, 26, 32) 320 _________________________________________________________________ max_pooling2d_3 (MaxPooling2 (None, 13, 13, 32) 0 _________________________________________________________________ conv2d_4 (Conv2D) (None, 11, 11, 64) 18496 _________________________________________________________________ max_pooling2d_4 (MaxPooling2 (None, 5, 5, 64) 0 _________________________________________________________________ dropout_2 (Dropout) (None, 5, 5, 64) 0 _________________________________________________________________ flatten_2 (Flatten) (None, 1600) 0 _________________________________________________________________ dense_2 (Dense) (None, 64) 102464 _________________________________________________________________ lambda_2 (Lambda) (None, 64) 0 ================================================================= Total params: 121,280 Trainable params: 121,280 Non-trainable params: 0 _________________________________________________________________ None ###Markdown Training the model ###Code model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)) ###Output Train on 60000 samples, validate on 10000 samples Epoch 1/20 60000/60000 [==============================] - 9s 158us/step - loss: 0.5390 - val_loss: 0.5005 Epoch 2/20 60000/60000 [==============================] - 9s 152us/step - loss: 0.5090 - val_loss: 0.4770 Epoch 3/20 60000/60000 [==============================] - 9s 151us/step - loss: 0.4137 - val_loss: 0.1964 Epoch 4/20 60000/60000 [==============================] - 9s 149us/step - loss: 0.2629 - val_loss: 0.1516 Epoch 5/20 60000/60000 [==============================] - 9s 151us/step - loss: 0.2124 - val_loss: 0.1299 Epoch 6/20 60000/60000 [==============================] - 9s 
150us/step - loss: 0.1864 - val_loss: 0.1178 Epoch 7/20 60000/60000 [==============================] - 9s 152us/step - loss: 0.1670 - val_loss: 0.1094 Epoch 8/20 60000/60000 [==============================] - 9s 152us/step - loss: 0.1528 - val_loss: 0.1011 Epoch 9/20 60000/60000 [==============================] - 9s 151us/step - loss: 0.1391 - val_loss: 0.0976 Epoch 10/20 60000/60000 [==============================] - 9s 151us/step - loss: 0.1339 - val_loss: 0.0952 Epoch 11/20 60000/60000 [==============================] - 9s 151us/step - loss: 0.1258 - val_loss: 0.0881 Epoch 12/20 60000/60000 [==============================] - 9s 152us/step - loss: 0.1187 - val_loss: 0.0863 Epoch 13/20 60000/60000 [==============================] - 9s 154us/step - loss: 0.1119 - val_loss: 0.0841 Epoch 14/20 60000/60000 [==============================] - 9s 152us/step - loss: 0.1085 - val_loss: 0.0823 Epoch 15/20 60000/60000 [==============================] - 9s 152us/step - loss: 0.1043 - val_loss: 0.0809 Epoch 16/20 60000/60000 [==============================] - 9s 150us/step - loss: 0.1014 - val_loss: 0.0785 Epoch 17/20 60000/60000 [==============================] - 9s 151us/step - loss: 0.0965 - val_loss: 0.0757 Epoch 18/20 60000/60000 [==============================] - 9s 151us/step - loss: 0.0941 - val_loss: 0.0772 Epoch 19/20 60000/60000 [==============================] - 9s 152us/step - loss: 0.0912 - val_loss: 0.0749 Epoch 20/20 60000/60000 [==============================] - 9s 151us/step - loss: 0.0897 - val_loss: 0.0721
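###Markdown The validation loss above only tracks the triplet objective itself. One quick way to check whether the 64-d embeddings are actually useful is a k-nearest-neighbour classifier fitted on the training embeddings, sketched below; it reuses model, x_train/x_test and the labels from the cells above, and the batch size and k=5 are arbitrary choices, not part of the original notebook. ###Code
from sklearn.neighbors import KNeighborsClassifier

# The network's last layers are the 64-d Dense plus the L2-normalise Lambda,
# so predict() returns embeddings rather than class scores.
train_emb = model.predict(x_train, batch_size=256)
test_emb = model.predict(x_test, batch_size=256)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(train_emb, y_train)
print('5-NN accuracy in embedding space:', knn.score(test_emb, y_test))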
kaggle_ubiquant/notebooks/20220218_colab.ipynb
###Markdown Top ###Code # google colab preliminaries from google.colab import drive drive.mount('/content/gdrive') drive_path = "/content/gdrive/MyDrive/Career/ML Study/kaggle_ubiquant/" # !pip install git+https://github.com/dennischenfeng/kaggle-ubiquant.git --no-cache-dir --ignore-installed gpu_info = !nvidia-smi gpu_info = '\n'.join(gpu_info) if gpu_info.find('failed') >= 0: print('Not connected to a GPU') else: print(gpu_info) from psutil import virtual_memory ram_gb = virtual_memory().total / 1e9 print('Your runtime has {:.1f} gigabytes of available RAM\n'.format(ram_gb)) if ram_gb < 20: print('Not using a high-RAM runtime') else: print('You are using a high-RAM runtime!') # git clone the repo !git clone https://github.com/dennischenfeng/kaggle-ubiquant.git # !pip install -U pip !pip install kaggle-ubiquant/. import sys sys.path.insert(0, '/content/kaggle-ubiquant') import kaggle_ubiquant assert kaggle_ubiquant.__file__ == "/content/kaggle-ubiquant/kaggle_ubiquant/__init__.py" # run this cell if want to git pull any changes %%bash cd kaggle-ubiquant git config --global user.email "[email protected]" git config --global user.name "Dennis" git stash git pull %load_ext autoreload %autoreload from kaggle_ubiquant.dataset import generate_dataset, DatasetConfig, Dataset from kaggle_ubiquant.model import generate_model, ModelConfig from kaggle_ubiquant.train import training_run import pandas as pd import numpy as np from typing import Iterable, Dict, Tuple, Callable, Optional from scipy.stats import pearsonr import wandb import dataclasses import optuna import pathlib ROOT_DIR = pathlib.Path(drive_path) ###Output _____no_output_____ ###Markdown Data ###Code %%time # df_large = pd.read_csv(ROOT_DIR / 'data/train.csv') df_small = pd.read_csv(ROOT_DIR / 'data/train_small.csv') ###Output CPU times: user 20.4 s, sys: 2.17 s, total: 22.5 s Wall time: 26 s ###Markdown Try using git clone instead of pip installWill make code repo modifications easier to roll out ###Code # TODO: continue here !git clone https://github.com/dennischenfeng/kaggle-ubiquant.git !pip install -U pip # !pip install -q --pre poetry # !poetry --version %%bash cd kaggle-ubiquant pip install kaggle-ubiquant/. import optuna import sys sys.path.insert(0, '/content/kaggle-ubiquant') import kaggle_ubiquant import kaggle_ubiquant kaggle_ubiquant.__file__ ###Output _____no_output_____ ###Markdown End result: ###Code !git clone https://github.com/dennischenfeng/kaggle-ubiquant.git # !pip install -U pip !pip install kaggle-ubiquant/. 
import sys sys.path.insert(0, '/content/kaggle-ubiquant') import kaggle_ubiquant assert kaggle_ubiquant.__file__ == "/content/kaggle-ubiquant/kaggle_ubiquant/__init__.py" ###Output _____no_output_____ ###Markdown Next: test that a change to my repo will be picked up here in colab notebook Added 'git clone' code to top of this notebook ###Code ###Output _____no_output_____ ###Markdown Test hparamSet1 on small dataset ###Code num_train_iid = len(pd.unique(df_small.investment_id)) print(num_train_iid) # need at least one time_id for test start_test_time_id = max(df_small.time_id) - 1 print(start_test_time_id) use_investment_id = False learning_rate = 0.05 max_depth = 4 min_child_weight = 1 gamma = 0.2 colsample_bytree = 0.7 tree_method = 'gpu_hist' dc = DatasetConfig( num_train_iid, num_train_iid, num_train_iid, start_test_time_id=start_test_time_id, use_investment_id=use_investment_id, ) mc = ModelConfig(model_kwargs=dict( learning_rate=learning_rate, max_depth=max_depth, min_child_weight=min_child_weight, gamma=gamma, colsample_bytree=colsample_bytree, tree_method=tree_method, )) model_small, r = training_run(df_small, dc, mc) print(r) ###Output 100%|██████████| 256760/256760 [00:08<00:00, 31051.26it/s] /content/kaggle-ubiquant/kaggle_ubiquant/dataset.py:65: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df['target_lag1'] = compute_lag1(df, lag_default_value=dc.lag_default_value) 100%|██████████| 570/570 [00:00<00:00, 33423.55it/s] ###Markdown Train XGBoost (hparamSet1) on full dataset ###Code num_train_iid = len(pd.unique(df_large.investment_id)) print(num_train_iid) # need at least one time_id for test start_test_time_id = max(df_large.time_id) - 1 print(start_test_time_id) use_investment_id = False learning_rate = 0.05 max_depth = 4 min_child_weight = 1 gamma = 0.2 colsample_bytree = 0.7 tree_method = 'gpu_hist' dc = DatasetConfig( num_train_iid, num_train_iid, num_train_iid, start_test_time_id=start_test_time_id, use_investment_id=use_investment_id, ) mc = ModelConfig(model_kwargs=dict( learning_rate=learning_rate, max_depth=max_depth, min_child_weight=min_child_weight, gamma=gamma, colsample_bytree=colsample_bytree, tree_method=tree_method, )) model, r = training_run(df_large, dc, mc, wandb_project=None) ###Output 100%|██████████| 3134540/3134540 [02:08<00:00, 24340.69it/s] /content/kaggle-ubiquant/kaggle_ubiquant/dataset.py:64: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df['target_lag1'] = compute_lag1(df, lag_default_value=dc.lag_default_value) 100%|██████████| 6870/6870 [00:00<00:00, 42298.17it/s] ###Markdown Colab session crashed b/c ran out of memory Shrink size of dataset to fit in memoryFull dataset crashes the notebook b/c memory usage. I think its b/c GPU memory, which is only 16 GB, whereas full dataset is 17 GB. 
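###Markdown As an alternative (or complement) to the investment_id subsampling tried below, the feature columns can be parsed as float32 when reading train.csv, which roughly halves the DataFrame's memory footprint; XGBoost stores its training matrix as float32 internally anyway. This is only a sketch added here, assuming the competition's f_ column naming, and was not something tried in this notebook. ###Code
import numpy as np
import pandas as pd

csv_path = ROOT_DIR / 'data/train.csv'
# Parse the feature/target columns directly as float32 instead of float64,
# which roughly halves the in-memory size without dropping any investment_ids.
header = pd.read_csv(csv_path, nrows=0)
dtypes = {c: np.float32 for c in header.columns if c.startswith('f_') or c == 'target'}
df_large = pd.read_csv(csv_path, dtype=dtypes)
print(f'{df_large.memory_usage(deep=True).sum() / 1e9:.1f} GB in memory')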
Try shrinking to 15 GB 15 GB ###Code all_investment_ids = pd.unique(df_large.investment_id) n_keep = int((15 / 17.27) * len(all_investment_ids)) keep_iids = np.random.choice(all_investment_ids, n_keep, replace=False) df_15GB = df_large[df_large.investment_id.isin(keep_iids)] df_15GB.to_csv(f'{drive_path}/data/train_15GB.csv') ###Output _____no_output_____ ###Markdown 12 GB ###Code import gc; gc.collect() all_investment_ids = pd.unique(df_large.investment_id) n_keep = int((12 / 17.27) * len(all_investment_ids)) keep_iids = np.random.choice(all_investment_ids, n_keep, replace=False) df_12GB = df_large[df_large.investment_id.isin(keep_iids)] df_12GB.to_csv(f'{drive_path}/train_12GB.csv') ###Output _____no_output_____ ###Markdown (2nd try) Train XGBoost (hparamSet1) on full dataset ###Code import gc; gc.collect() df = pd.read_csv(ROOT_DIR / 'data/train_12GB.csv') num_train_iid = len(pd.unique(df.investment_id)) print(num_train_iid) # need at least one time_id for test start_test_time_id = max(df.time_id) - 1 print(start_test_time_id) use_investment_id = False learning_rate = 0.05 max_depth = 4 min_child_weight = 1 gamma = 0.2 colsample_bytree = 0.7 tree_method = 'gpu_hist' dc = DatasetConfig( num_train_iid, num_train_iid, num_train_iid, start_test_time_id=start_test_time_id, use_investment_id=use_investment_id, ) mc = ModelConfig(model_kwargs=dict( learning_rate=learning_rate, max_depth=max_depth, min_child_weight=min_child_weight, gamma=gamma, colsample_bytree=colsample_bytree, tree_method=tree_method, )) model, r = training_run(df, dc, mc, wandb_project=None) ###Output 100%|██████████| 2180462/2180462 [01:55<00:00, 18873.77it/s] /content/kaggle-ubiquant/kaggle_ubiquant/dataset.py:65: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. 
Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df['target_lag1'] = compute_lag1(df, lag_default_value=dc.lag_default_value) 100%|██████████| 4797/4797 [00:00<00:00, 30977.89it/s] ###Markdown ![image.png](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABM4AAAMsCAYAAACsou5SAAAKUGlDQ1BJQ0MgUHJvZmlsZQAASImVlgdQVFcXgO972xtt6XXpvbcFBJbepVdRWXaXpa7LUkVERYIRiAUVEbChoSoohiKxIooVBQULahYJAkoMFkAFJQ9JNPkz8//zn5lzz3fPO3PuuWXmHQBI95h8fjIsBkAKL10Q5OFMi4iMouF+BUQgBaSBCSAzWWl8p4AAX4DIX/afMjMIoEV713Ax17+//1eRYHPSWABA0QjnstNYKQg/QHgViy9IBwCmI6yelc5f5MUYSQFSIML8ReYuccEixy5x+ZeYkCAXhI8BgCczmQIuAMQOxE/LZHGRPMTF/CY8dgIPABIeYQdWPJONsCfCBikpaxYZiQM6SDyyDqkKYXrs33Jy/5E/9mt+JpP7lZf29UXUnDm8QJqHr5m1iVVAIM2FmZwQK2Cmc9g0Q1o4U5DCEfyfh/a/JCU546+1F2+GzOGFBiPWGVFFZOQAHggENOABfIEZsEZu2QoEfPG4ACZIBgkgFggQSkci2YjXENFwZC4AKYgHKTadk714PsBlDX+tIIEbn05zQm6eQ/PisYwMaGYmZiYALL6jpRI+zC+9j46yb768JqQcO+QMb33zhaoDcPIJ8iTeffOpY5DzHwPgbDIrQ5C55EMvDhjkhYoCSSAHlIE60EFqNEN2YQcYwA14A38QAiLBKsAC8UjNApAFcsEmUAiKwQ6wB1SAg+AIqAPHQQtoB2fARXAF3AB3wAAYAkIwCl6CKTAD5iAIwkEUiArJQSqQJqQPmUF0yAFyg3yhICgSioG4EA/KgHKhzVAxVApVQIeheugkdBq6CF2D+qCH0DA0Ab2BPsIomAxLwkqwFmwM02En2AcOgVfCXDgVzoEL4G1wOVwNH4Pb4IvwDXgAFsIv4WkUQJFQ0ihVlCGKjnJB+aOiUHEoASoPVYQqQ1WjmlCdqB7UXZQQNYn6gMaiqWga2hBth/ZEh6JZ6FR0HroEXYGuQ7ehu9F30cPoKfRnDAWjiNHH2GK8MBEYLiYLU4gpw9RgWjGXMQOYUcwMFouVxmpjrbGe2EhsInYdtgS7H9uMvYDtw45gp3E4nBxOH2eP88cxcem4Qtw+3DHceVw/bhT3Hk/Cq+DN8O74KDwPn48vwzfgz+H78WP4OYIYQZNgS/AnsAlrCdsJRwmdhNuEUcIcUZyoTbQnhhATiZuI5cQm4mXiY+JbEomkRrIhBZISSBtJ5aQTpKukYdIHsgRZj+xCjiZnkLeRa8kXyA/JbykUihaFQYmipFO2UeoplyhPKe9FqCJGIl4ibJENIpUibSL9Iq9ECaKaok6iq0RzRMtET4neFp0UI4hpibmIMcXyxCrFTovdF5sWp4qbivuLp4iXiDeIXxMfl8BJaEm4SbAlCiSOSFySGKGiqOpUFyqLupl6lHqZOiqJldSW9JJMlCyWPC7ZKzklJSFlIRUmlS1VKXVWSiiNktaS9pJOlt4u3SI9KP1RRknGSYYjs1WmSaZfZlZWQZYhy5Etkm2WHZD9KEeTc5NLktsp1y73RB4trycfKJ8lf0D+svykgqSCnQJLoUihReGRIqyopxikuE7xiOJNxWklZSUPJb7SPqVLSpPK0soM5UTl3crnlCdUqCoOKgkqu1XOq7ygSdGcaMm0clo3bUpVUdVTNUP1sGqv6pyatlqoWr5as9oTdaI6XT1Ofbd6l/qUhoqGn0auRqPGI02CJl0zXnOvZo/mrJa2VrjWFq12rXFtWW0v7RztRu3HOhQdR51UnWqde7pYXbpuku5+3Tt6sJ6lXrxepd5tfVjfSj9Bf79+nwHGwMaAZ1BtcN+QbOhkmGnYaDhsJG3ka5Rv1G70yljDOMp4p3GP8WcTS5Nkk6MmQ6YSpt6m+aadpm/M9MxYZpVm98wp5u7mG8w7zF9b6FtwLA5YPLCkWvpZbrHssvxkZW0lsGqymrDWsI6xrrK+T5ekB9BL6FdtMDbONhtszth8sLWyTbdtsf3dztAuya7BbnyZ9jLOsqPLRuzV7Jn2h+2FDjSHGIdDDkJHVUemY7XjM4Y6g82oYYw56TolOh1zeuVs4ixwbnWedbF1We9ywRXl6uFa5NrrJuEW6lbh9tRdzZ3r3ug+5WHpsc7jgifG08dzp+d9LyUvlle915S3tfd6724fsk+wT4XPM189X4Fvpx/s5+23y+/xcs3lvOXt/sDfy3+X/5MA7YDUgJ8DsYEBgZWBz4NMg3KDeoKpwauDG4JnQpxDtocMheqEZoR2hYmGRYfVh82Gu4aXhgsjjCPWR9yIlI9MiOyIwkWFRdVETa9wW7FnxWi0ZXRh9OBK7ZXZK6+tkl+VvOrsatHVzNWnYjAx4TENMfNMf2Y1czrWK7YqdorlwtrLeslmsHezJzj2nFLOWJx9XGncONeeu4s7Ee8YXxY/meCSUJHwOtEz8WDibJJ/Um3SQnJ4cnMKPiUm5TRPgpfE616jvCZ7TR9fn1/IF6bapu5JnRL4CGrSoLSVaR3pksgP+2aGTsZ3GcOZDpmVme+zwrJOZYtn87JvrtVbu3XtWI57zo/r0OtY67pyVXM35Q6vd1p/OA/Ki83r2qC+oWDD6EaPjXWbiJuSNt3KN8kvzX+3OXxzZ4FSwcaCke88vmssFCkUFN7fYrfl4Pfo7xO+791qvnXf1s9F7KLrxSbFZcXzJayS6z+Y/lD+w8K2uG292622H9iB3cHbMbjTcWddqXhpTunILr9dbbtpu4t2v9uzes+1Mouyg3uJezP2Cst9yzv2aezbsW++Ir5ioNK5srlKsWpr1ex+9v7+A4wDTQeVDhYf/Hgo4dCDwx6H26q1qsuOYI9kHnl+NOxoz4/0H+tr5GuKaz7V8mqFdUF13fXW9fUNig3bG+HGjMaJY9HH7hx3Pd7RZNh0uFm6ufgEOJFx4sXJmJODLT4tXafop5p+0vypqpXaWtQGta1tm2qPbxd2RHb0nfY+3dVp19n6s9HPtWdUz1SelTq7/RzxXMG5hfM556cv8C9MXuReHOla3TV0KeLSve7A7t7LPpevXnG/cqnHqef8VfurZ67ZXjt9nX69/YbVjbabljdbb1neau216m27bX27447Nnc6+ZX3n+h37L951vXvlnte9GwPLB/oGQwcf3I++L3zAfjD+M
Pnh60eZj+aGNj7GPC56Ivak7Kni0+pfdH9pFloJzw67Dt98FvxsaIQ18vLXtF/nRwueU56XjamM1Y+bjZ+ZcJ+482LFi9GX/Jdzk4W/if9W9Urn1U+/M36/ORUxNfpa8HrhTclbube17yzedU0HTD+dSZmZmy16L/e+7gP9Q8/H8I9jc1nzuPnyT7qfOj/7fH68kLKwwGcKmF9aARSicFwcAG9qAaBEAkC9g/RfK5b6vD/7H0jF/Gsn9BeDoaRv3HJ3qRf8IlYA1GwEIAzRAGR6iAGAFsIUxtI8hAGgGfRX/VPS4szNluhj9sLC/CQAOA5i5QEEIzFwHrIwMv/UCiAt/aX+clEIfQBkC5Gcbg9E0v7Vwy31nn/b439a8LWCf9g/ACyA2QElvjgYAAAAVmVYSWZNTQAqAAAACAABh2kABAAAAAEAAAAaAAAAAAADkoYABwAAABIAAABEoAIABAAAAAEAAATOoAMABAAAAAEAAAMsAAAAAEFTQ0lJAAAAU2NyZWVuc2hvdOuEafQAAAHXaVRYdFhNTDpjb20uYWRvYmUueG1wAAAAAAA8eDp4bXBtZXRhIHhtbG5zOng9ImFkb2JlOm5zOm1ldGEvIiB4OnhtcHRrPSJYTVAgQ29yZSA2LjAuMCI+CiAgIDxyZGY6UkRGIHhtbG5zOnJkZj0iaHR0cDovL3d3dy53My5vcmcvMTk5OS8wMi8yMi1yZGYtc3ludGF4LW5zIyI+CiAgICAgIDxyZGY6RGVzY3JpcHRpb24gcmRmOmFib3V0PSIiCiAgICAgICAgICAgIHhtbG5zOmV4aWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20vZXhpZi8xLjAvIj4KICAgICAgICAgPGV4aWY6UGl4ZWxZRGltZW5zaW9uPjgxMjwvZXhpZjpQaXhlbFlEaW1lbnNpb24+CiAgICAgICAgIDxleGlmOlBpeGVsWERpbWVuc2lvbj4xMjMwPC9leGlmOlBpeGVsWERpbWVuc2lvbj4KICAgICAgICAgPGV4aWY6VXNlckNvbW1lbnQ+U2NyZWVuc2hvdDwvZXhpZjpVc2VyQ29tbWVudD4KICAgICAgPC9yZGY6RGVzY3JpcHRpb24+CiAgIDwvcmRmOlJERj4KPC94OnhtcG1ldGE+Cgil044AAEAASURBVHgB7J0HnBRF2ocLwYgYEEURFAUUMCBGzIoBc+RMKGI49ZBTkTMr5iwnZ84KimIO+ImCARPGU1ERBAOIigkz5vDtU/jO1fZ2z3TPziy7O//399vtnu7q6uqnq7ur/vVWVZMePXr86WQiIAIiIAIiIAIiIAIiIAIiIAIiIAIiIAIiIALVCMxT7Zd+iIAIiIAIiIAIiIAIiIAIiIAIiIAIiIAIiIAIeAISzpQRREAEREAEREAEREAEREAEREAEREAEREAERCCGgISzGCjaJAIiIAIiIAIiIAIiIAIiIAIiIAIiIAIiIAISzpQHREAEREAEREAEREAEREAEREAEREAEREAERCCGgISzGCjaJAIiIAIiIAIiIAIiIAIiIAIiIAIiIAIiIAISzpQHREAEREAEREAEREAEREAEREAEREAEREAERCCGgISzGCjaJAIiIAIiIAIiIAIiIAIiIAIiIAIiIAIiIAISzpQHREAEREAEREAEREAEREAEREAEREAEREAERCCGgISzGCjaJAIiIAIiIAIiIAIiIAIiIAIiIAIiIAIiIAISzpQHREAEREAEREAEREAEREAEREAEREAEREAERCCGQLOYbY1u03rrrecWWWQRN2vWLPfKK680uuvTBYmACIiACIiACIiACIiACIiACIiACIiACJSeQEV4nO2///5ur732cn369Ck9wTLFuPTSS7sRI0Y4loWMMMcee2yqsIXi0n4REAEREAEREAEREAEREAEREAEREAEREIE5BCpCOGuIN3vIkCFeCLNl0jUgmh1zzDGuV69efpkUTttFQAREQAREQAREQAREQAREQAREQAREQASyEZBwlo1XnYUeNGiQ++STT/KKZyaarbHGGj4sx8hEQAREQAREQAREQAREQAREQAREQAREQARKQ6Akwtn888/v1l13Xbf++uu7pZZaKlXKmjZt6tZaay23yiqrpApPIOLeZJNNXIsWLVIfkzZgl85d/DWQrjS27LLLuo022qgsaeH8iGb5xLOoaNaQuqGm4aswIiACIiACIiACIiACIiACIiACIiACIlAOAgzp9dhjj/khsliPMwvTpEePHn/GBUizrWPHju6II45wzZs3rxb8xx9/dMOHD3cvvvhibvtVV13l5p13Xvfaa6/5ZdeuXV2TJk38/j/++MNNmzbNXXDBBe7XX3/12/bYYw/f/ZAfF154oT8PAp3Z999/7+655x735JNP2qbE5RVXXOE49uuvv/ZilAVs06aNO/zww70gN888czTEP//803311Vfutttui51IAIEKwWy++eazaHz4Z5991u2www5+2ymnnOI+/vjj3P7arCCQWXdNE9OIj+6Z5mkm0aw2hHWsCIiACIiACIiACIiACIiACIiACIhApRBAEOvbt2+1y33kkUe8JmUbwzBFe5zhXXb88cfXEM04yYILLugOOeQQ16FDBzunM0+ubt26eS8zE80IgGi14oorujPOOMOvsw2RzQzPq1A0Y/vCCy/s9ttvP7f22mtbsExLvNcGDx7su0KaaEYEpKtly5auf//+jrSGxgQDPXv2rCaasX/xxRfPiWb8Dq+N37UxE8vCbpsSzWpDVMeKgAiIgAiIgAiIgAiIgAiIgAiIgAhUKoGtt966xqVH9Z8wTNHC2cyZM90vv/zi8BYbOXKkO+yww9xpp53mJk6c6BOAeHTAAQfUSAzb8ep64okn3HnnnefuvfdeHw8BEbMQrKKGsDVr1ix34403uksuucRNmjTJByGuQw891C2yyCLRQwr+Jq0mzn344YfuuuuucxdffLH3iONg4h4wYIBbcsklfVx0Kd1qq638OulHjTz99NPdzTff7H744Qe/vVz/ouKZPM3KRVrxioAIiIAIiIAIiIAIiIAIiIAIiIAIVBoBevslWdHCGV0rBw4c6OiWOHbsWO95hsAzdOjQXHdLPLfi7IYbbnC33HKLmzp1qnvwwQe9APX777/7oJ07d65xCF0nTzjhBPfMM8+4CRMmuIsuusi9+uqrPhyi2nrrrVfjmHwbOnXqlPNgmz59ujv11FPdc889595880136aWXuscffzwXN2OqYRtuuKFf8o/033HHHe6DDz5w48aNcyeddFLumnOByrAC39Dy3dgwnNZFQAREQAREQAREQAREQAREQAREQAREQAScGzNmTA0MDDcWWhimWbgj6/oyyyzjPcToqmjdHfFAwyMLM4+uMF68s8aPHx9u8gPhv/POO27llVf23Tzp6hnaww8/7ExYs+3Dhg1z3bt39z9XW201L97ZvkLLUGjDWy5qjG+2+eabe6+zVVdd1d19992O8dyw3377rUb6v/32W/fuu++6ONEvGncxv6MTARCHjX1mEwgU
E6+OEQEREAEREAEREAEREAEREAEREAEREIFKIkAPQsy6YyKSoTGFFoYpWjhbc801vWhGl0YzBDMT0GxbdPnFF19EN/nfeLAhnGErrbSSX9o/PMGi9t1333kRq1mzZlUi0jLR3Xl/r7DCCrn9U6ZMya3bCuIfExwstNBCuVlCrTsokxLEGZMBlEM4i4pmTARgopktJZ7F3RFtEwEREAEREAEREAEREAEREAEREAEREIHqBOjNh1AWFcvCUGGYortq7rTTTrlB8OmqSbfNgw8+2DF22E8//RSer9p6q1atqv22H8svv7yt+i6cuR9VK126dAl/+nUmB0A0wz777FO/TPuP7plm4QQGtg3xz7zeTOhjjDXMBDQLa8v27dvbasmWcaIZkXMDTSwz8YylTAREQAREQAREQAREQAREQAREQAREQAREoHQEihbO6KaJ/fzzz35yALorYkwaENdF0++s+ocX11prrWU//RIxzbpCIrpFB9vfZpttaniy7b333rk4Qo80hK3obAi5gH+tvPjii7lNe+yxR27dVnbdddecKGiTHZhnGqIas2uGhvhWauEsSTSz80o8MxJaioAIiIAIiIAIiIAIiIAIiIAIiIAIiEB5CBTdVZMui4sttpgfZJ8ZJxGYWrRo4QfKb9q0ad7U/uMf/3CjR492zz77rPcm6927d857jLHCooawduaZZ7q77rrLffPNN2677bbLjW9G99AXXnjBH4KAdeKJJ/p1vMrOOOOMaFT+9+TJk/1g/gh8CHYM7j9q1CiH+Lflllu69ddf34cjbiYkwJ588km38cYbe0GN2TURyrjmNm3aeCGwUBdVH0mGf3379nWFZs808WzIkCG+++YxxxzjPdEynEZBRUAEREAEREAEREAEREAEREAEREAEREAEEggULZy99NJLDgEJO/roox3jgqUVjxgXDfGLv9CYPfM///lPuCm3jgfWgAEDcr9t5aabbnIch4UzX7Zr186CxC4R4gYPHuwFuxVXXNEdeeSR1cIhml1//fWOscswxmC78cYb3YEHHuh/MzMnf2ZZrt+OybdkRofWrVsXFMJMPEM0i84CkS9+7RMBERABERABERABERABERABERABERABEchPoOiumsxGGc6OaaIZQpN1a4w79YcffuhnoAz3IVLRxRMhKzp7JuE4F7NZhkYX0TvvvDPnEcY+ZkKwcK+99louOPFjtmT9o48+cmeffbb78ssvq21nH950CHLPPfccP3OGhxzi2aeffuqFQnZwPjzYwqlKmVigtmaCWJp4LOyECRPSBFcYERABERABERABERABERABERABERABERCBFASa9OjRY46qlCJwXJD55pvPdymcf/75HcKNjXUWDXvttdd6jzQEspNPPtkPvr/OOuv4iQRefvnlnBBlxzF7ZM+ePf3Ps846y73//vuO2TCZefOVV16pmhDgMwtabYmA17x5c8esm2mNrqXdu3d3CyywgI87OsZaGA+ziZoot+iii+a83fC6o8sqQtqhhx4aHqJ1ERABERABERABERABERABERABERABERCBBkig6K6adq2//PKLCwfbt+2FlnhlPfXUU4WCVduPeMZfPqPLZBbRjLjwckO8y2frrruu69evnx/TDQ+zCy+8MCeaLbvssq5r167+cJuFM19c2icCIiACIiACIiACIiACIiACIiACIiACIlD/CdRaOKv/l1iaFNK1E686rHPnzu7SSy91U6dO9d5tTErAuG3YuHHj/FL/REAEREAEREAEREAEREAEREAEREAEREAEGjaBosc4a9iXnT31CGeMtWa20EILuW7duvlZOU00u//++93YsWMtiJYiIAIiIAIiIAIiIAIiIAIiIAIiIAIiIAINmECdeZwx/tmCCy7ovbTS8KI7ZJs2bXzQr7/+Os0hZQ+DKDZx4kTXZ58+btm2y/ox0WbPnu1n3GTiAMZek4mACIiACIiACIiACIiACIiACIiACIiACDQOArWeHKBxYNBViIAIiIAIiIAIiIAIiIAIiIAIiIAIiIAIiEB1AuqqWZ2HfomACIiACIiACIiACIiACIiACIiACIiACIiAJyDhTBlBBERABERABERABERABERABERABERABERABGIISDiLgaJNIiACIiACIiACIiACIiACIiACIiACIiACIiDhTHlABERABERABERABERABERABERABERABERABGIISDiLgaJNIiACIiACIiACIiACIiACIiACIiACIiACIiDhTHlABERABERABERABERABERABERABERABERABGIISDiLgaJNIiACIiACIiACIiACIiACIiACIiACIiACIiDhTHlABERABERABERABERABERABERABERABERABGIISDiLgaJNIiACIiACIiACIiACIiACIiACIiACIiACIiDhTHlABERABERABERABERABERABERABERABERABGIISDiLgaJNIiACIiACIiACIiACIiACIiACIiACIiACIiDhTHlABERABERABERABERABERABERABERABERABGIINLNtv296t61qWYEEmj65ewVetS5ZBERABERABERABERABERABERABERABJIJyOMsmY32iIAIiIAIiIAIiIAIiIAIiIAIiIAIiIAIVDABCWcVfPN16SIgAiIgAiIgAiIgAiIgAiIgAiIgAiIgAskEmqywwgp/Ju/WHhEQAREQAREQAREQAREQAREQAREQAREQARGoTALNWrduXZlXrqsWAREQAREQAREQAREQAREQAREQAREQAREQgTwE1FUzDxztEgEREAEREAEREAEREAEREAEREAEREAERqFwCEs4q997rykVABERABERABERABERABERABERABERABPIQkHCWB452iYAIiIAIiIAIiIAIiIAIiIAIiIAIiIAIVC4BCWeVe+915SIgAiIgAiIgAiIgAiIgAiIgAiIgAiIgAnkISDjLA0e7REAEREAEREAEREAEREAEREAEREAEREAEKpeAhLPKvfe6chEQAREQAREQAREQAREQAREQAREQAREQgTwEJJzlgaNdIiACIiACIiACIiACIiACIiACIiACIiAClUtAwlnl3ntduQiIgAiIgAiIgAiIgAiIgAiIgAiIgAiIQB4CEs7ywNEuERABERABERABERABERABERABERABERCByiUg4axy772uXAREQAREQAREQAREQAREQAREQAREQAREIA8BCWd54GiXCIiACIiACIiACIiACIiACIiACIiACIhA5RKQcFa5915XLgIiIAIiIAIiIAIiIAIiIAIiIAIiIAIikIeAhLM8cLRLBERABERABERABERABERABERABERABESgcgk0q9xL15WLgAiIgAiIgAiIgAiIgAiIgAiIgAiIQPEElllmGdepU6fiI2gER06ZMsV98sknjeBK4i9Bwlk8F20VAREQAREQAREQAREQAREQAREQAREQgUQCiGZrrLGG22233Vzbtm0TwzXmHTNmzHD33HOPe+2112otni299NKub9++rlu3bpmRjRk
zxg0bNizzcWkOkHCWhpLCiIAIiIAIiIAIiIAIiIAIiIAIiIAIiEBAAE+zShbNQNGuXTvP4Icffqi1cDZkyBCHeFaMIbgh3k2YMKGYw/MeI+EsLx7tFAEREAEREAEREAEREAEREAEREAEREIF4ApXqaRbSQDyrreFlZqLZBRdckCm6rbfe2nv+9erVS8JZJnIKLAIiIAIiIAIiIAIiIAIiIAIiIAIiIAIi0KAIPPLII5nSW0zXziwn0KyaWWgprAiIgAiIgAiIgAiIgAiIgAiIgAiUgMDiiy/ux3IyL5sSRKkoREAEykBAXTXLAFVRioAIiIAIiIAIiIAIiIAIiEBjJ4Dw07JlS/fuu+/W20tt3769+/77790XX3yRmMYWLVq41VZbzTHQ+3vvvefeeust9/PPPyeGr+0OurUNGjTIcV7szz//dAcffHDeaOeff37XuXNnt8IKK7iffvrJM586dWriMQsttJBbddVV3XLLLedmzZrlJk6c6D777LPE8PV9x1prreWaN2/unnrqqfqeVKWvERKQcNYIb6ouSQREQAREQAREQAREQAREQATKQaBp06auZ8+ebosttnBLLrmk++CDD9zpp5+e+lTzzjuva9Kkifvll19SH5M1IILUjjvu6Hr06OHFloceesjdfffdsdH8/e9/9+HCnQhZt99+uxs7dmy4uWTroWiGCPbVV1/ljXv77bd3u+yyi5tnnuodxr755hs3dOhQfw/CCAjLMdHwkydPdv/+97/d77//Hgav03XS1KxZM5+GtOlo1aqV69+/v08nvF588cU6TXOpT8b4Xb/99lveaBdeeGF3xBFH+HAXXnihQzg9+uij/TGPPvqoZ7DDDju41VdfPRcPYu8ff/zhFlxwwdy2+rxinpaffPJJfU6mT5uEs3p/i5RAERABERABERABERABERABEZi7BNq0aeP69OnjVlpppRqCTNqUIZpcddVVPviZZ57ppk2blvbQVOHwStp1112951iaA5iFD3EN+/zzz71HFh5qeDbttdde3lPrlVdeSRNV6jBwNE+zUaNGufvuuy/vsZtuuqmfsZBAzFr4/vvvey8y4lh00UXdcccd54488sicELPuuut60ZDwiCgffviha926tRde8Fg7/PDD3SWXXMLuuWJHHXWUW2WVVdzMmTPdySefnCoNXDdCE/mnIYgshS7q3HPPLRTEi9IIZ4iLCG0IaSacjRs3zl1++eVuxRVXzAlnTz/9tBfarr76akceaAh2zDHH+AH9hw8f7oYNG1avkyzhrF7fHiVOBERABERABERABERABERABOY+AQQchBcMrx+8xvCCKdY4vtQWimZ4cS222GI+nUnn2WCDDfyuCRMm5MQkxJnzzz/fd0HdfffdXamFM4Q5DK82POEK2d577+2DfPnll14kQwzD8Cjbbbfd3AILLOC4DuvCCAMMj74BAwbkvMtOPPFE16FDh5zQ4gM1kH8IZ//4xz98au36G0jS8ybznHPO8aJmXCC62mJ4eOJBWOhZu+KKK0ouRMelq1K3STir1Duv6xYBERABERABERABERABERCBlAQ+/fRTR1e/B0c96CZNnuROOOEE17Fjx1RHI0bttNNOXgSwA+juydhbeM98/fXXttmLXRtttJH3bPv444+9cDVlypTc/nwreLDx98ADD3jvMcSEJMEBrye6jWJ4vJghzHB8v379nHUls335loh0hdK98847e/HK4tluu+386rPPPus93my7LfEysjTCKRSNHn744Vz3TQQxE84Ydw57/fXXc6IZv++//37vsYRgidcbbAsZ5+aaunfv7nm++uqrfqy08Lj11lvPe/i99tprbvbs2V7EIz0Il+PHj3dvv/22Dw7vTp065ZjiLYcgxLhrhFtkkUV8F2AEP7yn6ArcqWMn9+hjjzqETfIPxmyLP/74o1+3f7DffPPN/fhv06dPd88991zs9ZEPyXekg+6MeL098cQTc82Lbdttt3UmpNq1RJd0a73xxhujm/W7jglIOKtj4DqdCIiACIiACIiACIiACIiACDQ0Ao8//rjjrxhDgGHMsdDM24sx0syr69hjj3Urr7xyLhhiy1ZbbeUmT5rsLh56ca47Yi5AZOW6666LbEn+iXiEIdSEwh3b/vvf/3rhjHWEqEJjkKVJt4mHxIkhYBkT4n/yySfn7Aj+I5rABu+06HhrdOFDSCNexCIzvLMQpaw7qG1fYoklbLXG9eZ2BCuIeni0mWcg9wJxiokTLr744pyIt/vuvd0SS7R0a6+9tveeIj1miG4PPvigu/feex3xmcci+/Go4vpNYGPSA+OBNx2edNgXs75wb775Zm4fPMgzZowX17VrV/vpu4EiSCF+8mfGeHynnXZaLl62c01bbrmlz9cjRoywoPVqSRfVwYMHewH41FNPrZE28sZJJ53khVJ2XnrppY7JJ84666xcl+pnnnnGs0BMJW+sueaa7qCDDnKMHWeGEIs4e+CBB/pukzwDsEFADsNZ+Epb/i9XV9qV63pFQAREQAREQAREQAREQAREQATKTuDXX3/1nkIMam728ssvezGIMbuwQw89NCea4dlGN8Z33nnH7+vcpbMbOHCgXy/VP4QULG72TMQns2WXXdZWY5dp043IhbcU14aZGIYgNmnSpNi4EfQYywrPuehkCog+CGsYHmtmzJ6J4VWFpxiG5xzebhhxhtfnN0b+MVYc3VQRzRCpEExNXESkYl/UmJGUa+T8/NnA/whm8803nxszZoy/3yZCkgauPRS3LE5EM7oDIwjm8zZE1CE9sMQbDs817ifp5noR48wYT4t4SRcebIhJxgEvtG7dulnQerUkvVdeeaVLEoVhzn7zIESo5DfbMSbuQJC89tpr/X3hPpx33nneMxBxzOyFF15wjI+25557+vzG77PPPjsXj4Wr1KU8zir1zuu6RUAEREAEREAEREAEREAERKAOCFCJv+OOO7wHDF4sGB4uJprhIWMDmuOdROXfrHfv3g4PIryV2leND0ZXzFIY3SCxpNkNSTPeU0wUkGRZ0w2DTTbZxF8LYs/IkSOTos67nW6GjF+Gvffee9XEv5tvvtkPGo9YRhiuzwQ2xLf//Oc/eeNmJzONYrBmEgcMjyzEJ+4D3SjvvPNOv93+IfAw2D+TLGDcT0RFGDKIPWIVf3j64cXHjKBJ14+oivhjZt1V7TdLrm/jjTf2m8gvNsnC9ddf7y677DLfFXP//ff3XmakwTzuEC9thlWuCWGS/XjHkb66NNKx1FJL1Tgl3oJ0Y01jjIGGGMlYeDxTXJ89SwiezLpKXof1hhtu6AVYZlZlHL/DDjvMi40hX+45E0gQlokY4tKXJl2NLYyEs8Z2R3U9IiACIiACIiACIiACIiACItCACODhhOEtFIpmbLvrrrv8uFSMVYboRMW+vtjcSvfxxx/vvbjghQgSGuLY888/nxNeTDQjDDNs8pfPELVMSLnnnnuqBR09erQXztiPuPPdd9/l9uPxZaIZGxG/EOAQpRDLzNMud0CelVA0Swpm7BE4o3mGtNAV2EQfwp
iAiBce3oyMkwYruj8yRlqY9qRzlno73SnjDIE4rXAWd7xtYzZO7OCDD/ZCGOt4/5F/br/9du/RidefjbXHfsasS0oX+yvVJJxV6p3XdYuACIiACIiACIiACIiACIhAPSDQpUsXnwo8XOKMiQmWW245L8DE7S/nNjzDkmxupBsvsrZt2/ruiXiPRQfKP+CAA7z3FOnGo4+B9ldffXX/h+cX3fQYky3JTJBiP55HSbb88sv7scdsf5QTYpV57eEVVWrr2mXOuGYIc5dcckm16O18iERmdM3cbLPN/EQGjAP25ZdfVYl7L/kupNbN0cLW1fKII47IecKF57QJHsJtWde5H3S3NLv11ltt1S9bt27tRWi6wobCWY8ePaqF0485BCScKSeIgAiIgAiIgAiIgAiIgAiIgAjMNQIMWI79+EP12RItQSYORQe8t/3FLL/99lt/WNKsmwgyGDN/Jlldp3ufffbJjVuGN5jNWBmmz4QPPL6uuuoqv4sZNxHPjjzySC/UIKDRxTPOEOXMktiwn1kw56Yt0ep/kx0kpdMmNiCddGFlvDa6/TK+HRMa9OrVy/8xNhpdPOvaEDnxLiuHffHFF7lo6a6ZZOHEEoRBoJbVJCDhrCYTbREBERABERABERABERABERABEagjAohTiBkLt5gz7lj0tHSlwwp1M4wel+/3zJkz/e7QK8nChx4/Fs72hcu6TPc222zjxxbj/HSvo7th1Dp27Jgbz8zG/LIwzKhI10Sul9kxk4QzvPswujYyRll9NcZII8/MmDHDj2OWJp3MXMofM3oyhhceaIyVRrfOzz77zI0aNSpNNA0ijI3hR2LJC0nioo39Zhdl3XTtt5ZzCGhWTeUEERABERABERABERABERABERCBuUbABDHGwopW8BEAbKyqN954o2RpNE8bxgALZ1/kBAwUj9HV0GZe9Bsi/+oq3Qz2/re//c2fne53SYPqhxMd0JUyNAQRG++M60qyqVOn+l2E7dJ5ThfaMGz0/oT76nIdwQyLyzNsD9MJv3PPPdePZ4Ynoc3oedJJJ+Xub9hFleMbujGBBGwwJrjAEzH8wyONWTW///77hn6pdZJ+CWd1glknEQEREAEREAEREAEREAEREIHKJoBgY2NhhV3U8PSx8bAYf8u6STJWFTM50uWO/a+88krJADLJgIkGhxxyiLNxseiCiHcXxvhP+awu0r3yyis70odNnjTZXXPNNYlJ4ppMFNt1111z3SkRwfr27ZvjivdZkr3zzjs5LgP+OcB7ZFlY7hnjqp199tm5CQRsX9olnmKYdXNNe1w0HLNHkpe4byeccEK19OBRx2yZBx10kD+MMcwQX+mGuPvuu+eigot5WM2ePTu3vSGumOdk2LWYPIAx22koANPFd7/99vPbLd/Xh2s2b0fSwoynWf7odouVa2ZUddX0ePVPBERABERABERABERABERABESg3ASoHNM9rk+fPm7nnXd2N9xwg0PIYWbEnXbaySHOXHHFFe7rr792dJlE3EAgueyyy6pV/kuRTrqw7bvvvl5UQWhh3DPOiXDHOe+88868p0F4K3e6GZfMxupqv0J7LwjFJeqcc85xH330ke9uCFe6MV588cX+mvDaMzESLzk8jfLZVVde5Qb9a5BbYIEF/AyLX331lV+niyOGZ9vvv/+eL4rEfXSVpGskccGc9OANltWYBXPMmDF+jLJ27dr5uL788ksvyJmIZGPjcQ4mniDfWZdX7h1dgI0tQlxDNu43hsi65ppruuHDh3vR+Y477vCTBNA1tXfv3g4R8YEHHvBhBw4c6Me+8z/qwT/uEfcBEYx7ldU4XsJZVmoKLwIiIAIiIAIiIAIiIAIiIAIiUBYC5jlmy7Qnuf/++733E13JGOzfxhNjO+NM4RmF8IEQQNyIIQhcxVSILW1JIs8TTzzhBSBEPDyPbLwnBBcGVMeDq5BlTbd5hFnaCsWPcGiGkJVktg9RhLHMdtllF39NNog/50MwSzMI/qTJk7xgxqyPeIa1bNnSnxbBbOLEiZFZLOfMOmrXFabPrtGW7MOjjb8OHTp4MW6ZZZbxh9g9CsOGcdl6eB5EIcagY9IEyzOE+/nnn93//d//+T877pRTTvGzhK622mqei+W7n376yd12221evLWwdbU00S7r+cxLzMRQju/fv78bPXq0F8aYDIIx7Lp37+6YTfT444939957r7voootyp0Jgw5vTzNISxmn76nJ5wQUXeNGvW7dumU5bTtGMhDSp6ueaPL9upqQqsAiIgAiIgAiIgAiIgAiIgAiIgAgUJsBA9ghUCB+hGMKRCGp0q6NLWThuV+FYiw+B9xEzSr777rs+XcXENDfSnS+dpIfx2xAkERaKMQRF4sDrDC+vUhkeZ8T7/vvvl8STEK+6dm3buekfTC8YX6tWrbxHE55oeDbWxjbZZBPfVbQ2cZTqWERH7hHju0W7wv766685IRgPPRNaS3Vu4sFzENEurQ0ZMsStscYaXigbNmxY2sPmSrj/yddz5fQ6qQiIgAiIgAiIgAiIgAiIgAiIQKURwOsoyb777jvv2ZS0vxzbEVBqK6LMjXTnY0F68o1nlu9Y24fgUmisNwubZcmYW3ivlcroeomnXBpjYHz+GpvhNWYTaUSvDQG0U6dO0c314nfr1q1998xiE4M3arHCcNpzSjhLS0rhREAEREAEREAEREAEREAEREAEREAEREAEak2AMerwOGNMMxvcv5hIjz76aAlnxYDTMSIgAiIgAiIgAiIgAiIgAiIgAiIgAiIgAvWTAJ5iTGKw+uqrFzUZQF1elTzO6pK2ziUCIiACIiACIiACIiACIiACIiACIiACFU6A7pX1fWwzu0Xz2IqWIiACIiACIiACIiACIiACIiACIiACIiACIiAC/yMg4ex/LLQmAiIgAiIgAiIgAiIgAiIgAiIgAiIgAiIgAjkCEs5yKLQiAiIgAiIgAiIgAiIgAiIgAiIgAiIgAukJfPjhh+kDN9KQM2bMaKRXNueyJJw16turixMBERABERABERABERABERABERABESgHgalTp7p77rnHVbJ4hmgGgylTppQDcb2IU5MD1IvboESIgAiIgAiIgAiIgAiIgAiIgAiIgAg0JAIzZ870yZ09e3ZDSnbJ04poxmD/jdUknDXWO6vrEgEREAEREAEREAEREAEREAEREAERKCsBxDMT0Mp6IkU+1wjkump++umncy0ROnHDIqC80rDuVzS1un9RIvotAiIgAiIgAiIgAiIgAiIgAiJQyQTy1ZNzwlklA9K1i4AIiIAIiIAIiIAIiIAIiIAIiIAIiIAIiECUgISzKBH9FgEREAEREAEREAEREAEREAEREAEREAEREIEqAhLOlA1EQAREQAREQAREQAREQAREQAREQAREQAREIIaAhLMYKNokAiIgAiIgAiIgAiIgAiIgAiIgAiIgAiIgAhLOlAdEQAREQAREQAREQAREQAREQAREQAREQAREIIaAhLMYKNokAiIgAiIgAiIgAiIgAiIgAiIgAiIgAiIgAs1KiWCDDTZwm2yyS
eoob7jhBrfZZpu5FVdc0R9z7733urfffjv18fUhYMuWLd1PP/3kfvjhh/qQnEafhvnnn9/tu+++ueu86aab3O+//577XV9X9thjjwadz+srV6VLBERABERABJIIHH300W6++ebL7Z42bZobOXJk7jcrvXr1ct27d6+27fLLL3ffffddtW36IQIiIAIiIAIiULkESiqcde3a1XXq1Ck1zSWXXNKtvfbabokllvDHrLDCCg1GOGvatKnbf//9HWLhJZdc4l5//fXU162AxRNYZJFFPHOL4bbbbmsQomVDzefGWUsREAEREAERaGgEKJc2adIkl+wOHTrUEM623XZb16JFi1wYVhZddFEJZ9WI6IcIiIAIiIAIVDaBkgpnlYTyggsucIsttlglXbKuVQREQAREQAREQAQaLIF55pnHrbzyyrlG2gUXXLCGaNZgL04JFwEREAEREAERKBuBkgpnt956qxs9enQusbT07bXXXv73n3/+6U499dTcPlY++eQTd+2117rFF1/cb58yZUq1/fX5B4UtmQiIgAiIgAiIgAiIQMMhQE8BGxZkvfXWazgJV0pFQAREQAREQATmGoGSCmeM8xWO9dWqVatqF/bRRx9V+82P9ddfPzf20+zZs93XX3/tKMjgOo899NBD3mV+0003dYwn9uWXX7pHHnnEPf30076b5w477OCWWmop99VXX7lXXnnF3X333f648B/d5LbffntH11DGw/r000/dAw884N58880wmCO9ffv2dcsuu6xDGEPs++abb9z48eN9Ov744w+//bjjjqs2ZgZjbnHd5557rvv555/d0ksv7fbZZx/Xrl07t8ACCziOY/+rr77qEBfNSnWd22yzjevRo4eP9vaRt7uOnTr63wiSIS87byUuuSf77befv7fzzjuv++KLL9zzzz+fE3q5VwcddJBH89tvv7lzzjnH3zdjRf7bfPPN/c/333/fDRs2zK8zdsqBBx5YlYc7uIUXbu7zy9SpU/3+hjD2ml2fliIgAiIgAiJQCQS6dOmau0zKh2mslOVIOx9lkb333tutuuqqrnnz5r5L6Y8//ugmTJjgGPM3OsYaY7ExLvDCCy/smjVr5n755Rdflrnlllsc5ZLQKMP+/e9/d+3bt/flVcrfDG2x9dZbuzZt2vigVma143bZZRe3zjrr+MZsyqwff/yxGz58uD+HhdFSBERABERABCqVQEmFs2IgUmCwMc4QLyZOnOg6duzoRSfi69evn2NAeLNlllnGb0Mo6ty5s232YtV2223nhS0KEWaIGhtuuKH99EsKHQMHDnSPP/64GzFihN+20EILeeELN/7QEL523XVXX9C45pprvBBGOkMj/fyRTsSSwYMHV0szYYlniy22cIyvceaZZ/rDS3WdePZZmvof3t9xLWbGi7HnmIyhEo37Tz4K723btm1d7969vUh7xhln+AIigqmFYZKLcePG5XDtvPPOXsBlw7vvvuu3U/g8+eSTq91rRFz+GGiYeD///PNcHFoRAREQAREQARGYOwQQmmjsatlyccc4tZTXGFvX7Ndff3WIWVErdTnS4j/77LNz5V/bRjmShrrVVlvNHXPMMbbZ7bjjjg5hKzSuhfLsiSee6IYOHerLz+ynkZjyRzgpAuXNk046yR9u5Rz209jLb8qlNDCakQ4aXxHX6Bny4osv2i4tRUAEREAERKAiCVRXieohAhOjZsyYUW32RBPNPvvss2qeQbTGWaGgS+cu1UQzZr8MW/B69uyZa3nD28iOw+vt5ZdfrtaCh3cYAhUFKwoaoeGhxDaWhx12WE5IIez06dPd999/nwtO61/UE4+dtbnOXORVKyaahedkP+IR5640gytehHZv8f6bNWtWDgOCI96IFKAnTZqU277lllvm1hHIGCgYwwvxvvvu8+uHH3547l6zHZGM+DHuwyGHHOLX9U8EREAEREAERGDuEvjwww99ApgsYK211vLj1NKoiX377be577ff8Ne/cpQjiZruotZoTPkRL7O33norlwZ6WPTp08enAgFrp512+itFzpc1Zs6cmQtL+SYU1Q499NBqohnlQURDwllZKBdZ1Qo9JELRjB4clF8xwh9wwAGxx/kA+icCIiACIiACFUJgrnucFeKMIHHWWWe5Dz74wCFm4NZu9vDDD7s777zTF36GDBniN1MgQuTgw7/vfvtaUH/86aef7n8PGjTIi2D8YGZMWtTwBDNDpLvrrrt84YQx2vAgwssIkY4CSP/+/d0VV1yRE02Yttxm1SQMaWXigIsuushZ99Srr77au9ZzDoQYugqGVpvrDONh/eKLL/bdUBHouGYrGO6xxx6OSQ0qyfAqo0sDRkFwwIABXuCk2yVdbDG6BY8aNcrnpVVWWcVvw1OPllzud1ggpeCN+Ep+CQuajN/HvWbWT/hjK664om/5ldeZx6F/IiACIiACIjDXCFBO47uM0RiKZ5YZZTwaR6NWjnIk5wi7iFLOYAgSxl1bd9113WZVDcDvvfeeH36EsAhniGqUS2iMpcyJUT7daqut/Lo17tFYGHrR3X///X5oEgQwGvvWWGMNHz78h4ebGcOjMOQJ4fFiowspnmkMCcI+mQiIgAiIgAhUKoF6L5whgCFEYdExHCgQYHiI4eljLWmM7cBxFDbMGK+BrpwYY0iYtW7d2q8yUYFNR46L/Hnnnedb6CikPPXUU36cMzsm3/KOO+7wuzk3BbM999zTe3qZeMNOE7LCeGpznWE8XKeN3YY498QTT+TGi0MArDRbfvnlc5fMGHqM74HRTcOMgiZ5B8E0zDeMnzdy5Ei3+uqrW1AvsPHDPB5ZJ+9169bN//HbuoOwvtJKK6m7JiBkIiACIiACIjAXCTz77LO5hjC6LprYRJLoihgnnJWrHMlYqJQbMDzPjj32WF+WwJOMHg+IVPRiwBDRaJCjnMLwJnjRM/xG2HhnZcywZwFlE8bzxVi//vrr3aWXXup/2z8aea3sbNusrIygh3CGwUsmAiIgAiIgApVMoN4LZwhBZuY6zm8KAQgUZnhsRS0c3wGhIxQ7LKyJWNddd533bAvHt+B4Cif84a3F2GW48+czvJQYRyKfSEXao1ab6wzjMpHRtpnHG7+tAGT7KmFJdwczCoi77767/ay25H4hnj766KPub3/7m9/HOHrvvPNObswTulP897//9fsYD82MQmdSvGHB1sJrKQIiIAIiIAIiULcEmCyJ7ziNZZTVbHZ0yo9MLsVYqFErVzkSDzPGUg3LipQlKFvwxxAS9GygCyeGZzzj7YaNfmFarQxs492yLywz85tyJuHomWEWeqexzUQz22/LUEC0bVqKgAiIgAiIQCURqPfCWSgyWcGAGxRuT7phYQEBrzRaz6Jm3md4Zx199NF+HAlc6BFZwsIF3mh00cQTLZ/RNZJjMcbNwkuOAhneSzb+WFzaw21ZrzNMTyj8sZ3CoRkFxkqzsODI9RfqNjlmzBi32267+cIp9zzsphkOjst4eWbcO2afijNmcJWJgAiIgAiIgAjMfQIMt2BDc5gIRYOoeXdFU1iuciTl
hhNOOMF3tURAo5Et9PzCg4yxyih30nuBYSfMKM++8cYbXhhjrF7Myo2Ig2bR8iAebmG5lnBhoy2/bRw41kPD600mAiIgAiIgApVMoN4LZ7W5OYgbJlbhOXTllVf66HDHp6CCqMUsnhRQmPGSbn2MC3bKKad4bzYGj6WwYgO4JnkPWeELocVEM07EjIuMeYbRUmgWLbjY9lIscdPneqwQGI6jQWGr0oyJAKxFF5GUscgw7hXj21FInDJlSu4+UZilq6t1oWBMEbN77rnHVqsVLrmfzI5lHpC0WpP3yHMUbmUiIAIiIAIiIAJznwAeXCacWWr4VidZucqRjGVGLwi8y24dcaubNHmS793A7Jk21qoNI0F51QwBi/IGFjbsWbkyHNIEIY4x0MaOHevD21AV/sdf/6ZNmxb+9MNR0FUUY1wzykDwsSFAqgXWDxEQAREQARGoIAKNWjjD02ujjTbyt3PNNdd0++23n585kRmC6KK5zjrruMmTJ7tLLrnEC2TW2sdYEzfffLMf78o80ogk9FjDm8yMwgVx4XofGq2EL7zwgh+PImz5YwD5chki3vnnn+8ee+wx30rZtm3b3KkY36OxGd1nw3sRXt8NN9zgRo8e7bp06eI3I2oOHDjQPf74475rJQXW7t27+xbXI488MncoE0OYcGYb6fIadtN95pln/EQV5BkKrAhyjLnH+HjMloUxmQXpS/JGs7i1FAEREAEREAERKD+B8ePHe6/y8EwmFIXbbL1c5UiGhLChJPof3t/deOONvqwQNnBStqExz7qUkiYmNKARl0kO6L5pZl1KGaeVHhQ2ezsTCCCYMVQHQlzUzBPfJkqgfMyxNATScEz5hnI0472aABeNQ79FQAREQAREoBIINGrhjA89gpa12m1WNVMRf2a0JDJrEN35GEQfrzNsueWW8+OUWThbMtOQGYUb82Zj0FT+nn7q6dz4GYSjNTBsEbRjmVWznIZAFB1z65tvvmmUhR4r7MXxxNPsueeec7Sotq/yxMMYWJc/M7o3MCMrhVMzhK5Zs76s8jT83/hoDz74oO32SwqVzOpq44FQkKVbRWgIaRLNQiJaFwEREAEREIG5RwBhycY5IxWUAWzs0rhUlascecstt7h//vOfXpiiLMmMl1FD5MNeeukl3yOCdbzlzeOM32Y2Xi+/r732Wnfcccflun6aQEc5xxqI7TiWw4cP942K7CMeG+fVwjDjqEQzo6GlCIiACIhApRKYp5wXbmMu5DtHGMbWbclxoaARbs8Xpx2Dt9igQYN8t7rwWPYjaFx00UW5wVNvvfVW76Iejoll52AMCCYPePXVV22Tb32Lejq1WKSFn/kIQS40Cmrjxo3LbcLLCYumyQKE221b3NKuM9yHB13YYsk+uiPSbbQxWFo2XKvdnzPPPNMLaCEv4mEskBEjRnhhLcpm7NgxuU2IZOH4ZrYDIXXYsGG5PGTbmb2TmVhHjRplm6rd6yzXkItAKyIgAiIgAiIgArUmEI7jRaOilRXCiK28UK5yJF1GL7/88mqe7HZ+0oNQddNNN/lNeM6H5U82kj7CWHkT0cvKlnStpNzDGKuEo8zBsBX0RgjLHzbu7VtvveXDf/fdd/589o/yMENZ0LgoEwEREAEREIFKJ9CkauZAPx0lH9jWrVs3Wh4UKhjDbIH5F/BjSeS7ULyYmJkIwWT69OkuWpiwY+kWuVKnlfzPKVOn5ApfnItugHg8UehIOt7iqe2SSQ1sTAxaTpmJiRZGrvftt9+uMfhrbc/XkPMK95buq4zXESeSFsuGSRhWWmklxxTz5b7fxabRjmvI98+uQUsREAEREAERqEsC5ShHkn68yCiv0TuCmdGTJjHCM41Z3hk2hLHMTNyLMhgwYIAX5GbOnOnHWWXGcIzunDbWL78POuggFtWMMXJXXnll3wAbzspeLZB+iIAIiIAIiEAjJZCvntyou2qG95MCRjhoargvuk6hJangEoalVZABXaPGuWbMmOH/ovvq6jfeVOHsSnV13vp+nrT3Nut1UJBlLBSZCIiACIiACIhA4yNQjnIklGhsSzP4Pr0f8FQrZAxHYePqMlzI7bff7mcKZ9xVs+hsmradiaWYNEsmAiIgAiIgAiJQnUDFCGfVL1u/REAEREAEREAEREAERKBxEXj99dcds8JjjFnGDOJRC2cJj+7TbxEQAREQAREQgZoEyjrGWc3TaUs5CNCllHEr+GNdJgIiIAIiIAIiIAIiUHkErrrqKj+ua9xwFGxjBngmxJKJgAiIgAiIgAikJ1AxY5ylR6KQhQjk6/tb6Fjtn/sEdP/m/j1QCkRABERABESgnAQYk42xdlu1auV+/+13N/2D6SUf87ac6VfcIiACIiACIlDXBPLVk9VVs67vhs4nAiIgAiIgAiIgAiIgAmUkwJhsTAxgkwOU8VSKWgREQAREQAQaPQF11Wz0t1gXKAIiIAIiIAIiIAIiIAIiIAIiIAIiIAIiUAwBCWfFUNMxIiACIiACIiACIiACIiACIiACIiACIiACjZ6AhLNGf4t1gSIgAiIgAiIgAiIgAiIgAiIgAiIgAiIgAsUQqDbG2UMPPVRMHDqmwghcffXV7tBDD62wq248l6v713jupa5EBERABERABERABERABERABESg9gSoJ99///2xEWlWzVgs2piPQL7ZJvIdp331g4DuX/24D0qFCIiACIiACIiACIiACIiACIhA/SCQr56srpr14x4pFSIgAiIgAiIgAiIgAiIgAiIgAiIgAiIgAvWMgISzenZDlBwREAEREAEREAEREAEREAEREAEREAEREIH6QUDCWf24D0qFCIiACIiACIiACIiACIiACIiACIiACIhAPSMg4aye3RAlRwREQAREQAREQAREQAREQAREQAREQAREoH4QkHBWP+6DUiECIiACIiACIiACIiACIiACIiACIiACIlDPCEg4q2c3RMkRAREQAREQAREQAREQAREQAREQAREQARGoHwSaZUnGpptu6tZff/2ChzzxxBPuhRdeKBguDNClSxe38847u/fff9/dfvvt4a6i11u0aOGOPPJIN3HiRHfvvffGxtO2bVu39dZbu2WWWcb98MMPbtKkSe7hhx+ODZtl4+KLL+62335716ZNG9e0aVP38ccfu7Fjx/plNJ55553X7brrrm7FFVd0v/76q3vzzTfdU0895X788cdoUB8X8Xbo0ME1b97cff755+755593EyZMqFXYGgdrgwiIgAiIgAiIgAiIgAiIgAiIgAiIgAhUOIFMwtlqq63mOnXqVBDZp59+mlk4W2GFFXzcLVu2LIlw1qRJEzd48GBHfH/++WescIZYhQhFWLNVV13Vbbvttu6kk05y33//vW3OtESI22OPParF27FjR7fxxhu7Rx55xN155525+BZccEH373//280333y5bV27dvXpOuWUU9w333yT296+fXt3/PHHO4Q2M7itu+667t1333XnnHOObXZZwuYO0ooIiIAIiIAIiIAIiIAIiIAIiIAIiIAIiECOQCbhzI766aef3DPPPGM/ayx
ffvnlGtvqcsOiiy7qjjnmGC+aJZ0XT7MddtjB7/7kk0/c6NGjvRfXhhtu6BZeeGF31FFHubPOOivp8MTteJiZaIbo9eSTTzp49ezZ07Vq1cpts8023gPurbfe8nEMGDDAi2azZ892I0aM8KLY3nvv7b3JEO+OPfZYHw5x71//+pff/8svv7iXXnrJTZ482W2++ebeUw0PtD59+vg4soRNvBDtEAEREAEREAEREAEREAEREAEREAEREIEKJ1CUcIYgdNttt9VLdAhTu+22m+/SmC+Bm2yyid/922+/OTy7/vjjDy8G0j2yV69e3mMr3/FJ+zg/wtXvv//uBg0a5L3dCIun2ZVXXulFMjzSEM5WWWUV17lzZx/VySef7L799lu//sYbb7ghQ4a4JZZYwqdj2rRpPizeaRj73nnnHb8+fvx47x1HN088zxDfiDdtWB+J/omACIiACIiACIiACIiACIiACIiACIiACNQgUJRwViOWAhu23HJL300Rjyu8pWbOnOluvvlmv4w7FOEKT6pFFlnEffnll+7pp5/2wlNc2Oi23r17e+GKbpYzZsxwjJ0WZ9Y18ueff/aimYVBkOL8iF/80c0ziy222GJeNGOstOixbOvWrZv3PCNOum5idG010YzfCJNsW3rppb1X3GWXXebatWvnEPkYh81EM8JijCmHcGZiWZawc2LQfxEQAREQAREQAREQAREQAREQAREQAREQgSiBsgtndDdE1MEQkhZYYAEviJ155pnuhhtucHhMhYaXFV0dMcIzaD+/GV/toosuCoPGrn/33XdeSBo1apQ74IADYsOwkS6UCFcMsM8ScW6hhRZyCG/YrFmzaghffkeBf4xXlmR0D8XoGoott9xyfsmECFFjzDKEM7p+YnQl5S/OGBMNo7snliWsP0D/REAEREAEREAEREAEREAEREAEREAEREAEahAoSjjDq+rAAw+sERkbGPtsypQpft92223nRTMEsJEjR7pHH33UzT///H6my5VXXtntv//+jvHQ8EILDe+rs88+233xxRd+tknGIsNzrHv37u7VV18Ng9ZYHzhwYI1tcRsQq5i5ki6b/fr1c/vuu6/v3omXGZ5ddHkspW211Va+6yVxjhs3zkeNiIiFEwD4DVX/EAAx8yLzP2L+Icb16NHD7ynEJkvYmFNpkwiIgAiIgAiIgAiIgAiIgAiIgAiIgAhUFIGihDPELwbRjzO8nkw4Y8ZKDJEK0Qyja+QFF1zgrrnmGtesWTM/e+S9997r9/EPkY3ZIRHNMPYxZhezR+6yyy4FhTN/UMp/zE7J+RDLSIsZ3STx+CqVMfbYnnvu6aN78803HX+YndNEMr/xr3/WddPChPtsna6vjI1G+umaSvfXJMsSNikObRcBERABERABERABERABERABERABERCBSiLwP7Uow1X/+uuvjvG64mzixIl+M2KOeVSxAa+u0IijadOmrn379uFm99VXX7nPP/+82ja82BDO6MZZKttnn33c+uuv74WzMWPGuMcee8wttdRSPp3LLrusH4D/8MMP995ntTknEwHYLJuIgZdeemkuOiYQwGAVNUQ9jEkL4ozur8cdd5wX3/CQO++88xK7lmYJG3cubRMBERABERABERABERABERABERABERCBSiRQlHDGgP3/+c9/8vJibDIzhBsb58y22TIqhuE5FTUbE8wm86P+AABAAElEQVQG9I/uL+b3Wmut5Q9DNLvjjjv8OsLWiSee6C655BLfpRSvOsZCK9YQ57bYYgt/+EcffeTOOOOMakIc3nlMgLDooovWOIVti3ZjJeCaa67p+vfv7wW3n376yZ1++unus88+qxFH1rCxEWijCIiACIiACIiACIiACIiACIiACIiACFQogaKEszSsrKshYRGm4gQg9kW9y+aZZx42VzMTkfCsKoUhwDFOGzZ27NhqUXKOqVOnulVXXdWtt956RQtnf//733Njj73xxhtu6NCh1c7Dj6+//tpPfrD44ovX2Gfpi3bjRMxj0gO81Dj+1FNP9d00a0RQtSFL2LjjtU0EREAEREAEREAEREAEREAEREAEREAEKplA2YQzPMfoikh3TNafffbZapx33313P2MkkwOEtuSSS4Y//TozamKhGOc3FPkPEc/GNqOrKN1DQ2N2TaxYoQ5vMPNoe+SRR3IebeE5WP/ggw/8pAedO3eO7nJMnoAh4pltuummrm/fvv4n48ade+65nrHtD5dZwobHaV0EREAEREAEREAEREAEREAEREAEREAERGAOgZruXSUk8+GHH/rY6LLYokWLXMyrr766Y8bNNdZYo4bww8QDu+22Wy5smzZtHIPrY4x1VirDWwtjNs2wC2inTp38eGrss1kqEf8YD40/0pfPuDYTzZjYwLqBxh1z//33ewGvefPmbqONNsoFYaZPtiE8/t///Z/fjpjXp08fv/7ee++5s846qwY7iyBLWI5p2bKlvza6gMpEQAREQAREQAREQAREQAREQAREQAREQATmECibxxnRX3bZZX4GTSYJGDJkiJ9dkzG9zKuMmStffPHFGveC2TjXWWcd72HWoUMH3y0Rb7MHH3ywRthiN1x55ZV+PDO6RDKm2bRp07yAttxyy/nz0YX0iSee8NEzDtvBBx/s1xmEP/QCi56fbpRmzALKX9RmzpzpTjnlFD/D6H//+1+39tpr++6XNh4aacDuueeenJfdQQcd5L332M5ECddddx2r1Qyh7dBDD3VZwhIB3mk77LCDF+IOOeSQanEW84O46Or69ttv+1lRo3HsvPPO3tOO7YyV9+OPP1YLgliKZx3Xc+GFF1bbt+WWW3pe7Pv3v/8dKx4yscLRRx/t72O1g6t+MNkC58PL8KmnnvJef2EYSzszq954443hrtw6XWvhzPhyYRdc0kzaGTfvlVde8eFrE1/uhFUrXbp0cdtuu61r3bq1QxjlGj7++GP31ltv+fOFYWuzzn0jzXiJ8vxGrXv37q5Xr15+M8+jzRAbhoM9YvStt95ajS8CNPtY8tw//vjj4WG5deLnPHHGpCKkjXcHx4eTZyy88MJuwIAB/rC777478TndddddvUcnz/cLL7zgw/NMMfNteN21iS9M+4ILLugbChDVef/RHX3WrFlu+vTpfrZhxj+s78asvDbTcVJaDzzwQD/ByvDhw33eTApX37fzbPOM844tdM08LzvuuKPvck/e5J1NIwhezRdffHHqS+Xd361bNz+hzuTJk93TTz9dYxgDIuPZ2GCDDXxjB++As88+27Vr165aGmis4RvF+2j55Zf3aeA7c8stt6ROTzSg3Vu+lT/88EN0d6rfvPdpnOEbGn3nE0HctX3zzTf+We3Zs6efHIhn5aWXXop976RKRBGBVlppJd+Yx7subRlk4MCB/h3IN4p8ERoNcLwLKAeRv/he0BiWz3iH8G5j6IarrroqX9Ba7eM8lFmsDIJX/H333Rd7v2hw22abbRxe+5SXGA/WZlSvVSJSHlwoP1k0//znP31ZgLxbW8t6zcXc60Jp5PtJGZlGXnpO0JuDZyL8Fh
aKo9j9lIP5FlA2ooxUSqPBmeFRiJ/8xLfZylGlPE+p4krzTazNucoRP2Ucyjo06lM+l4mACIiACGQjkEk44yOdxZhEYPDgwY5CJAWOjh075g7HGy0syFjcbKdyyQyX/GEU3i644ILcsWlXLE5bhsdR8aZQSyUJ7y4KIWYUkK+44gr7mThbZS5AsBJ61sXNlknQ0GsNAY804FVnhVXC4O328MMPs+rNKkD8SIqXAhWWJSzh4/iwvVjjXsOT6+EDHTUK2+blt/HGG9cQfhDyOB5hKmp4Ixo/KloPPfRQNIjfT2WnkG222WZu5MiRXrywsBTeSDfnp5JGpSZqiK3sj3LjWMQWBBEr8NUmPs5LfExYgWAWGuIZgi7dmBETeT4KVfLD45PWqeRxbRjPYbR7NJ6iNtEH61HhjLy3yiqr+OOp9Ia21VZbOeuWDOMk4QxPVEtDeHy4zvPCjLWnnXZariIfPsdHHHGEO/LII2MrE9wTJi+h4G/CGeID50SQNatNfBYH92avvfaq8cxyX2FF/qcyTuW0PhrPERV2uoYXEoLgSiPJ0ksv3aCFM0RbxHeer3zPFBUb8pm9d7l/zZo1czCLvhvy3VvyMPnPjMYini1E+fD5YsxKZmg24x3AezSaBs5NwwzPr1n4XbJtaZcI6ZwbI98WI5whIOy0004+DvJIVDiLuzbePfZt9AdW/aPShzDJ7N18v8tt5IOjjjrKf1NgnVY44x3Id5pvlQlniFInnHCCY9ZuM66HRkK+M7fffrttrrGkwYH3rsVVI0AJNpBm3plhfiYvb7755r6By2ZM51R4xeOtb2HJsz169PAND8ccc0zimLYlSKaPolB+svOQf/ieJI2xa+HSLLNcc23udb608N089thjHfGbcd9oLD3nnHN8A7BtL8cS4Zs8TXmklMLZ8ccfX+2bz7eRBmWE2PPPP78cl1KrOA877DCfPsYyzvKuT3PSLN/cNPGFYfhG811D7JZwFpLRugiIgAikI5BJOLv88svTxRqEwiuGghSFTj4IGK2r0cL36NGjHX9mVEqoTOBR8/PPP9vmTEtaxZK8hoiIgiCVDioYFPzwNiFtYeWZcLR+DRs2zO2///4Ob7F8Zp5p+cJE91199dXu2muv9Xzg9M4779Tg869//St6WOLvLGGJhEo7BfhQ2EyMPMUOWkARvyhgUdGCqxmVBq7RzCoN9psl3lUYlfXQaKU30YzteGnECWfhMaNGjXKhgENFCNGElk3i2nvvvX3hDHE2ar179/bCSnh8NEyW31njoxJOoZGKJkbexHMEbxJEZSoEVhBiZtXDDz88S3Jiw1KYIv9TIUKcevTRR6uFo+BuRp6JGpUnjOc7yg2vETPYE3+cx6mFwSvQuiqzjYooE4V07drVVyIpAFKJQHiIGqIChVqerVJYMfHhFWDdzqnw0tWcigDXRR6nQsq7B48lROJQKC9FmksRB+IkAqKsJgEENp4TKk68v/k2hO+6mkfU3AJfvnM8czQyMIM043/yjkKw4Zm27x+eZhjnoMGFJe+AaBrIXyaakafGjx9fQwCvmZL4LXjQxnlNx4eO32ricfzeOVvjrg1RmXcEhhceHk0w55uBWIBwUxsvujlnzv8fISn85uQPnX8vcfH9414zKRH5BYEUEYJ8gIg/rcrzPWrws8aK6L5S/kYgJy/x7mZsVvI1jVM8/+xj7Fa20TCGNyPvY/IgjU8IUzQQsO+kk07ykxaVMm1hXGnyE169iEmWf8Lji1nPes3F3utCaaMRGtGMPES5jUY6GiL5Jh533HG+rJ31HVTonOXejzBvDWWUySk/8j7Am5c6A56FDGtSn4x3ULlM39xykVW8IiACIlB7AmUd4yxMHgUrWs/5i4pmYThbp6UfryurNNj2cixp2eZcCBIUSKKG+ENlhrSUq1CCmz2Vg9dffz0Vn2gaa/ObSjx/NiZdbeLiWLz5jCOthqHh5YWZJ0coxPgdVf/Mu+r555+3TX6JwIDhzUUBHs+vOPHGB/rrH4UwuuTZHy37iKAnn3xyLhgiX5xRicgqQsbFY9uyxvePf/wjJ5rRbYxuWVSE4fvcc8/5yjPCGiwQ10Jhys5ZzNJakhEqQ6PyhpiHtwj3FwE09J4gLAV4DPE3NIQ+hC7S+tprr/lddj/DcOE6z6XdN5Z4qCEuwIFnBaNin2RUmmySjaQwWbZniY+KFl1CMXhS4aGSj1DIe+aBBx7w3Vap+GBUoGUNiwCNOxjvMkQPxPcsE8og4iOmY3gb0XDEdwhvsdmzZ3thgkqjmYlhhCH/876OSwN5D+NZu/POO73nbHR2Zosz3xJhhPcfy2KM7q5nnnmmb5woFEfctSEIYDRc0WWfZwchnG8kltSd2+8swT8q79aIU9voYGENUzTmcV8Q0vEmNu87BKGocX8RpMptfEetQQvPUjzraLQwrz72mbiBYMn9pDFg0KBB/ltEgw7CDWWktm3b5rrzlzLdafMT94zeDNaIU4o0ZLnmYu91oXTilWnPCRND0WiI0MQwKDSqcY8QNxua2X1CSCa/8S4lD1p5lEZOmQiIgAiIgAjUBwKZPM7qQ4LnVhooEMaN+TS30lPK8yK6fPbZZyW9PgpBFKAZt2fcuHG55JoYQ4s2FQKEGAQZCn4YQgzbqPRZNzq2Uyg0keyxxx7zHhl4ZVDxjI6DRvhCRjdi/qhkxrXmI9pQSEWYodWd9NbGssYHA9hhCIgIZXGGCIMXEwIR4ZO6P8Ydm7SNiinXHXb5JawJjHgC4vnFvcJrKvT6oJseFhU9TUSiEkylDE8ZzkElAw+ZrIYASrdPxEjiwbM1NIQHPCUY34bWfxNywzBZ1rPGh6cZlUvOSyXHKsfhOcnjdAk3r8LwOkg3QiN5nfGuMLxj4XXbbbf5MRfxhKEh4oYbbvDnQXDGK5bngn1wRWRmvLeopQlLdxQTQqk0n1bl2Yd3CaJNXPqi5+A37wA8P2BBnjCvOjxC9ttvP38PyUs0SPAO4FrCblV4O5FXGDuMPIQAS/5jHC/Gk0SUpALJu4RzwZS8cP311+fEeUtX2nNa+HxLutzBGeOewGbSpEnVvKapwJMuxrLi+uB200035boP8y4k/5JHEIbNuAZEIp4tuqQhsDD2pA1dgGDUvqqrDe9EGnUwSwOcQw8p0sW5L7roIh+OPMb7jGeH7pvsY6zHuC6I3DfuDV5w5BfizmK8mzkfxvXRaBKNg/1x14aIiBgDQ0Tm0BALeNeF3dVsP94reOHyXocPnpy8H8lXWRrhePf369fPR0u+jPtGsJM0IPBxTu4B+RFRLGqw4NoRyqPvcsoVvEutMSA8Fo9ajsMTuJhGAPICkwqRfhqayG+IqKQhHEbBvJo5N/fbjOEKyI+kgeMxayzh+xwKxTT+8e3AG4e/8JtJPmKcPJ5RmPFuYkb1MA12zqRlmvzEsZzHvLJ4d5LvkiwtnyzXXMy95pkmHyGU4t0MV8pQiKwsMeuxgVAf9cbnvYpoFs2naZ/3NOcPGfLuIG9yP3l/IXLxjEW/5dwHnkm8x3gP8jzynqTMYN9EyjoY5bHQe
I8Tf/g+C/fbOudAtCWv0nhBIxTPevhe431H4zfvUL4DMOSdxzsmNL5zdIcnHA0bTCLG+4NxO8nfPIP0UjA79dRT/TOPBzCWNj8lfde473afo99c4k97HYQl3yP+k3fx/g+/MeyXiYAIiIAIZCcw54uV/biKOoLKBYWExmp4MPBXSsOriEJP+6oKnhmFdsQADE8wuuLgcYYgY8KZtT5SYAnHc6HbG4V3KtVUMMZViXEUYCjIUEEKK9t2vnxLCvL8YQzUHjUqKlSMuQYKXFRko4XC6DH5fmeNDzZcLxYnfITnouJFYZB8Wgrj3lCIp1JOodYqR3SPwqh0IZBRIGOMNTMK6SZ6UikKzbxDOJZCP16nVBCoZNDNLashKJiFFT3bdumllzrGTaFQTcXcCra2P+sya3wmEFNRzuftQwEeEQbhOjQEAwrhFJ5NOGMbeZ3xjijUY3h0wo9xn6joW56hsss9ohLBM4Vniz1PacMimlmlhSVdCnkmeP7i0hemn3XyA95T5AmeZ7raYTx3eCJZF1DSynuBP7jR7di8HslznNe6kXG8nZt3B/sYd4d3ixkVD8b7YYgA6y6c5ZwWT76lTVpDGK6PdMDehhtgHW8clnZ9eIzwvJiXkYmSpDE6sDfvT4QzKqiIQMRvxnPJnwkaYRp4VkkPxrk5zp5f7t1pVUKasbJ0IUhSYUfUM4M7zxiVYsZPxKslqyE2814nXiq1VOSilnRtiPZxM1LzPJt35rRIt0bSad8XOw/PC88IeQbhOa3xvYcT730q+FFBwuIhf5uHNDxJt913C8PSRFaEbAQ2KrTkoRkzZvjnO040Q/DiehAU6KqWtQwCW0R7yw+WX4gTMYPzm6CKmMG95poRnnhfEh4h3vKwvdO5BxiNIFGzc9kEUOznmQ6fX+Ll3UQaENjo2sm2QpYmPxEH7xo8nm+++WZ/3+LyHeGy8MlyzVnvNcN7hN8z0ka+5f7wnuR9wfeBfIPRCBc14853we5X2uc97fnDc9JoYedhybuMBiAaMWnMw0gT6bd0c4/5jvA94tr4PmMTJkzw47QiCvFOR/zi22GzvJM38xmiL+85ympWfuT9iJiNOPa3v/3Nl2csDtLBeRAaKesg+GF4hCLAcT0Y4Xi2+aP8QgMc+TZ8F7NOPsKy5Kek7xrvmqRvbtrrIC14SPL9wOw66sJz1Z9Q/0RABESgEROYpxFfmy5tLhLASwKj4oeog9HVjUIJBVtaBK21zwQZwtAyiTG2XWgISZh188OzioI+8eXr8kfhiUKp/VHpYuwrKllUEihURD0a7LxUFq0yQWWotpYlPvOu4/zRlthoOhChSiWaETctvSaymNcbBXkKwPCiMklrLkYlzCriVmhFiCTdZnS1oEDPseZxxCxgWLQrrx3DksKv3TeWDM6MWEorr40xQsU8KjpwLIV383qzijPbi7Ws8SEKYlHPgLjzR0UzwnA+PA1MUA6PgyXXxjNAdyrui405xDGMjUVlCG8F7gMFeht7MUtYhAYT1PFswwPJWq3zpY+0UmnjPlF5Ij8gDpqHARUWRDPEbmZYJG3sp0JIpQFvtqiRxxAhqQhx7eaFwfNv+YqKDV2a+U14G1+OuIo5ZzQN4W/SbOP/IX7AhusNjXzJrMGEZcm9wJMJQQTj3YjFDV1gIj3XZ+ztmhHnOF9cGhgMncoqhmBGOLZxHxB54GJdh+kKbs8jYqo1JPCs23iJzIxr4qOPNMM/jqVbNaJZkiVdW1Q04x2CyEMjAZ5PiCjh+KXsN9EMbxa8Jelmah645De+BWkMIZH8yzv1mmuuSTwEgYlKNfeVrmV2n+PeR9bFDkEJcYG8ybsV8YhnwGbUtpMhLFDxJS8TnmVW413JfScfwI7v3hlnnJF7pyBUkL/MGMKAa0EkoEsss3fy3uXc5Dm7Lr7fGKJ+1MwrzoR9vv3cC/IdjQSI/nDiPnIu+KWt0KfJT6SHPEf6456rML1Z+GS55iz3mu+EfTf5JtKln+eSa4U798feY/YcwSxqVj5iO++VtM97lvOH5yRdvIu5l4hNPCvcY/KYGcIY3xvKegi4hDVBlueAvIXZWG0czzsUL2zeYVwD34RCDYd2PtJEnsLTEZZ4M5rHHWEQfmlEI402diqNGebhZZ6CNMTx7iCcPf+URRC7afzhnWrGNZkgnyU/2fFcc/hdQ2CO++ZmuQ7ecyaaIbjbPYoTXC0dWoqACIiACKQjIOEsHSeFykiA1mgKTBRmzCvJCkrWgmjiC4UrKmuYtdaa8MY2WgetUmezD1KIx4Uesy6E/kfkH610FBzsD+8xCqoUyug2gLdO3MQARENBwypwpMvG3ImcIvXPLPFZK2actxIVG7xpon9xgkPqxEUCIgZg1upLARNDxKMiRiUIftxfG4DZRE+6UoW27bbb+p94iJhnoA32y31I8gigImz3jSXeD4wDRH7AEFdtDB6/IfKP7npUsDGEAAqptbG08SH+wAWLjvXGNipG0XvHb+NIGCrijMM3LujmzHYM70Xy7YgRI7zoS6Wba+O+UPGwPMMzZKKwiQZZws45W/z/fOmjUmSiGe8BPCbsvvOs21hPVF4Q4DDEQ8QljGeNikJo5DUq/VSGoh6KjA9G9zgqb3g0wgezd0mx5wzPX8z6XXfdlRuPiy5sVDQxE32texzXFjUTzthuIkQ0TJbf5j1IxZJKKe8ixHG4IaSRd6zybbNIknfj8l+W85YqLN8O7qMZeTwUk/B+5ZrwXkHc5dpgyDNiec+6uloccUu8VvBuJm4aOsJzRMOb6M+7DAEQ4z7HVfRNSEeg5NvFs0kFnzzLu4JKt+V5nmUEFLbTRTlOWI+mJe437yHEI/Ih31yuhUq6iQHEjyBghleVXS/77H0JP2vkIqxdK+/78J3Ftdl12rHMpGweNLyrTYRFHDARPun9b+kq1zILnyzXbAzS3GvehYhyNC4g9pGH4U0XfWt0sXxvjU2IwHh8mZH3TfxhG/c07fOe5fx2Ppa8Y62bLWUBvM0wvtn2fW7fvr3fhgBr30HEKzyoeQ+ZYEheMVGWAyy/sM7zZKIlvwsZzx5lNs7JsQwHQF6GKaId5yWP81zZ8A48e5iVMTkf7w7C0R2Z9zbfFa4zn2XJTxZPvu+ahWGZ5Tp45jDGwrXvP2nHC1smAiIgAiJQOwLqqlk7fjo6DwEEKVqlEV+o0NCKjplgRiERYYOCIOMYIaxQSaRwY6IY4W1WN8Kyn8oNRiGMLjgcj9daVLAhDAVSKoUUziiAWoGegjuzxFpFgbBxhlcJohGFQVokw3HX4sIX2pY2PqvsWSE8jBdPAf6iVuhaouHz/aaiRPckEzmsgmQVCI6lYAZ3hDwKllbxs/tLGNJv9wvmts4+KlEUVhHW4iro3DfuH0alzlryyR94acR5Y/nAf/2jMM4g0XgzcTyeN7RmF2tp4zNvPc5D146owcvyYbiPa8Wbr5BFu3WZxybbYRMahX6eHyoP3J8sYa1iEcaXZp0xYjDyI90uuY9moZcKQka/fv1sl19yDGklXHh+87aqFvivH1FmCA08H1YBK/accedK
u43rYBy+0BiLjcH+EYvJj/aMh2Fs3dLO75Cf7c+6tPsOR/P8szgQNs1ohMBriDw8dOhQ2xy75P0QsrVAPP/Rc9i+Ypd44MCNyjbeFDxXvAMQoUkrFWH+MPI56aLbJGk04dGYUrE0L2hLD1zeeOMNL2iT/xC1khpUOIbn1+Izrz2Li2NpsCEeMzsf+QLeVglH1KKST3x0r8ITmgYQ8gfjRhWaNRrxzjzt7Fw0WPE+tUYFzg0PWLSvEjNYmiHeIkIivJiAQIMEYxmSfp5l8g4iPmI531jEVkQhvru8U0k3xm/eP1yLvYfsXPy2BhQ7t4l23FfuER54Sddix5RymYVPlmvOcq8Rd2wCIson3AfuEX8mQFn+5XvHvUEUgz/fZIQX81w1NtxPa0ws9LxzTNrzW/wsmRU2NN7VVpbjGsg7/PHu4hpDw7ORPzM802jk4DnmOec5pLF1n3328dcJi6g3rx0bXYYCL/tMLIdTv8i3xr7BludIJw1zfDsQMXn+Ke/RwJPvXW1pyJKf7Jh83zULwzLLdZhHYtj4TBw0YiEK2liFbJOJgAiIgAhkIyDhLBsvhc5AgEIMBXYKUnihUEinAB2KYrREUmDmz0QG65LAqSh8WaWPgjkV8Tije02ccEY3FxuLhYLS4MGDvXiDeIHnR7TSHRc3BSK8DzieLpt4HtXG0sRnacYTjwoGnglmpNkK52xjfCIbU8vC1HbJ2B9U/qxQSeEVs3GqWKfLHPeGbqWkgQI+BWUEUDOEAu4hRpcT63Zi+1lyLJXdUChhO/kALyMzBFi63HEf6P5DFxEqovmM/IWoh8cKXldUOmpjaeJDYLMKpHVdCs9JQdwqjWxHmLVKeBguad28AW0/zwVmIqNttyVpoXKKkJAlbPR+WHxpl9x3hA2rUHBcOD6MeTHGxRcKrOy35yEalvsfPhvsj1Zyij1n9FxZfsM8ajwbpBcuXJ/dL7snYXjzMiF8oTweHpe0zr3H8nlvIHJbN1IaOuiaFDXuGYI33kJ4ZtHgETU8ZcJ3fHR/Mb/N6wpxjHPT9RCOCAjmuYSAQ3duqxAnnSeuayBed4hQPJfcJ+6NdZ+0/IPIzzauz87BvYm71/bMWRoQEnk/4qljohn7eFfgdUllne8f3yXz0Ob7aWkw70ne+2zDI4bx0rhf1qBg5yJNCGd8O/AeNCHD9sct7TyIdXzrzHh2GYMMAWzrrbf231jiR/zjWwgb8i/bEAHGjx/vxwS1xgOr8MPLPBot7nBJPPmuJQxbqvUsfLJcc9p7bdfBM0WejGskszC25H7gbcazSfr5owGSMgleihjvwzTPu8WZ5fx2TFhGs20mnJFX+d5gcd60Fp4lz4SV+yir0RiHIb5TjkAw410Jm0Jdb7lH4bNFPPad5RlJyn/m+cukO5RFeKeRfvI8f9wbypZ4RPO8JlmW/GRxJH3XbL8ts1yHfU+iLIiLvCLhzKhqKQIiIALZCUg4y85MR6QkgGBBoQPxxQot0QIXBW1EMwooiGuYjfHAOhUhCjEUiqyiyXYzCj0UKtpXCTsso5VoC8eSygzCGy3nhGW8C9ITTVN4DOu04NICT8GeQqFVMqLh0v5OEx/saP3EqNCNC7rsUbi0Aib7EYNKLZyRRisI07JNoZAKJV19zBDwGOeHSr5NGR8Vdax7LpUJ4osaeYMKMN58eIflMyqKdL2igsxxVAq5l4WMSgV8KFDSZdO6CxU6Lml/mvgYJ4UuNFbpDuOybqq2jbRlEc6sUmrHU6GAB14qcWaVfO5plrBxcaXZRuWb7jh4sCCsIgbYM23PJ3mJZyrJosJL9JqTjovbXuw54+KqzTbyOX8Y+cMqTXEVZnsXwqkUZh5gVkmMi5Pu4ZZXeOfwFzXG5MK4v3hkWAU9DGe8w23FrFOJbV/1XrdB6S0ORCPYwYjGD4Qz8pqN60PlkLTxjuQ9isgTikt8R8KGB+LlvWbXwvcGb5eoUeFkO90e8UjB7H5Gw0a38+4jDXH5GJES4QwRIfTgo9Ehatwf0sBzzPuQ6+Q9E5qJFTQSmeBGJdoETd7b1i3ajuP8GHFGDc8Vvs/mfcx+7jHfUjgibPDe5xtNd3rMxuW0tHD+qGeeD/jXP56HfNcShi3VehY+nDPtNae918TJ+5FvKIbgD0eEVISanj17+nen3/nXPxgzmD1/3FuEcJ5tu3+8LwiT5nknyqznt7TEvUfsG8a9tncAeTrOeD5IJ+UG1nkuwjINx5AfrAxC+RFPzqwGD55nnjE82eIMD00zJgHiOeVdQkMdjHnm6JZM2YH9SZY1PxFP3PsgLv4s10GcpNnEtjC+pPsRhtG6CIiACIhAMgEJZ8lstKeWBCg88ccH3ISVqCs9BXUKUFT8rXJjswhyelq5MbxfKJhEjbipBFD4wuuMmbTyGS2GtNoyUDbH0GrOgMWFjEGmKbzRfcLEoELH5NtfKD5aW40dFSXECBsnJBqvFVij22v7m4oWBUbGhcMoyIZGYY4KEvcOgRML7y8VLWv9pDU5FN0sHlrJOQd/FPbivDcsLEsqybS2U5HjGPJVoe6z5C/yCBVoCvxxhf7wHIXW08RHNz08A7h+xi/j+pMsWolPCpe0HW8ZBDqYRA1vQPI5hkCcJWw0rrS/EQZ5rukuhCchAjUD91MxtO613GvG7LEKFnGTTionFPzJe6WyuXFOKmsIUSYgcC3mSUT+QcQ0cRBPJt5jIQs8qTAT1/yPWvzj3iPQmndHGBXvPyrQVNiNVbifdctbvIe4jzyn3OeoCBw9rja/6YpJnqBLfVTQgS9mIri9k6kgRwV4eweR57Ck2Snxho16KxIerynuD/kSYYMKOPfPKrO8h8Iu7Ah+di6Ox/h+UQmPelKyz7YRhnd8KJ6xH+Ma6IJF3uHZ4DnGkhoOENFNNENkwYPYzO4lv60ijXjANZqHmIVlaWM/2bsZQYfGIwREJmIIvws2lqKNY8o14fHGO9c8Ay1uOPWr6j6HkIn3ctK1WPhSLrPyyXrNae4119OrVy9/WYi9dJk0xmw08dbe3wi7iGzcK8arCrv52UyzCJBYmuede2QeVGnO7yP+6x+NdeF9pwxi5TfKLvZc0uAWfbfBkkZIRDEmc8F4nvkOWnr+Ok21/GnbsixJh33vo/mPhgHKLfbOo0GO7yhjaNq7DfZ4XPIdJR8nWdb8lBRP0vYs14F3LuVU3kvRHhU2dm7SebRdBERABEQgP4F58u8uzV4KtYyFwTgZdLEKC25ZzsBYDsSBQJLP+GDQGprVCwfPGT6eFKwZF4nWUyp+SVbsdVFJYVyH6B/npYJJgckKq3Zu0mbh6aKWTyzhWAtLwXRumrUiUkjGwvGv+E1BicohFQ0KOLSU4lGAcR1W0WOw3DijsmkeY1Z5igsXbkPAsXRwDrokpjEEt7Bgm+aYfGEKxcdgthhsyBtUVkLvFPIEBWnEhnKYCVJ272xw4vBc1j3WwoSiZzg2XZxoRjw2wDDXaJWIMP64dbhRacUQpyicFzLyYVh5LBS+0P5C8eGlgciA4Q1A4dvG+GMblQQ8segWFVeY5d1FfrbxSjg
myaz7LPfAxrYhLJViukpiPCc8V1nCcpxxjo6jkyZ9VISp6HN/bOYxuCFOUCFBsLFKIefqV/WuggmVb3sHsL22NjfOSZrDyTpgwPVhJgpS8TRhzbxO2M83x7r4lkqYwrMXoxJrnln227ov042QGQnj/riPGCIW+6MVXL+zxP/Mw5jvYSgu44nLuw/Dowyz72FU+OLdaCJWofcE3UDjrt0aAxCB2G+eL/zGeAeZAMXvuPcxA5bDkHTvtNNOBPPGM2uDpCPIIRDGpeGmm27y4Xke2W+Cw5xYav43EYM95n3EOumku7uZpZtnBMODz8bW4jf5xSbFQRDBOIbv8mabbeb3+41V//AaRuDjHli+te8s8UTLbTQoMEYp5bUwjRZfOZdZ+WS55rT3muuz9yrPU1i24D1uwqeJxHiXIbLyHbExR4kDfjzD5C+ETCzN8849z3J+H/Ff/5jJnHtqRrmedzmiHu8Rng3W2UbDSWg2CzrPN+Id6SYcM0qGhpBtz6w9g+H+NOtWZuF7FQpfpJ10URa3MiYNcjT28TybkTYEPiyuq6iVe7LmJ4s/bhn3zc1yHSbi814Jyw/wtPdk3Hm1TQREQAREoDCBsnqc0QqDABV2JaDlhsIZHhnMhpbW+LDicUQlk4+ZVbijx/Mho5JGYZkCYHRg0mh4+41nEwPhch4zPqLmIs7gsGa1vS4KPnGtyhY/S0QSKgVWQKabSHgMrXZMER9njD1jYamEWaE7Lmy5tyG+0BUTowBCy2bUqDBY90crwBPGhBcKlFYAjx7Lbzgw6yKFAgqUDKBbyPBMo8WRgj6CDa2RcWNChPHgZcA4GIirpbBC8THYO93Z9txzT5+fqUDyR4GU/G0FatJCZSVf17di0kthjQq9PRPRwWaJk/ti3mbcX/OaIX1WUI22eoZpoTJGnqDgSYW40GDYHEsli3cHrdZUaBCHol2PwnPY+o033ujzYljgt33FLAvFx0xjvLPoUkZjAR5vvLvovoRIbFw5N95HDBRuRkWE/IzYh8dIPiO/4y3EM9+3b1/H4Oe0UPOuJQ4K4ubxliUs5zQBC2+Hyy67zKdx3LhxvgGkUPoQhZicgIoSFWQEMSpADADdr0pEIk66viCq8p6yygeVbvNYyHfdafdRKa3rc5I2vj/kS4T99lVdDuHF/TdBnDCINby7eEcyMDzMyCs8P9zTaDdFjinGqMjSXRZhkvcX3w/yBR4W5EMqhTbrZzHxl+MYPD9oAEJsYgB98gkNB1bRpYJu33dESLjxTqfRDI4wtzxF+vCMLaUhDHPP+DZznxFEETbi3i98W/CGY/wkumAyrhTCAWUM7jVeIox9WCrDI41vAsID7xK+q+Q93gkmRnAuWMKVdxlpYx/vLER/8oSlj7xis3Hy/PNtJiyNGBxPA5Tx5ZnmG4Wxj+smf9PNl8o894Zn356HuvQ084mq+peVT5ZrznKveSZ5T5CHzCubZ9IEGdJr+Yl3Ivmc8Ig+1m2adyfPMI2LvDOwtM87Zau05/cR//WP7xeNPuR50mpCNmUQE9VHVA2rQPkBcYp8QgMpzyTlFvIi5VK8OPnmU2Ylj5AXyHt8M627Oo1xlJWKMeoJlE94D9DgDD/OzXPKc8e32GZp5h5TFqShiwYC0sHzYfzDRkG4cTwiNvebb3uW5y3ftcR9c7NcBzOL8p5HeGUGZe4R12DvzXzn1j4REAEREIH8BMrqcYb3FKIZHxnGt6BwZl0MEKr4iKYxCmV8AOI8M8LjqThSaOaDlsUoeJhoRuGEKZxpNeSDSIGEFldrlSfeUl0XXJhl0f4o+CCW8QHG8DgJPUj8xr/+UfBOMoTJ+mJUlCmoYFaoi6bNvGDYTpcNM+v2wXEWh+0LlzCzVjpm7rKCG2GSjmO7zbDIPbbWTgtvy/A8rCOwUfjKZ3HnLzY+Zq+iIkMXHouDCoeJZohICFqkP+SYL31p91GoNQEDcSuuxZVCmbG3rmfET76153DUqFF5T2mCHM857wu7zpBjNAK6ulpXFVqTqRDacbaMHsP2UGALw8Wdy/bbMkt8hIUXFX+u3wRF8hrCO0uef96HCGz8xYnK4bltPS6tHA9HwlBgpvBPxZZCOIV7834jXVnCUpm3tJNuvBtCszSxzdZtybb77rsvNzYhAhFGBYQxoqhcEydppWLDcYi1oYBgcdnSR1DEvyzntOjjONu+cBmmzda5Nt5bVFjwHuOZhSOTXSAcmPHuwkuEvIDwwDeM5wYhJdrl0OKOS5ftI95w3c7DEuGT9xf7+eZRYSYfkgf5vvK8J5mdMynupOPitltc4T6LN9zHO48KLO84mCCMUfkjLO86E4OJh+favtdUinkn4E2DB5cJgubFF543zbqlLRqW55XupDxj3F/yMfeb96AJR+H1ICjxrSc+xAbSgycdzyaDoCedh/Pm2xdNl/0m/5AO2NGIYeejEm7d02zsUcocDF+AyEueQLwhvOXF0047LfceIE4EQ67f7gt5lzyMEBxtuCIdNJTadZMWePHdoJwVvpss7VmWIePocbYvjl8WPlmvOe29pqxp4i/lWwQP8gbCF4wx3o0m3rCN8gf3iG+lNUojTtI4EFqa5z3r+S1+3iPce95XiGbce8r3YQMnjT40/PJe4duOCEu5BbGK5xpRFSPPIPYQB9dKWRvRjN98CxDY8pnd46QwfIMR2THeeYi2pJ38y3uD82DkRa6LBkCEQdILd/aTPnuPENYarYkHry6e4yz5yfKjLYnTLOmbm/Y6iIf3OfmEdHEdvDe5XnvW4s5r59dSBERABEQgmUCTqhZAr2pQeA7depMPSbeHjykFWz7wfDypIJjxQafCxDbzqLJ90SWiFS7GfKDMKPRRGTRjHy1w0cGMKSCGHzsLH13SKoYQxQeSLqXhRwXvACqgeEUNHTrUFxJqe1101aDVmcJY2G3C0kWhkhY6Pt58/BADEQbopmpGGukSQhyhUeCgMG9GIcC6Sdm22i5LnVdqm55KOZ5CEAUgnlPuKxXLQoXGODa6f3FUyr+NQjiVerwcuXcmSpb6zOZBQDdaqxQknSNtWPIc7xy8BsL3Y1K8abdTUaLyRTck3nX5xJu0cRYKV5fn5DuHtx0Cs3kSJKWPe0GlGQ6IQOU0OxfiXrHeHOVMXzRuxt7iG8i7iwp3Uh5EfCAc3lz5wkXjr+1vE4DxEk7T7ZDKO++DcJyp2qYh7ngTWBBKKTelyVeUP3gmMTzV8l0P7wXeaeTtaVXew4W+R4j6iHIIdIhDSfcx7lrKsa0YPlmvOc29xpMSYRgeiDzWGJV0zYjChCcc9zXfPSKOQs971vMTJ+URRC4EGd5Z+Yz3WvuqhnLGCOQZTjLyB3mVRjHyRymN9PJuQMQjr4YNGNHzcI9JL+/HpHA8J5S3+XZZ41Ix+Sl6bvud9M3Nch2EpbELwSzpOux8WoqACIiACMwhkK+eXLaumox3wUeEwmsompEkhDNayihkmjEeDK2WeF2FLtEWDx8mPrpUQqLGh8FEMwoRiHbmOh4Ni2BFqx7hhg8f7nfjJk8F08ZbCI9hGy7kfPgxS0/a6w
rjSruOGEbhifNGB+ulYEVFh8of3RujXdtweceolFNAkDUeAlRKKEyWukDZeAjV7yvhHRa+88qVWioxhSoydu60YfNVdiyuYpZUuoodv6aY83FMXZ6TCm10YPukdKe9F0nHZ9lel+fKkq6ksHjg8VfIqBzOjQoi99lmji2URvZTfqgLo7xApdk8TdKck/KHjZNUKDzvhSzvBgQR8xQuFHdd7C+GT9ZrTnOv8U5OyxwulAHxMktrhZ73rOfnvJRH0qYZwTaNaFvO/EF6w6FA8rFLc495TqLPVTH5KSkdSc9VlusgrHk0Jp1H20VABERABNITKJtwRssRRqUBEYuBNxnbC/ELYSxagUQQs5bOUDijgID7NN2dwkE7w0ukmwGFZcZXoBJGV6QkY7wFWjxpITbDuyvJaE3FbLairNeVFG+h7aQRi3qUsY1rZIBnvOSiwhlTaFsYxrGRiYAIiIAIiIAIiIAIiIAIiIAIiIAIiIAIFEegbMIZHlEYrszWtZHfeFHhFcUg6//P3p3A61aNfwDfpYz5E5H4y5UUmRVFyZUxNFCmTJmHkGTInzQgiVJEhso8JskQUZo0UsZEQplyK/OQ2f9+F89r3X33+553n3Pee84993k+n3P2fvdeew2/NT6/9ay16pMSmUSzOGMJVssee+xR/+y8ZxXmhKZxBGHHLH2cmSebbIsTsXEo6Zuu8lHPfzYzjf3cumYrkWWIM6bcyMYg11jxWQbGomKc2b2e0UrniUAikAgkAolAIpAIJAKJQCKQCCQCiUAikAisUghMjDhDTpGtttqqLIO0XNO+TKyk7O3xuMc9riyNjGVnK+pkJRuR+ptKnM7oNEPCHD1M0vuma1Q4SK96s3L7EXgW+7lZPnrUUUct5wXrOvsqWGJaL9d86EMfWtzG0dXLfZgPEoFEIBFIBBKBRCARSAQSgUQgEUgEEoFEIBFIBMZGYPWxXfZ0iAQi1vw7qtmJO06tYUGGELL/2ROe8ISevq4Y5078dNiAOLLcckJSyGynC4kYf0g5pBnM7EkBt9h0NMKP61e/+tVyi5gMcZw8qU+li3d5TQQSgUQgEUgEEoFEIBFIBBKBRCARSAQSgUQgEeiHwMQszmyWu+aaa5ZN7utlgzartCzT0e1OCJxvsssuuxQrLvFy8tP++++/zAlHs5kuJyLVpBxC0Waj45xyZrnmAx7wgHJ4ACs1yzSdwuk0sUmd1jff8irjkwgkAolAIpAIJAKJQCKQCCQCiUAikAgkAonAJBGYGHFmWab9wP72t78tF3+nASHOEGvzSZ7xjGc0W2yxRYnSt771rebQQw9dLnqzmS5EWSwBXS6gKR4gxyzZtBcaAs0x26R9gukU3uTrRCARSAQSgUQgEUgEEoFEIBFIBBKBRCARSAQSgSEITGypZuxdFqdS1uHHs3BTv5ur++c+97kD0uzEE0/sJM3ELeIcaajjG8/CTf1uEvfnnntu8fZe97pXs8kmm5QlnuKekggkAolAIpAIJAKJQCKQCCQCiUAikAgkAolAIjBzBCZGnNnPzF5da6+9drP99tsPYrrOOusMCCpWXSF3vvOdGwSQkyInKbe61a1KOBtttNEgGGFvuumm5fdxxx1X9mIbvGzd9E1X6/NZ/Rkk2XrrrVcOFbjiiiuaP/3pT7MaRnqWCCQCiUAikAgkAolAIpAIJAKJQCKQCCQCicCqisDElmra18ySTBvW77DDDuU0TftvbbjhhmUDfCRPvYn9s5/97EL+nHnmmc3RRx89sfx4/OMf39zmNrcpm/6/7GUvK+E85SlPGYS34447Nv7acvnllzd77713+a5Putr+zOZve6HBGRlJTj/99Nn0Pv1KBBKBRCARSAQSgUQgEUgEEoFEIBFIBBKBRGCVRmBiFmdQtfH9SSedVCzPkDsbb7xx41RKG+Dvs88+5XkbfVZqwyTexXWYu3je5a7r2fWvf/34pJyk6TTN9p8N+EOmk6741tUBCX2lK978iD3NvD/55JP7epvuE4FEIBFIBBKBRCARSAQSgUQgEUgEEoFEIBFIBIYgsNrSzfALU7VkyZKJLpO0RHKttdZqLrroomkRR0PiP+ePF2q6RgE76bIyKux8N3MEMv9mjmH6kAgkAolAIpAIJAKJQCKQCCQCiUAisHAQGKUnT2ypZhu+yy67rP1oQfxeqOlaEJmTiUgEEoFEIBFIBBKBRCARSAQSgUQgEUgEEoFEYAYITHSp5gzilZ8mAolAIpAIJAKJQCKQCCQCiUAikAgkAolAIpAIJAJzikASZ3MKfwaeCCQCiUAikAgkAolAIpAIJAKJQCKQCCQCiUAiMF8RSOJsvuZMxisRSAQSgUQgEUgEEoFEIBFIBBKBRCARSAQSgURgThFI4mxO4c/AE4FEIBFIBBKBRCARSAQSgUQgEUgEEoFEIBFIBOYrAkmczdecyXglAolAIpAIJAKJQCKQCCQCiUAikAgkAolAIpAIzCkCSZzNKfwZeCKQCCQCiUAikAgkAolAIpAIJAKJQCKQCCQCicB8RSCJs/maMxmvRCARSAQSgUQgEUgEEoFEIBFIBBKBRCARSAQSgTlFYI069CVLltQ/8z4RGIpAlpWh0KwULzL/VopsykgmAolAIpAIJAKJQCKQCCQCiUAikAjMMQLLEGcbbrjhHEcng18ZELjkkkuaLCsrQ051xzHzrxuXfJoIJAKJQCKQCCQCiUAikAgkAolAIrBqIkBPHia5VHMYMvk8EUgEEoFEIBFIBBKBRCARSAQSgUQgEUgEEoFEYJVGIImzVTr7M/GJQCKQCCQCiUAikAgkAolAIpAIJAKJQCKQCCQCwxBI4mwYMvk8EUgEEoFEIBFIBBKBRCARSAQSgUQgEUgEEoFEYJVGIImzVTr7M/GJQCKQCCQCiUAikAgkAolAIpAIJAKJQCKQCCQCwxBI4mwYMvk8EUgEEoFEIBFIBBKBRCARSAQSgUQgEUgEEoFEYJVGIImzVTr7M/GJQCKQCCQCiUAikAgkAolAIpAIJAKJQCKQCCQCwxBYY9iLruf3vve9m80226zr1TLPvvzlLzcXXHDBMs+m+nHb29622XbbbZsf//jHzSc/+cmpnI/1fq211mqe+cxnNt/97nebE044ofObm9/85s3ixYubddddt7n66qubiy++uPnSl77U6bbPwxvc4AbNgx70oOLvNa5xjeYXv/hFc9ppp5Vr258111yzeehDH9rc6la3av7+9783F110UXP22Wc3f/7zn9tOG3498IEPLG6vd73rNVdddVVz/vnnNxdeeOGM3C73cT5IBBKBRCARSAQSgUQgEUgEEoFEIBFIBBKBRGAVR6AXcXb729++2WCDDaaE7Morr+xNnCGN+H3DG95wVoiz1VZbrXnxi19c/PvXv/7VSZwhq5BQ3Ibc7na3a+5///s3BxxwQPPHP/4xHve6IuJ22GGHZfy99a1v3WyxxRbNKaec0nzqU58a+Hfta1+72X///ZtrXvOag2cbbbRRideBBx7Y/O53vxs8v+Utb9nsvvvuzRpr/Dfb1l9//ebud797c
+mllzaHHnrotNwOPsqbRCARSAQSgUQgEUgEEoFEIBFIBBKBRCARSAQSgQEC/2VgBo+mvvnLX/7SnHPOOUMdfuMb3xj6bkW8+J//+Z9mt912K6TZsPBYmrEII1dccUVz8sknN4sWLWruec97Niy5nvWsZzWHHHLIsM+HPr/ZzW42IM2QXmeddVYDr6222qq58Y1v3GyzzTbFAo5lG3na055WSLM//elPzbHHHltIsUc+8pHNda973WaPPfZo9ttvv+IOuSdNSLO//vWvzde//vXmkksuabbccstifSbuO+20U/Gjj9vief5LBBKBRCARSAQSgUQgEUgEEoFEIBFIBBKBRCARWA6BaRFnCKHjjjtuOc/mwwPE1MMf/vBm9dVHb9/G+ov84x//aFh2/fOf/2zOPffcsjzyfve7X8O6azoifMQV//bZZ5+GtRthaXbQQQcVkoz/iDPWbZaokte97nXN73//+3JvqSbCbO211y7x+MlPftJsvPHGDes0csQRRzQ/+tGPyv15551XCDYWeyzPkG993BZP8l8ikAgkAolAIpAIJAKJQCKQCCQCiUAikAgkAonAcghMizhbzpcpHtz3vvctyxRvdKMbNX/729/KPl/HHHNMs2TJks4vEUsstOxR9pvf/KZYtyGexpHtttuuEFeWWf785z8fEFPtb2NpJGswJFcIQkr4yC9/QXzF+6murN34hxhrf/v973+/ucMd7tDAgWy++eblamlrkGYeICY9u+lNb1qs4o466qjmFre4RSH57MMWpFn5eOm/M888s1idXec61ymP+rgNP/KaCCQCiUAikAgkAolAIpAIJAKJQCKQCCQCiUAisCwCEyfOLDdkDUUQSde61rWaDTfcsNlrr72aD3/4ww2LqVqQSvYHI9zbtN9v+6u97W1vq5123v/hD38oRNKJJ57YPO5xj+t046HN91mdWRLpaumpe9Zq5Fe/+tVyxFd5McW/t7/97UNdWB5KLA0lCC7iQIS22LMMcWbpJ7GU1F+X2BONxJ5sfdx2+ZfPEoFEIBFIBBKBRCARSAQSgUQgEUgEEoFEIBFIBJpmWsSZEyN32WWXTvwQUD/84Q/Luwc84AGFNEOAWdp5+umnF+LsGc94RiHPHvOYx5S9uuzZVQvy601velPzy1/+snnYwx5WNspHDt3pTndqvvWtb9VOl7vfe++9l3vW9eCyyy4r5Nm97nWv5rGPfWyz8847lxMrWZlZvmnJ42wKqztLLwkLMRJLL3/729+W3/U/GJCwIqvf1ffIuE033bQ8mgqbPm7rMPI+EUgEEoFEIBFIBBKBRCARSAQSgUQgEUgEEoFVEYFpEWeWOdpEv0tsch/EmRMrCYsqpBmxNPLwww9vDj744EJUcfPZz362vPMPyRakmd/eIc1YrTkFcypyyDfjio32hYcsq0+qZBHG4mu25G53u1uz4447Fu+++93vlsMB/Igww1KsDi+Wbl7jGteoHy9z77CBF73oRYOlqZa/DpM+bof5kc8TgUQgEUgEEoFEIBFIBBKBRCARSAQSgUQgEViVEJgWcfb3v/+97OHVBdT3vve98hgZZVkmQU6x6qqFH0ih9ib8rK9YmtViOSfiLCy26nfTvXdy5T3ucY8St1NPPbUQe+uss05Z3rneeuuVzfktJxXPmcjixYsHp2xK15FHHjnwjmUbgVVbglSr91+r3cDj+c9/fiHfxPHNb35zSUvtJu77uI1v8poIJAKJQCKQCCQCiUAikAgkAolAIpAIJAKJwKqOwLSIs1//+tfNO9/5zpHY2ZssZNGiRY2/LmmTYbFEsXYbe4LFhv71u+ne3/Wudy2fIs2OP/74cm9fs9e+9rXNAQccUE6/ZFV31llnTTeIBjm39dZbl+8vv/zyYmVXE3Gs865//es3DhRoSzxzmEJb7nznOzdPecpTCuHGgu8Nb3hDc9VVV7Wdld993HZ6kA8TgUQgEUgEEoFEIBFIBBKBRCARSAQSgUQgEVhFEZgWcTYOVrHUkFvEVHsfs/CjbV3WtTQRuUS6SKTwp88VARfEFOKsFsSWpaa3u93tmrvf/e7TJs6e+MQnDvYeu+iii5p3vOMddTDl3umZCEZ7xrUlnrWJRGSeQw9YqbHOO+iggwaHArT96OO2/W3+TgQSgUQgEUgEEoFEIBFIBBKBRCARSAQSgURgVUdgYsSZfbssM1x99dULsdM+PdPplU6M/PrXv75MHjhVsy2bc84/zAAAQABJREFUbLJJedQmkdruxv2NxIu9zdZff/3l9k2LDflr67Bx/eaONdhd7nKX8skpp5wysGhr+/GTn/ykue1tb1v+2u+cPEp+8IMfDF7d+973bh796EeX3/aNO+yww8pBBgMH1U0ft9VneZsIJAKJQCKQCCQCiUAikAgkAolAIpAIJAKJQCLwHwRWnyQSP//5z4v3O+20U7PWWmsNgkKEOXHzjne843LEj33RkGohyDWWX+Tcc8+NxzO+svYij3rUo8qyzPBwgw02aJBpJA4iYAW32Wablb/Yty3ct6/SFqTZCSecMJQ0892JJ55YCLzrXve6zeabbz7wykmfniEeTzrppPLcbzgSBxcccsghy2FXXi7918etbyyXlT7LOlMSgUQgEUgEEoFEIBFIBBKBRCARSAQSgUQgEUgE/o3AxCzOeH/UUUc1r3rVq8ohAfvtt185XdOySyc8EgTQ1772tXJf/0Oq2YPMck97o1mWyNrsC1/4Qu1sRvfvfve7mxe+8IVlyaY9zVhwrbnmms3//u//lvAsIT3zzDNLGIilJzzhCeXeJvxxamhXBHbZZZfB42233bbx15YlS5Y0Bx54YDlh9Jvf/GYh2iy/vM997lOcigP5zGc+UzBwz99YxmqzfyePtgXRtueee/Zyyw9E3YMe9KBC1Dmlc7bkec97XrE4fO9731uWlfL3yU9+clma+o1vfKM57bTTOoNC4DlUQfn41Kc+VdzAxJ5xU0n9Tbh10MKDH/zg5ta3vnVz05vetByooDw5yMKprX/+85/DaV6XIsDqU3l3eMVb3/rWTkwQrawaCYKZZeV8lyh7H/jABxr7GbLqdFJvl0i7PQjVVScCz5a1a1dY7Wfax/YS9rab6f6epN/TjdOK+G6S6b72ta9d2uY4Hdnki8mfn/70p80nPvGJFZG8ZcLQ3j33uc8tWxscccQRy7xbmX6YAHr605/eXH311c273vWusaIuL/7v//6v1NthbRc/jSlG+bnrrrs2t7/97ctWCOqiybtb3OIWnXHQlsRYQT+u/zr00ENLHDo/GOOh/t5hRR/72MeaX/ziF2N8kU5GIRD91crSV41Ky2y9m2SbOFtxTH8SgUQgEUgEEoH5hEAv4szyRhLXqRLiEIHXv/71zbOf/ezmhje8YSEu4hvWaPXANfz0HLlm0OiP/OxnP2ve8pa3xKdjX8PPuNYfIlje/va3N0960pOKhRZlJ8SeZO95z3vi59jp9cH1rne9wXddp2V6WR9ygMCj0N/tbncrpF18bID3pS99KX4uc/roMH8tiyX1SaVTueW+Cx/PZyqx
3NR+cvZjI3e6050KcYXEQlx1KQW+kx/2eQvizF5wdR4Ni1v9DTeWwlKU2paClDIk2lZbbdW87W1va77//e8P83KVe66ujsKadSQFkSCXjj766JUCoyh7iEHKrnoyKp2RqG222aa0P5dddlk8mshVXJRVJL62aTaFxS8yRf14xSteMZtez3u/tK8mYkwKzHZbd9/73rfZfvvtm2OPPXawHyaLZXlJMZ0r4kz4s53WFZ3R2ui+6Xja055WJsOG4a4ssHQftudqpJEbfWcQ2Ky967493LlqA4M4M4GgfXzWs57V7L///rWzXves1vVZ+qiuPrKXZ+m47FmrLJGVYZJnklkGh0n1M5OMd/qdCCQCiUAikAjMNQK9iLPpKMgGffvuu28hi25zm9uUwTzF0GCzlpNPPrnxF0K5NcN78cUXF8useN7n+uEPf7jxN0y++93vltlpRN2ipZZtLAYox6xNajF4/uhHP9o85jGPmXIWeY899qg/HeueRdb73//+Bj6s3pB6bXz22WefsfziqI9b7j/3uc+V5akxsPRs0kIpec5zntM7rpRBFkDDBMka4oAHpC1hjXfOOec0F154YSmLZqCRakhM8bD0lYVIymgEatKMFRZrzXZZHe3D/HurbCA+aqG0I0Ao0Cx4XvCCFxRrztrNbN/f7373KyT+bPvLv4033ri5+c1vvtLn1XSwMSkxKWEdHZbAEYb+wkRBtieByIq52iZBmw7/9t6pJpUe//jHlwmqqWKjnqjzdf5pD4i+uS2sUkOMd4xZNtpoo+aBD3xg88UvfjFe5TURmBcITLKfmRcJzEgkAolAIpAIJAITQqAXcTaTOJjhZck1rrAI8bcixJLQ2M+sKzwzzZbeSEMsx+lyN5NnlPe5snqibFCsJ21R08aH9csjHvGI5rjjjmu/Gvl7XPdONiXyzNLY+qRXS4TlK5IReWbZ7mte85rmN7/5zciwV+WXNWmGGHjd6163IJa5qnthMdLOXxZFyihyBLE8apl2+9v8vWoioC8Z1Z+smqhMPtWxB+gnP/nJZQLTv7FEs4xzHIkl6LYTICxTTfSYULMEcyr5yEc+UraoSOJsKqTyfSKQCCQCiUAikAgkAisPAiuMOFt5IOmO6V/+8peyZ1v325X7KYXiyiuvXKHpQ2Kx9Nt6662bs88+e9aXo9izLZbWWBJck2aRWwg1itBLXvKSYmGAHLX/VcryCGyxxRbNYx/72PICoY00+9vf/racQ5YdDtygrFqiC2NWGiw/6+VRz3jGM8qyQUuq7KdmWRIL0I9//OONvfGuuOKKYoG64447FksploYsOz74wQ8Olk9F4MKxdJSliLKM1GNx4nCOmQrrRnGgOLNoqYmzccOFCT9Yr1kyqS1hFWN/vSDLLSFjpUIQdMokAiDeK8/2TrLfIqtUWDpx90Mf+tDAIlf8HvawhzVf/epXi7Wc31dddVVj+Xv47cRgfiOO4+CRcdMxLM/EwUnC4nbGGWcUy85huMuf5z//+aW+s+hhsQUP1qCxh6Wyoy5aqg87WLHcCSIj/IbTDjvsUNyxEJLv8se+UMhQS76RniEvfvGLSzsXy/At/5YvlsNpi5QxxLl8sfdkiH0q5Z3n9sS7yU1uUiYZEP/RxjzkIQ9pttxyy+bggw8u4YqX9Cn3iPndd9+9V5mWdkuE5aG6oewjZJRz9e7www+P6I19tZelpemEhWWU5XHyf1S+sSDfbrvtSpmC3+LFiwtG9o4UhjrbPqF63DweN3H2/bQ0FrnVJi1Zmom/d8qZfBol9jYj3BKWy2TcSRXto8OH4ArvL3/5y+X7mf7Tvtn7TFt0/vnnD7ZyGLfNFf64bu0Jqr2xz2pYZ6tjP/rRj8repKzqRsmo8hL1fJw2DfZ9y5a2zoSH7SDUFf0KzLpEXbaSQFugnmpD1H3prgXxqn2zhJ57fvNT2VcvjS88F19pVwa0i05OHyXTwdk38kT50g9oG/Sb9ZLeYX0B98oRafcz/JMGRLE0sCTnpzYMLimJQCKQCCQCicCqjkASZ2OUAMr/TPYrGSOIOXUyFxYSxx9/fCFiDMbtveQQidmUUH4QMIibYYJUMDhcb731ipI6zN2q/LwmzRCs9i1sK8LwMfDea6+9BssNERGUDX93uMMdmje84Q2FuOCW4kxRkffKALGvDyLNEm2De78peiEG+vbn2nfffYti6jkFgVITy+WEiQihlNnXCsHn2XSF8hhK1+WXXz7wpk+4lnkuWroUnIiLdCNqdtttt2Jt6ZAMFp+xD5/3MJAOxJm6Yel6LdzAB442QycO0PCdk4gDD78pQUgeIi2eWWaLOOuTjmF5Zpk5ZVWc5FEQDiXA1j9xEb783XTTTctb5JO0UKjtF4YwCoGXdCHmzjvvvKKMekdxRcBF3kS+w8xedsqJe2GFuFcWCcUToVJ/754C/dSnPnWQL9xShn1bl7MgIr0nyr4/uEsb93GSNKLT73HLtLKhHkW9kDbl56UvfWmpD373FXWYUkwsVw/SbNz8H5Vv2gLpQ0IoAyHKs2WyykS9fcC4eRz+jHONMoNMbksQqsgFy69HEWfyT11DGsRki/gTZJi9y5QRhIx+RXuF+G0L4t6kkLBmgzhTpx38I37icdZZZ5Ug+7S5fdxqj+QpslwbHGVOO+7PwU9tgrLGYFR5Uc/HbdOiLo1btpTnZz7zmXVUyp5zEf/6BQtqk0F1G6DNQOZrH1772tcOJofUd+V57733HrTT/DLOkC9IYt+GGE/YtuOVr3zlyOXxfXAWT+FHXyBN4qS9eNnLXlYmWuLApWF9gXKrPSJ1P4Pg04/IN8Jve536U3ff+MY3lgmY8jL/JQKJQCKQCCQCqygCSZytohk/18k2gKNwU4gN6J2YOWxD53Zc65NL63cUmVAoYnCJ6JlKDBoNdFnjpCyLQE2aGUyzqOkizXwlL+0FZFbbzDzlnOWMZbBIBBsSI7JqQQ6wnqCo1idXUhJYDTkgw96HFBREByWOVZVZcPcstSiTrJIOO+ywolQiTnZdeioeRYbF0VTlSlgUpRD+ireZ97A2oRzHvkl9wlW2F/2HNFPeWXp5RgFX5iznouxQWhAz4s5CIg4HcJpflOVjjjmmkEcwptw5hdc90qm2xoIHsp/CLmwHkFB2LV2GcxBtfdIR2Lh25Zm8RkqNu9wb5iyAWPRRzr7yla8Uws/+O0R63ve+95UyIL+l9573vGch5YTFCoUfiHGHeyBHkDQOe0HEcast8BfL6yiyoUAjb3zPIu/II48sVhusyijSvkfEhBJaIrT0H1xZkX3nO98p1k3y89WvfnVRXFm5RdsT7ttX4U1Vpn1DEYcxgkTcWa/c//73L1Z4bT/H+Q2LsBZVhsPibjr5PyzfxIMirg5//vOfL1aP8ghxhLRVj5RrBNC4eTxO2sINf0ltKRjvuk6gjnftqzIkjTUBp54SZG0tCDTkr4OL2uVeeUacBVFbf9f3XjisJZU/By5pQ7WxpE+b28dtxFEZQd6rY+oOIk1bsuvS9tVJ6croKOkqL9Np08YpW9pVbSgR53e+850lL8U
Z6V+LNgeRLH4stuShsotM89x7/U3UlfgWUWXCwQED2l+WpsL1h0hkqcbyMQhHZUB9mErGwVlfpy+QD0hg2wtoq+zRiihjPXvBBRcMCF9hdvUF0tXuZ7R3SDN9u37UmEj/Z3IC9trg+jCvqdKT7xOBRCARSAQSgYWIwH/NORZi6jJN8xoBynEs+TAIDQVlVKQNdCmCXX/8CDGgJBTjqeSSSy4pTvhtAJvyXwRC4fYEPnHYwn9d/PuOchyKJcUiLFrg/453vKM4ogCGghvfwx4BYenYiSeeGI/L1V52TlVFAll+GPnEH2I5DkWGIJpCiaPAhJVH7FdUHA35R7mwXDT+ELMs1pB10iwNb37zmwvRw4s+4YYVAmWHxRrixJIvSog9H88999whsfr3YxYXFDpWQhQlhDOSCF6hPCP52oLgdOjHEUccMXQfuj7pqP3vyjP4U+iH7RVXfx/3lEyWpw5HQUY9+tGPLnhLF7IPsQY37mIZEoKdUFQJLODJHaXRkk5lZVS9pyDCkZWIfOC375WbIMvC8qIE8p9/yFOWF5YAK7PTkanKNMWY0i4+QZoJB3kcxG2fcBHCcQIuMqcmAqab/+18i/iIM3zUPdgiel0JJZz0yePywZj/YEZYEM9EgkBn3RjCoocol/JeviBDlE8EJwsn7UQtYZ0a7VP9rs89Eia2EjA5wAoq6n2fNreP2zp+yvxb3/rWUh49V18802bqg8eRdnmZTps2TtmSd+IFH3FWx+NenGsxVtDXI4oOOuigwaSN9li7SUxItEXbJz36JGUg8kIbwgoPCacti3LY7u/a/sXvcXBGdhGkXbSz4qGtFw9l0BLxtozTF0QfIh4x2Yg8M0GFjB5lXdgOL38nAolAIpAIJAILFYG0OFuoObuSpIu1h035zWqaOR1nyaaBaZfUVgLx3kB6KokT07hDbKQsi4B9bZxWar+eRUstmFj/xP5Y4TKWM/ltEF4Tbp5RfAzsuQsSxPMgw9y3hXVWLciQWC7oubgQCiyLnFqCOKXYWppCiRom4hbKgnJIyQxFGNERZEp83ydc6RM/5dBSO0QPyxvkQhCK4W/XlZLmj1DC4Me6wJ5O9ZKb+ltKFCulqaRPOmr8RuXZVGHW72srOc9r5a1dfoLQDus7bQBrDuWBYkjJC+IslNk6rPoemcNSjCA2LO+yhA+5E+RvV7tRl9vavz73U5Vp1jzEMsF2HrK+7HNCqDIcy1GRr/Yaq6VP/tfftfMt3sG9vQ+YcJGQYc3bJ4/D36muykYQVEFYTPXNsPfyX3vw7W9/e+Dk05/+dKl7iH0WX+TSpdbN/kwkaGtYK9bkAkJGXyJu6mt9QufA4zFugggRJ0vda2vfPm3u1VdfPQitT/uMsG+L+q+cOgW83Q+03frdLi/TbdOmKluBR9c4QHtR1x1LJAnLNO1zLch3exmqP9rcut7HhFC4FyeTI22Lw1jmG210uB92nQpnGEab1N67U9nwvfS1Jx/H7QuQ6iaZlGUTIPpDhJl+qt1mDUtDPk8EEoFEIBFIBBY6AkmcLfQcnufpM7CzIb99hViROBnNQHCYeGcJxlRCObY8cJwZXzPgpFZKpvJ/VXlPsTB7TxAMiAvLNupZde/shxMyyhKhPbBHyHWJfDabXkuQN0FqhRJOobCkdJjI37ZiU7ulOB1wwAGDR9JozxjKuCV9FIhQmDnqG65lQPZyC1LOciB/ZvdZViHSRsnOO+/c2Ng9FKdRbr2rleRRbvumI/walmfxfpyr/G0TQ0Fgs/AZlp9hCWb5rXxCWMAFCeTP0tzvfe97pY0YRYIrr5auBQbjxJnFz0xknDItDaQrD2sFvjga41/UFW1he6P6SPs49QfhS7ryLaIR1mXx27VNSvTJ49qfUffR9ohbVxxGfVu/Y92lzlv+y6+QU089NW6Xuaq36rBvkDY1ccah9so7ZNx0ibMIUD4qryxIQyLdfk/V5opnyFRuw51ru456Fm2hfcemkmHlpW+b1pWv7bIV1r1heVzHrV13oxxGWmq37vlt0gXpWde7sCQM91FOEMS1jGp7andxPxXOgbXwusYpYWEblrjhb1c7Eu/qq8kxh9EgapU1WJog8wc7FtfRBtTf5X0ikAgkAolAIrAqIZDE2aqU2/M0rYgJyq5ZYMqdJWwzFTPDNnaniBoAdw02I4yw8qgHyPFuVb8GaQYH9zY7NrB28qWNikN5CZLL71F7irVn7IMMmw7OoQxSOixlGyZhTTbsffs5/yw5sxcYUsH+QjZIjjLUN9xLl1qmsDazzxMCzGmPZvYp1fY6e/nLXz5U4bc0UZ0glDMKOCXHckR7x8UysjoNkSf1s677vukIP2aSZ+FH1zWUVdYTw6xK67BZq0q/fbTUdSQ5Cx84I+K97xLkm/3OtAsUUfUepixpkJtta7fwI/I/fk/iGsqpstGWsLZrPx/1G+mDOLa5OVLRcs9Q8vvkP1ymkiARRrnrm8ej/Ip3QQ5ol+RtF8kSbkdd1U1SW5v5ra4qZ139Q5BjXaS28kUC5/Kj5z8WdJYPyjv9o+V6QdD1aXNjr8a+7XNXvsODtC3Axk3adNq0ccqWco3wCXK9jk87HVFmwhKydus+8jMwjveTagPa8RNejXMXsRZxco1+oG4fPR+3L+BWW8HCzL59lr2yahavWC6s709JBBKBRCARSARWZQSSOFuVc38epd3+ILFkk2XTTMWyGso05cUeP8P2JLK5bwxQDRxThiNg5tmsNCXOjL0NkGMT+7DoQlxQzmuFg0KLyKB0/PjHPx4eQM83Zv9ZeiAZYs+X8MJgHwFC4XD4QF9BttnrxpId5cPG+qzDSJ9wKaz20+Ef7MK6jFKCvIHNPe5xj+aMM87ojGLsuYRMam/OHOU2lLxOD0Y87JOOEd7M2iukUViNtPPTCamIjUuXkpDkUY96VCGE1Gt7DfmDJUwtu7SMbJhQDIM0c1pybXXC4nUuJeqR5cJtEkg56SPIBmVO/bA3ljRbDm/JH5mL/O+Tx+OmtSYVtAfDSNep/Is9pOpDHjyLDefhVltbIhWiDka+RRjax6iXLNimK8q1iSVlHzGsHUJgsNSOMMdpc/u4reOqLrUlnrUnQdruhv2eVJtmiab8j/306vAjzvFMe8xir/3ce22ztoSoIytCuuIRz+AcpK14IcFjb9iIWywxD+I9no97texdWXciMqI29t7cdtttmwc/+MHFel+ZdrhMSiKQCCQCiUAisKoisEJ2QjdQciLQi170onI1MJmOWG7CD8rsKLGpNwuPvgQMZYVCxppm9913L6crjVrqN910UaZf8IIXLPcnXBsNm5Ftm9yLW/1Nl0VCYOLbcDvMeiLczpcrRWAYuTWdOFJwKI2EokxhDAuA8M8m8IgzYmDKiidlNAL2+wolTDmOzfcRGvLQwN6+P6F48E0ZNCi32fJ0B/ZdsTLIJ5RXy0drsTTS/lfiOF0LlC984QuDvc/EPZSTPuFSzlg02ogdmReCrAkrirCIC2uGUMa5DUsE2NaCiKQwk3H20Qm/wz/f9UkH96NEW6s9jCVFo9wOexeb3/
OLAhwCD4QBi5sg1oQlP2Lje27hGYRtWJSEH65hsRUbyXtWu9PWR3muyy93wySWZNV+DnM7znN7GbFQEv5uu+02IF8WLbWcjdMox/GndsM/p34S5TEOUZnN/K/DG3XfJ49H+VO/kwdhWTPdsQW8lS1YxbI3YSDhop462KAWEwe+U7fa+3jJr5AgPcQtym28G/dqP0TxUH+dTkz6tLl93NZxko5o9zzXl0Y71k5z/d2o+2iDZtqmtcOI+Kjn9XJUaajzw3dBjkpL1HnPtaW77rqr29KWTNeqrnjQ4984OAcB6xCbevznlNI4MOfss8+eMtToC+p+xv5o+kvj31qClFb2ptuP1v7lfSKQCCQCiUAisDIjMFGLMzPmCKh6XyOzaDpp1j1BbIwDoAGqJVNx4lh7g9Tww0DIwNJsr/0pxp19Xrx48WB/h/DLgNFeO04xcrpfyEzTZQBdK4bhb301A2jD1tjQWbrrbyzfGrY8zb4U4ZYy+JGPfKT2et7eW4LCIgfZMRuijCEcKZyUbhYDZkwpWZY2KFPEnij2oUoZDwH77LAOREQaaFtmyxrNEk0kmRl/GwzbsF35i2UkLLi69p8ZL9TlXfGfJRZi6oEPfGBR6ig66j3lzGB/nA34l/f5v098/4pXvKKUFe2K+z7hsiQzaw8rliJIMstp4KL8UdTDCi1IRfgdeOCBjU3JEcDaCxZXvkG4aUNNIoQEIRS/u66xx494OEUObk6znC38KJswdxrhhz70oa4oTPlMm06JlTbEv7QjRZA9SEJY2Q+RsEjbZpttirL3+te/vpxih/gIZbBWINV3/YGlt3BwsqTyAv/99tuvYaminbCkMdoE7mEVSuawyGtPfMs/bXJYcw1zP85zp1Ha1F854F8sB4xvg8iJ3+Nc5QvCjJ+xZLNPOR4njHHc9MnjcfwLN5Ywqyfq/nTEXnnyPiYFwg9l5/TTTy/EN//VHVazxjRR1hwg0i4nQTZp74Jw0FaKHwKk3lMxwhp1Ve8R+ax/WBwh8hFFfdrcPm7ruKiL0qw+qIvEQRW19V3tfqr72WrT2uEoA5YaqocmxUxWyJew3Krds9hiyWWchBDlVl7JY+2YfK+3Kai/ndT9VDizeDYhbGKUBSkyVFupXyDGT/VpsMPi2dXPqJeIemNbh63II+U7+hZL2WOSYJi/+TwRSAQSgUQgEVjoCEzU4oz1lAGmDtdAy9HWMWuGqDKIH0cMFPbaa69Cmo1ybxBkYBFLJEa5rd8ZeMSmqAZPlkccf/zxRckymKagGayGzFa64MKCJ/4M0Fk9xUyspTn1bGiE71rPqNbP3denR7XfzcXvLkWv65m4HX300YP013EN93Gt3426l49O8WJZIi8NBpUn93A282z/qrBUGeVXvvs3AhRBpAuBI0tJwoLFc7gagJvBRprJMydzOa2sLX0G45H3ceWX5YunLiVIPTPI1wZQfChM8r7rlL0IM/yJ3+24+c36JJZRKjvbbbddcTZuuLA47LDDBkomcofyqY1ColkqGAKfKIfwQyA7CAPZA2dtKWss8bB8iyJN7JlGIj1xLQ//8y/2RvMTPvHNuOmo/erCK8KMa+2+vu/6tn7vtMs4tRNO+gikmf2LkJgUWmIig5KMuGJ9gaiAi/fqe13WYoIB5vBnAfTZz3624OVbmApLGyGMaH9jz6tIU1zr+CIjpEn+WMpkiWi4i2vtvn0fbuLqvT4AOR3WLvJLvyTOZCoMi6OOf9LmW3iG1dK4+T8qzIh7XOugu56Nm8fxbVxrf9v3YS3cRZLUboelI5bBhj/1N4gx4xffRt1R1pQT+LWXFfs2xgv1fp3jpCPCDbdx9dzEQywrRqySPm1uH7fF86X/jNekG6GkjmhXLeOrJxLDbX0dhjM3s9Wm1dhE2B//+MfL0nzxFF/xFpfYF66Ol8kyaeEPiy39FYszxNKb3vSmzr4j2p8IL67tuMTvuIa7YddxcLZs9NBDDy1tIRJTG27sqhwqt7a7CIlw4xrPXbv6GYcAmWBlYaadVI/CstCefzmxWCOY94lAIpAIJAKrKgKrLbWoKsdHLVmyZKBMzQYYLKQQEhQKVgL1gJQ1BcXQs7AgGBYm0urhD3/4YFkSd2aFDWxCdPRPetKTyixsPHM1ixbKZf28fW92EhFlULXnnnsOFB/uzDAbLBsAUzxmI11mjVmhsKBwel9bKHMUalcKr43KDQDt3xNiQIRM5EctBjs2bQ+hWLK0mE2h2IbiPZv+TtovhJkZf2WS8hxWOJMOd775vyLyD2GGxKKEmL1ub1o8CUwoP6yG5K2/LqVhLsNFWLCOU2dZOQ1Twix3FHflM9Kw9tprF2WGYqNNiOd90yNfKP38bufJXOHXlQZYUd7UWZZRQRh0uYUXgo1VxDB32lJto/IY5CTl03cwkR/xvCuMUc+0JxRYhPKw8Ed9X7+juLN+YQ3T3kPKJArrGASfAyVmW1Z0/vfJ46nSyi9WrvIZMTOupXn4u2jpcj7ptx8i0qVL5DMyBkGvrP3+97/vclbev+pVryp1VF8chzFwzFIIAXL44Yd3fjuTh33a3Kncsn7SfsPDFgrGH6RtkTeT+M5WmzYsDsh0ean9mEqMC7Q1rKeHtctT+TGd99PFOSamTAxNdxzT1c9IgwkAkzbIPH1NSiKQCCQCiUAisCohQE82JuySiS3VZJlhoKnjrUkzkTAzRtEIywLPzIAbSLHuiL1XPA9/KDUGCWYF26KTt3SBGNghtyyf6RJ7WtlvibvY98WACWnGfL+tkJqJo+yGyXrEZ9x0dcVhqmcUa9gIt71nkPjBgom+pTcnnXTSMt5ZMkRYKEhXyn8RgElfheq/X+ddHwQolbHnTJ/vZuIW+e9vRcu44daWD6PiGHue1W4QMjMlZfgnX4Yp/OOmo47XpO5hdenSpUjjCLy6MKu/1aa2rQ8p1QiQmYo2ebY2EddnmigiyJXoIxFD22+/fXmOFJ6ErOj875PHU6WXX8cee2xZorfjjjv2bueVtanKm3zW709FJsQ+USwia9IsDqKJpdlTpanv+z5tbh+34jGbhFmka7batPCvfe1Tt6fK07bfk/o9Ls5I+rCim25chrWZJlsvuOCC6Xqb3yUCiUAikAgkAgsWgYkRZ8HUUZ4RWZZCLFo6q8v6BDGGkKrF0gaWXWY5a+LMwNNSCKck1htB198yVbesxZIK4ZntHSZINrPG9THkcTJg1zcsWEgsMe2bri4/x3kmjqRtUeaZNG655ZbFSq5NnDlJMtzEJtDlQf5LBBKBRCARmNcIRF+mz7TnEAtDloEmUFhWI/tWlj0rVzTQ9ndizW05LgupcUmI2YxnWOvIR+ORWkxkWWIdS4frd3mfCCQCiUAikAgkAolAIjC/EZgYcWYpAFlnnXUGSzb9jo287Q0Tewd5blkKi7P2YLdedshdlyDh9t13365Xyz0zg480m2p22YeWzIgTiX1M+qarfNzzn711KE6kayYUWYY4o0xZmhLkmj2QDNzNJsdpSD2DTueJQCKQCCQCc4iAbQicTKs9j32GWDuZHLJdwFxYVc4hH
L2CdvCD7RZ22mmn5pBDDun17Ww4dvACq0HWb6zgagkL9/rZfL2P5Ypxna/xXNnjFfjGdWVPT8Y/EUgEEoFEIBFYyAhMjDizBwPZfPPNy54RlmsyAbeXmD12HvnIRxars1h2MsrqazYzwMbR/qYSG+xb8kEsrYjlFX3TNSocpJc9T0Isx2F1x7KAGEzFqZrhxtUSh1iKWS/XdJomsRF7SiKQCCQCicDKh4B92JwUqn9geTybSxpXPjT6xdh+Vk5DRDTOhTjgR/+7sm8JsKJPlJyLvJoPYSbO8yEXMg6JQCKQCCQCicB4CEzsVM0gfwxgkUNO1HTKHQsyhJBZ2Z133nm8WK5gV4uXnvjpsAFxtPHqkUceOYjBbKcLiRh/LOH4DzNLWh1ZP2zD6iDH6tM173jHO5Z4tpdvDiKfN4lAIpAIJAIrBQIsiVliXzrmXm8rRaJWQCRZoMfecCsguGWCQNyt7KTZMgnKH4lAIpAIJAKJQCKQCCQCBYGJWZzZvNQG9gaw9bJBs+eUAZv824tkvglLuK233rpEy4bPBx988DKnbM1muuxXUx8h7rcw6w2Fh+FjnxRLSWO5psMWWCiENdqw7/J5IpAIJAKJQCKQCCQCiUAikAgkAolAIpAIJAKJwHgITIw4i5MfkUFtufDCCwtxtuaaa7ZfzenvJz7xic2mm25a4nDRRReV/WTaEZrNdLG8E850xFJNe97YCw3Rt/766xdvvvKVr0zHu/wmEUgEEoFEIBFIBBKBRCARSAQSgUQgEUgEEoFEoIXAxJZqxt5lcSplHW48Czf1u7m6f8pTnjIgzU455ZRO0kzcIs6Rhjq+8Szc1O8mcX/++ecXbzfbbLNm4403Lks8Tz311EkElX4mAolAIpAIJAKJQCKQCCQCiUAikAgkAolAIrDKITAx4sx+ZvbqusENblCOiA9kb3SjGw0IqnovkE022aRBAFl6OEm55S1vWcLZYIMNBsEI+y53uUv5fcIJJ5S92AYvWzd909X6fFZ/IvjIuuuuWw4VuPLKK8sBDLMaSHqWCCQCiUAikAgkAolAIpAIJAKJQCKQCCQCicAqisDElmra18ySTBvWb7vttuV0Tc8QVk6PvOqqq5p6E/tdd921kD/nnXde86EPfWhi2eGY+kWLFpV91/bff/8Szi677DIIT1z9tWXJkiXNgQceWL7rk662P7P5215oMEVGknPOOWc2vU+/EoFEIBFIBBKBRCARSAQSgUQgEUgEEoFEIBFYpRGYmMUZVJ1GedpppxXLM+TOhhtuWE6qtAH+QQcdVJ4H+nF8fFzjeX2Nd3Gt39X38T6uU71zqmWIkzS7/q55zWuGk17pGnxU3XTFq3rdeetQhS6JPc34ecYZZ3Q5yWeJQCKQCCQCiUAikAgkAolAIpAIJAKJQCKQCCQC00BgtS222OJfvmNRhdialFgiiaC6+OKLm2Ek0KTCnqS/CzVdozBzUuoky8qosPPdzBHI/Js5hulDIpAIJAKJQCKQCCQCiUAikAgkAonAwkGAnmwbrC6Z2FLNdmA/+clP2o8WxO+Fmq4FkTmZiEQgEUgEEoFEIBFIBBKBRCARSAQSgUQgEUgEZoDARJdqziBe+WkikAgkAolAIpAIJAKJQCKQCCQCiUAikAgkAolAIjCnCCRxNqfwZ+CJQCKQCCQCiUAikAgkAolAIpAIJAKJQCKQCCQC8xWBJM7ma85kvBKBRCARSAQSgUQgEUgEEoFEIBFIBBKBRCARSATmFIEkzuYU/gw8EUgEEoFEIBFIBBKBRCARSAQSgUQgEUgEEoFEYL4ikMTZfM2ZjFcikAgkAolAIpAIJAKJQCKQCCQCiUAikAgkAonAnCKQxNmcwp+BJwKJQCKQCCQCiUAikAgkAolAIpAIJAKJQCKQCMxXBJI4m685k/FKBBKBRCARSAQSgUQgEUgEEoFEIBFIBBKBRCARmFMEVttiiy3+JQZLliyZ04hk4IlAIpAIJAKJQCKQCCQCiUAikAgkAolAIpAIJAKJwFwgsO6663YGu0b9dJij2k3eJwJI1iwrK285yPxbefMuY54IJAKJQCKQCCQCiUAikAgkAolAIjD7CIwyJsulmrOPd/qYCCQCiUAikAgkAolAIpAIJAKJQCKQCCQCiUAisAAQSOJsAWRiJiERSAQSgUQgEUgEEoFEIBFIBBKBRCARSAQSgURg9hFI4mz2MU0fE4FEIBFIBBKBRCARSAQSgUQgEUgEEoFEIBFIBBYAAkmcLYBMzCQkAolAIpAIJAKJQCKQCCQCiUAikAgkAolAIpAIzD4CSZzNPqbpYyKQCCQCiUAikAgkAolAIpAIJAKJQCKQCCQCicACQCCJswWQiZmERCARSAQSgUQgEUgEEoFEIBFIBBKBRCARSAQSgdlHYI3Z9zJ9TAQSgYWAwDrrrNNcddVV8z4pz3rWs5q11167OfLII0t8H/7whzd3vOMdmy996UvNeeedNy/i347jvIjUhCPRzgd54tkll1zSfPzjH59w6N3eP/ShD22ue93rdr787ne/23z7298evLvGNa7RPOxhD2tuc5vbNNe73vWaK6+8sjnnnHOab3zjGwM39c1NbnKTZuutt2423HDD5p///Gfz/e9/v/nUpz5V7rn7n//5n+YhD3lI/UnnvTB+/OMfD97d9ra3be5zn/s06623XsFOuRaXLunj1vf3ute9mjvf+c6NuKvrX/jCF5of/vCHXV5P6VbYd7vb3Tq/rR8ed9xxzd/+9rfyqC/GtT+3v/3tmx122KH50Y9+1Hz0ox+tX834/jrXuU4jbn/4wx9m7NfK5MH973//5i53uUtz7Wtfu1EfzjjjjKFlbZx0bb755s0tb3nLTqe//OUvm1NOOWXwro9bH41T1tW3u971rs1pp53WnH322YOwxr1RDp73vOc1v//975u3v/3t437W6a5P28ODbbfdtrnd7W5X2qvLL7+81M2f/vSnnX5P1fbUH22yySbNNtts09zwhjcs7Yz25uKLL66dDO6NAXbcccfmZje7WSO/vv71rw/FsY/bQQAjbtZcc83SX8BAXNdaa63m73//e6mTl156afOZz3ymgct0ZLbK+XTadP3gdtttV9pz7eDHPvax5txzz+2VjAc/+MHNve997+ZGN7pRc/XVVzevfe1rm9/+9re9/BjXsbGVMnDzm9+8hKEMKDPyoi3y7BGPeESzwQYblDZef3r66aeXOLbdrrHGGqX91r/+61//Kn3vmWee2fzud79rOy2/+9SHTTfdtNGeKJP6SvhecMEFA3+nk2+DjztuZqs8TacPnWR5+t///d/mQQ96UCmrf/rTn5qLLrqo+fznP78cAsrhk570pOZWt7pVeXf++ec3H/jAB5Zz54Exx33ve9/ma1/7WnPiiScWN09/+tNLXr373e9ulixZ0vndTB4aky1evLhZd911m7/85S/NZZddVtoPdact4/Qr8Y26YYyobhgv/PznP2+++MUvlmu4yevCQyCJs4WXp5miRGDGCDz72c9uNttss+YZz3hGGdTM2MMJekBZN2C78Y1v
XJT/u9/97qUD14nNF+KsHccJwjFvvG7ngwGJvxvc4AZzQpwpIzvttNNQfNZff/0BcbZo0aJmr732KuUqPrj1rW/d3POe92x+8IMfNAcccEA8Ltetttqq2XXXXZvVVltt8JzC98AHPrB52cteVpQ9BAKFZyoxsAvi7KlPfWqz5ZZbDj6hkBjIGsC+8Y1vHDx308ctQuDlL395c4tb3GLgh/Td4x73KAp6TUSN6xYG/qYSxB+Sri/GbX/FV3kyaK/j23bX97c8e9SjHtV88IMfLIRL3+9XVvf77rvvMiQXhRbZc+ihhw7qRd+07bLLLoXw6Pruj3/84zLEWR+345Z17S6lSXmbDnH2ohe9aEACdKVh3Gd92h4EESLENUS9R5R89rOfbRDPtYzT9oT7Rz/60cu0QeoQJZa/n/jEJ8JZuSKm99xzz0Gbxq0xAdJN+4fsCOnjNr4Zdl199dWbZz7zmY3+gzIa8o9//KO51rWuVXBB5G2xxRbNz372s+aggw7qRXDPZjnv26Yjcl7wghcsk64ax0jrqKv+QD6GmAgaRjaFm+leTQQhROp+TZ165CMf2bzqVa9aBnf9xCGHHNJc85rXHASHpEUs7L333ssQeyaB9tlnn2X6V/0l0k1+mlwL6Vsf9Nv6hRBkjnKLHH79619fHvfNt/Cr6zqb5alvHzrJ8iQv5F2d90g6BOYrXvGKZfJe/iIjQ65//evH7XLXO9zhDiV/lPsgztR1dRuxNdvE2fOf//wyeVJHRHv1gAc8oHnb2962zETouP0Kv4zD1MMaH32NSU7pOuaYY+og834BIZDE2QLKzExKIjBbCFCgUxKBhYSAwRKhgLFYaMs3v/nN8shA6MUvfnEZ1P/1r39tvvKVrxTrm/vd735FiUYoPP7xjy/Eig9uetObNk95ylPKtz/5yU+az33uc43BIcWO5Q7lc7/99isz37VCUD74zz8zu9waTH7nO98pTw2igzRjBfbpT3+6WIYhdaTFoI21Aunjlvvdd9+9kGawMEPKcgNJYuBqQGiGPjAa1y0LJQptl8AMrvBkGdEX4y4/J/UMDrXCPqlw5pO/8pwyqTwgZn7xi18UkpmC+8IXvrDZbbfdykx93ziz1CTKb5sgaFsLjeu2b1nvG+dwT7FCWM2GjNv2CEt9QxbIi7POOqtYrlJW5QWLXZYzLCbIuG0Pt6wi5DNh2WNSiZKHVKAgs9xkBUKQV8997nNLPRXWJz/5yQZxxlIKJk972tOKhXdft8XzEf8oz/vvv3+xPlFeTFKw2tW2hIUTHMTXGAXxf/DBB5dvkGhTyWyXc9ZMfdp0pJO2Rdre9a53lXbXBF8fQaAS1rBHHHFEubbrVh//hrkNKyJtNeKZpZHJSdZVJr/+7//+r/zF9ywzkWYIcZMOyOLHPe5xxVob0fLSl740nA4mpUwSnXTSSaVPYB0qTO5Y6Eea+tQH9SNIswsvvLCUc3ghfDbaaKNi4Xb88cf37osHEW/dzHZ56tOHisqkypN6BUuiLzCm0Ycbj2ib9Amvec1rynuEaZBmyog2a1JEbgmwxz8TlSyOCWtdk3bSJh3GW9o4aWF51qdf0ZYGaWY8w6L5z3/+c5lUQGYqy8pfjOV6RDmdrgQIJHG2EmRSRjERSAQSgURgZgggswiFvW0xVvvMncEgoZSFYmRASAGgOLI8oxyQJz7xieX6m9/8phBkBvyIJ1ZjFAeEBEX0iiuuaF73utcVt/U/Ssgb3vCG8oiCaoknQZAR/rBACfHebL1BIaXjV7/6VS+3lheYGSWWRoQlzpe//OXm8MMPL2lHGlj63Mctf8KviKsrrIQHF+m0PIki0wfj2r+8n10EKLg777xz8ZTl3sknn1zuLck67LDDiuJrSWyQtOOGzqKQ0o3wqMtv1/d93PapF11hjfOM8vPYxz52HKdjuRm37eFZLHeyjPXDH/5w8V/bY6mo5W0sxN73vveV5+O2PZaOs7yQH6xVjz766PK9pXSs6sTPcrwgzp7whCeUJaKWZyGyiIkFCrF3SKujjjqq1Ok+botHI/5ply3LRL4ceOCBnUuetN/apg996EPNS17ykoalMKte6QhyrSuISZTzvm26ckUQUX2XZ0aagqSQV4iWSUlYGyFwX/nKVw6W18OfZQ7SVnmEufLDYoxwG8TJt771rdKHItzUcZMxiK2wptTfBLkAD9atiEUWSJb7kT71wWQVEUfWb4S/Jq5MTlm+iTjrm2/Fo9a/SZSnPn2o6EyqPLH+JfKWNZn2w/gAwWTcIS9DlAOif59vVlasYwkrNmOmEG0r4k/5NXFgS4I+/QpiTFuqbpgYDZKXpRkyG4GMVI2yHeHmdWEgkMTZwsjHTEUiMCsIbLzxxkXZD890NjodnQGh7JqJovjaV8Usi4G4/QxivwAzjwZKBi5PfvKTy0wV4sCM9nvf+94yu+i5wZZlBgaRBvIGVbWYTdNBGRxwpxM3IKLsu/YV1jTCNWNtZvvXv/51mb0/9thjl/NqnHTaG8qSBbOmXYTIcp52PICl/TjMtOps4WlJATz4SyIcgyoDFr8N/MzEGzyLq8GMwSaszIJbegNfg863vvWt5VkEjxRhacAtCyBYvv/97x8bU/hRKlkXhUk+8sZSn3ofkQhvqqu8sBzEjDCrEzPpysp73vOeQZmq/RB/gx0DYQMXM/YUyanKRFiP8HuUILqUNUpjkGbh3oCLP0H6GDyFNYmlkzGA4h4BYWBmRtLAc5iwbqMsmBG1b09IWOC0B6OINCQdBVM+UgT6uEWSiDdLkjbRRZExIxsKWR+3Ee/6CidKFqn3T+uDce3fsHvln0UgpVJZNBCOZSDSavCvDVKvYhls+AVDVhTqHaUusNx+++2bxYsXN69+9asH+Tdu3UFuWPKJeJT34sTKBwZTie/EdZSo/+o+K4AgQUe5F/aw/dq0J8qfulTvOaYss0qCq31pEGfKg7Zf22RJlW9CEBfwN9OuzHJHtLNTSR+3kT/j1IsIVxlQDoWjnWVFgaSu9zQMt66sXnzzve99r9EnDhNkgeXflEd5pi9Th0Ppj+/GbXu4j7yv2zN5YZ815SnSL37jtj381F8Tlqu16Ef0FdrTKAf2uSOsbWs59dRTy/hAu4YQR0r0cVv71b6HozYN6YIQ0wZLLwJH3+a38OSbsgYTpB6CTV+GRDQ2GCaTKudd4XW16ZbG6zuJdO27775l7BTLzKcqS6xcLGENosL4aNHS8QBrILiQcdsnbuGhnVI2kUDqqb1HYw9PfZ8+ntVf7Enpu69+9aulLil/+n/faUOJsWKQZn7r+zzjj7GO/oVfwvB9TSxon4whjfUCJ36MWx+4VS6J9rYWYwTl21hjlHTl2zD3fcrTHnvsUaz09Ll1GVUOLN2FhXrZbjeEPawPnWR50kYS7Xw9dlEm9LXi60+dsxw3RJmWj7GNhHfKpD5KvmtvfTdMlCdtb4ztjIn02e0+ZNwxqHKtnYjJoAhX26qc6q8Q7yTa1XH6Fe2Uvo/uw/9aPNMmapNSFiYCSZwtzHzNVCUC00LAAIdSG+IeqUMMSiiROnu
iwzAQMctHgbO3BNFZem5Ayxw65E53ulOZjdQZI7FCKOrM/plMIyuIzUIpa7XozIUjDtxTHscVZti7VntQibu0WpYl/vyLweG46YSNv3bHOW6cKJ4IxlqkESlEITKgIjp24cQGpPHMgJfyjBQIPMXFoCOWOhikyK/YTD4spvjBrfwxeICpAQrLhlEiTyzTicEPP9wbyFjO9ZGPfKQs/RvlR/2OMhczf57zDyb+DD6UoUsrQpVpPaIx3Aob4WZG2ZKOIG+Lg9a/UDgMBOW3co2AYklBeQyhiPjrkhgksogg4i8OyjRlXJzFjwLK366NdGt/EcPyVbrf8pa3DF4pg4FxrUCHg3i3aKni1Met70MpQXIqG5ayqFeWmSKcgjTr6zbiVl+VYYNXg+naYqkPxrV/XffKSuz5A0fp81t7E2Qmqz6DXWQY5a0WljbeGfDGINp7z/zJS2Vm3LojDBZatVAGH/OYx5S2kVXFKBnH0slG7eq+sIJ0GuWnMhRLkdvuokx3EbyszhBnQZBTmmO2XRtjjxiClEK+wD8Uw1gyxV9tu3KOqKeIsxZSZ0LGddu3rIf/YYkSvy07pMwi+00y1GIZtjJF+UZK10vMancsFGARIu3KjzYKUak9DRm37eFeO6IMI4wo0ggzfUL0u/wmfdoe9VubIY7IwFoow557zx0COfI7SJRwz502Ez4IRYRNH7fhT/vKAslyVP6zSkWSKS8UcfUvRH+tfdXfccO9CTqWaoi8KHvhvr5OqpzXYbgf1qZHHnCjHOvTox0fpyzpr3wTAjN/Jsfkw7jtk+8RXbsuHQ/VgqDRXr/pTW8qBEeX9ZP4Iu+ISb4gNKLd7JqUUt+NteQn0Z+/+c1vLvf1P+2MdpI4UCBk3PrAvfKKqEFCipNJEuEiXok2fpgMy7dh7vuUJxjoE+QfsjCITmNmBAssh7XPw/rQSZYnSw+VEWSSq3GBvAnLZP2PuqfND4JI2ZA+9ZIgFo3JajEO990wseUFf7hxhbF96bQJYYnfZwyKYO8SZT3aLdsI9O1Xwpqxy2/jb6LcpixMBJI4W5j5mqlKBKaFgA7Tn2UYBIEVHZ1O3uCdwqPjMOA2oHeQgEG8Dq22YEHKnHDCCcXyg0UHJY8iSih9LL18Z2BgcMzSgqKigw5Fx4w3JcvAQgdu6ZsOlaXXuKeciXNscEuB1BFTRnTinhuQSmdY1Y2bTrNh/BO3vmLJAOWDsNZjBm8QTDFgXm7AQkGoZyBhhIAIqyfYOrwhlAiWZQZfSDcDFoRFLfxGtslPBJdlfghObilBrPEoxpTbYWKAD3+kJaJH+u0hYRAu/gaf9swaV2Bt0AJDZcoAkyJg5lc5sLyIKTxBcMCkjj+3BkfCRui1SZE6HjGjaJY9RNmgRCtbFI9RaTcginKp/BKkL0HYIQApCCHcGiwjJYcRejbgJXBnrRJi8GlGU54bgKqTIYhORBCBUR+3vvE9UfcMSOUnQfpR1i0PixnaPm6LJ9U/eCE1SSwtq1533nZh3Omw9dDsseWAMIQp6wb1gFWGvDLwtz9Tm2RCaGiTlCkz/qw/KI7SLc6Be5+6E2QKggOWSDdkmI22hS/PYoKglYzys0v5bLsLMpl1V5Trtpv6d1226ufuQ4HoilMoxlFGhKt+W4aiLspf7UzsxYeAjv23QlFuW8QhhbR/2uFI67hu+5b1Oq0II3lLtBnirqzY9ybqJ2WUAq88qBvRX9X+uBdflhdE3X3nO99Z8jk21IaHNh0JRSKPxml7+KX90y8hWU3oRFtOeQwFu0/bE6RbbSFYIvaff55rh6NPCrKKgtyWwCraoD5u237Fb32/Mob4YB2kjljyx2+EviXz0ousjUmimAxi0SSe4q7e1hZP4b/rpMp5HYb7YW26MYbxC2JHmvZdSgqSccsSax19nH5GHTL5EKdT92mf1Ed9PXE6JoJX/2niS/kXBlK5Fn0F8hv28kl5qYnhmCBFkrfFWIsgK7oEMS1c5Y8Yl9R5OG598C1rRCS8cYHVEsaqYWXGz67VBb4jw/Lt32+X/9+nPGkXjZWVYZMMJiSQUEgnbY0Jnpi4rUMa1YdOsjyJA/JSn2VsZzm2uijvtcGxTYUl0ouWTt6ZvPVc/STaBitPCLLSkn/fGmvGkt7ysvWPGxa79tmEDX+VTePcmMCYjTGouEU5NoE4k36lToJ2LNpQ1rkpCxOB0esBFmaaM1WJQCIwDQR0kARhhTQjlAZKqYFUkArlxdJ/FBUDFWQPQixICQNjBEdYQSCgSK08mdU2GH7HO95RBlG+pcyHohWKQPlwin9IO52+ztFgKgZyFJuwNKIEhoybToN8g3sD2b5iAGUgxw9EmEETJdWAJHAKK4XabySMwSGSiQITM6nMy0OhMlCxB0xbkCIEhganxMAyllwZuIab8rL1z8BX3IRrxlg+GvQhJoIsGzY4bnlVfsIgFFMDR6QZoXjH0lfvEUckNkVmFRfx59Ymy8ofAsxgqEvqd8qVJcMU9zh1lZJKcRgmBnHymv/Ks6WtJIgy8XQfyy3t7UIoGsi/LkHchrJQW2OFW/WDIF2ibvhtIBsSSyr6uKWUEuEjddQDyxqlS/oodxFeH7cRp7jGCaZI0ZoAjvft6zCM2+7av5VBFidBDhl0RzuBbCWUTO4QEMi0kLAMs7F4F3EU7qJejFN3Ik9YO6gr6pjypq2kjIQiF363r9qTqf6Q7cTSz6nceq+uDpNQersmAII4822QN8jFwFrZjnqjLkZ7yn0QK9oMeSKPvC/XAGgAAEAASURBVFdX+UURj/rax22fsi4eRD5o57Sz/tyLl/ARqkQfgTDwjCXaKItmS5S444cJF2lSvnwXWKtHpG/bo/zUSnTgzv/aIrhP2xP1WDy7RN9IuINDSHvJm+dhbasc93EbfnZdY+ygbyMUW/7rIxFM+myWsCZ8QqK/8DtItLB8Cjf1dVLlvA5jqja9dhv3fcpSfNO+9mmfLD9XdtXh6DsRXkhl5U75U2ZrQXYob74jyot2LST6sRhbxXPXIMHCTf1O+TFpF++U8XAf7satD9zzT58WUre1xkfGlV0ynXzrW57CklJaEUIIesLiNfqrdtz69qG+n43yxB/tjvwg4hx5j6iu615x0PpnEl1eKCNBCmorYdDVz8TndAZ9BVE+jdEIMkrdno0xKKu2IO9qy+fp9Cslcv/5x+KVVTlBcg/bBuA/zvOyEiOwxkoc94x6IpAIrCAEzODpOA2YgqCJoCmF/toSptXxnBKGSGAaXYtBGxPvUBD4b8adIMhYBixatKj8xcA43Nb+DLtnTk4MvNuKg9ktyrW0IQv42zedw8Id9ZyC5Y8IN2ZJY78Rz+tBn98GHqGw+m0QEUp6EEmeE0q6tIZiI00x0PN+14p88duAmdtFS3EeJpRPFmJE3BBaBtjKRliVxH4kw/yonxtoEIRNe+BI+TTIRUjJf2UiSLbYuyr8MkCJ5SPxrH1FliFvzV5SvmJwzSJEuliCRDlpfytPzKwaPCr/lo/GgLLGFAlaL0EJ6wKKQZclhKVJBGnTpaBaBseKSh7by4dSIj
y/hS9PlQnSx23kET8sWY0yxXoBKa4cmGFGLvRxWyLyn3++CzzrZTe1m/p+FMa1u6577UoozvEeGahsxuyvAfylS62lPGMRFhaDQaKN2nusb92JpUXyX9uiHTS5ANsoNxHP+XCNMtQVl7oNCnKFO3WAUhOETChFtR/Kk7YNWRblm8LlD0GlLrJWlRd93PYp6xEfdbMW+SBflFHtF0ECalOR36x5R0lMalAC221ptLlR9vq0Pcoaywp+KNesehB4rJBZ+bEUMuHAkrFP21OTHF3pEi7RD0Tb6HcQGu5Dou/Vv/RxG993XbXtylfsPxjETdtCyEQH6xPxrfM0iNcg9brCmFQ5r8Oaqk2v3cZ9n7IU39TXvu1TlHdjhFr0g1YPdImJD8T4oqXjA+2ncZw6bAJU/Y1xVZSj2o8oL3VZife+05Zww0pYX281Af+Vff6NWx/4aXzCGk85RoqYwLJk32b3yDHxrzeJj3hMJ9/6lif9tzQZq4gjYXkYRFHEJa59+9D4bqbliT/wQn5pJ/WNJq35q63T/rAANWFS9wkRvmssvW/rAN7JE4eLdElMwMY740B1msWusqF9mMkY1OoWlvXE6phTK6uw6fQrEU9tc5yyaTxVb7sRbvK6cBBI4mzh5GWmJBGYGAKxCfGo2aJ24O3j4WPg1J6VjOf195a6sLQJxax+1/c+lslQYLrE4I2CYEAZA/A+6ezyc5xnyAmm8KFoTfVNWykwkCAxaG1/T1kK/GKgxg2Swl+XhLLX9c4zxOVznvOcwQbFw9yN8zzCGmbpY+Y5LLko2YFTzAyOE0a44VdtDRPPXSntiDODdOkL5c07Fn32LPJOmbCXWm2JUu9j0bbyo3xblkPMcIZ1m98I4bAYCSsLz2sxs4s4M9hDvPkzULXMg6JB6Q/s+rilIPne7G6QZsJVDw10w7rAsz5uuQ8JK08D72HpC7dTYRzuhl0Rr22JfAli2Xsz+yxZLAtSlqQTMQTT2pKn7VffusOy6lWvelVZCouIQTD7U08R9cPKYYTbLkfxvL7Kb0uLlc2wOq3ft+9ZqNYbcdfvo12MdrJ+F5a98tFfCFKHMh3kt2Wpykotw8hIRLe6hPihYFG8+7jtU9YjPnU5j2eRbmlEoFKwiXQpvyTynhLrmXRb2qM9IuqkZd5dEsRWn7ZHm6BsUvyR2qGcs0iFGQtJCi3iLMq4sNtlpt32RJvZRYT5Pp7rm+WzsioeLHHbFkDKNFHv+rgtHw35J/wIx7LAmBxAgNeijAY5E1a93lPqxaU9AVN/G/k92+U8whinTQ+39bVPWaq/i/soo36P07dHv6sejSv6Gf2iPxMhtoVQvllNqb/GJupCjJ9qf+NZlOX6nfsgV7RPSDITTQh1gmAZtz7o06JPZdkUVlHia9IEYWbpozFR9JvCmG6+Tac82dvMUsNoV0dtYdCnD5WOkJmWJ/7ECgztcljDa0O1SSYH9ZvG6LGVQYQd1yiTXRZ+9fgp3Me1rTN4HsRZ+DmdMaj2W58cewSKdxv76fQr4odkjP5C/E1yDiMUuU9Z+RFI4mzlz8NMQSIwcQRCOTU46RKD2Vqx4sbgfzqCEIiT+Ay2zMoZXNnPx8x7WImM67dBksFiDPjb3xmYEUpD3PdJZ9u/cX6zRglTfQqDAamBnlkwgxOD0La0CbIYuBkUdElNGoRSwp2B0LBBbNtyp/YXfpR1So68NkgQZwSMsHZtWbHV33bdx6BqGNaRX/KvVsoNDOv08Lur/NVheo+Ek79tJbomSSP/fWtgGJvViquBd9SD8DsUUr/bVh0sKYMgiP1Q4rtYQihdo04ipQia2TdQVYZjGVgsZVU3QsZ1G0pOV/1kxYFQijzp4zbi4RqEobLRxqV2Nw7Gtfuu+67yH8paPYCl4Cn3yiqiNJQDClu77arDqcvaOHVHeTUrztpKe4U0Q0AoW4gPdazeC7IOa9z7iG9cp/pulLuo80Gy134hMUiNo9/SE/j5bQIAWROWZZ6pp9qxKLOehcgHincQNn3c8mPcsh7hRXmO366hYGr3w0LC81i+6z5E3lGQ5G3siSPu6ktN4IR71yj3fdoepBhh9dZuoy03FjfED1z7tD1hnSUd6ks9WeWZPxKKa0y6IKSQDrVEOYl2tI/b2p+4jzIQcQqLkK72KZZ0yod4bzwg/u22OfyP66TKefg/bpse7uOqbo1bluKb+tq3fYKb/iTKf+1X3Y96j4hjadbeu8zeswhjkxBE/4jYCEKo9jMsxWPCVPtrklK5qSepfGOfScSZMi78PvVBOyv+0hekWcRDOEHAiHdttT7dfJtOeZKeGiMTOcM2sB+3D400xnWm5Un+RJ61LcD4bSxur077VA4jzpQXVoMxhou4ucaEQv0s7rveKatEezOdMai23xYn0Zfpw+v8j7Bd+/YrrF+jTdIPTHXwTx1W3q+8CCRxtvLmXcY8EVhhCMTySp2qAU09SKUcOonMwIR1zEwlNl02ILRksyaMKC/EAGlcMZNkpqnLysqSkPCL0hCK36TTGQNCCk29vE+aYkY8lJlh6RRfCrH4Ux4QAyFm5UIh8Ux+hRWB+ziZLdzbS4Oi316+Ee9dDZT4KUwzw4GVd/KfBJblxxT/QiGjBMK7VhSlPQaYLCvEPQaELEPq+HNnhpnihdyqFcqIgo10DXAMqlnM1URCvWE8JYEgVhwcQQymEFV1OSwvlv6r84DCV5Ng0hCDvvq5b0PhGHXKF9KOVSGLLTPVQUDAK5ZjhBVbH7f8oeSYfW9LPIuw+rit/Yq8G7VMc1yMa3+77mNAXL8L66FaqfRePigHlu1YckIs4R0lfeoOQolFAXLBnkGxF5nfyijFAFk4ijhTVseVONBkXPdd7mIDe2Rju22HEwklMb63dFn5piCp88qkZ/6I9ohSSFhp1gqyuh5EMgW3j1v+9Snr3JOutj+WrGkv7NlZk2f//urfbTEFUHuhHQgrHekOJc8elbXYBByRGH1mn7ZHH0qC+K39Dcw80471aXuQFNF+itup1RIl9ZBo38IaRNuuzMr/aGO4EYeIW7Rpfdzyoy3iBd8g5MICqd0HK2exgXu08dpB7TmxNG+UTKKc1+GN06bX7uO+T1mKb+prn/bJd6wm1fPYXqH2yybu8sE2EouX7oVmskb+22u2lujXYkJL/bYqwaRLWxBhJPJVv2r8Ix6xJUd8E0SL8oAc7VMfoq03RmmTw/wP8lxdqGW6+da3PKk72i7iW+2NumSSuD5oIeI2Th8abuvrTMuTcRj81bdFS5e2yqdaop6qt8NE+oxdoo2t3RlzDxOnaNZ9hXKmbyHGi33HoMqBpcCIQGliKVmPk+t49O1X6hPeEXFhmVf7mfcLE4FuU4WFmdZMVSKQCPREwIbdhAJtwKEzbe8nFZsrhwVUzyCWcx4KAiKkJisM8MNcuyaFlvOg9SBmxaQllARODKRi0G3waYDQJ50GNgaAFKW+EgPPmizih30jgjCjOIwSA4EYjBp8UdqJQXH7GHDP4xAG5GNg7LmZWqdyWR5R4+1dLTGQ8ywGzO4RbpQxonyMK2bohCe9k
Q/xrZm8IOlCyQuizWlUNTZIO+HCIxSq8CeuQQjK8yiv3hmoO0WMwEeZMzAMIpDya3P1YbhQYO2dR+RBDPz9jr141Jv24DPyqj0z7rsQhBDFMDac9Vw6WaARA8zI0z5uDfBgJT+ddBuiftSzp573cRv+KFtRhoctD+yDcfg77Kou2RcnRHmMJYTtpV6xbNSAXvlS76Ncxfdh+RJ55HngPE7dETYyNiwGfE8JDEWjJpy9m2tRjiJOYekrTnAMpbcmF1nLKivKkBl2f+49847Yk9AzEgR0+bH0n7ZJOYaHZZF93PKjT1mPMJEEsd2AZ6xOYskaqwokkGXR7b/3vOc9xQtx9S6WRLK4IdrOWDLvt7ZX/8hqOpaN9Wl7QqlT9kxIhcAriEhkgva3b9sT8VDng0So20N5ERL1BBGiHIREW6m8xCRDH7fhT/uqjQxClb9RdqJtF8999913QFZq5+Fu+ZW2xjcmF0bJJMp5HV60F6Pa9Np93PcpS/FN+9qnfQoiVNmt8xYpYdJOWVMOY1KHZam6HWJyJfaoir5P+yDPfB9kO/fGBZ7pPy2VJzFpEOOn8nDpP32nje2J8qUd7lMfxFcc6rpSPFv6T/8QY4a6nHs/3XzrW55isoEVug3zY8m+iZS6bRKncfpQ7rpkNspTrAawnUjgJixkXxBfkTddcQiMlZs41Ik7JGXXJEb4YWymDQ2x55781D6wXu87BrUnZJBmJj9HxblPv6LuhMW1PeqSNIscWzWuaXG2auRzpjIR6IVAEBqUhdjb4INLT3ykWFEK7JnDCmHR0hmpIDhCyegVUIdjnRuLKR0eKw2DYhZj9eCt7lw7vFjmEeUdwaTTp8Q5MtqsnAEApZtSJJyQcdNpQGBAZrDWx0pEOAZdBhBINxYZBoqwjNk1bkKxcz9MjjrqqKLQISJs5h5LAw02QkIJsZEvNywlbO7KgkJ4BgzEgD8G1fFtfTUgs4Evvw38zCoayBhIR3iUGOUhSIL6+/a9wZBBByLMQISFDisK5GgsFZAXSAdic3X5JM5mxsVf2GGhFwpcOxy/lSmkKEXBcifLZClfBnLiq7zHrDrlMIgfZSSU5dpf7uPodQdjGJQZ+CMRxAsuUV7lUS2wCuI0FI/6fdxLD8s+s9JmSpUZ6ZXX7TLbx636zNIRUQoLA3ekN3JBulmdhJLTx23EO8gWGMXynHgX174Yx3fDrk6Vo8yxOpCnMHZvT7FatFnSGnXrnHPOqV+Xe8SE8sdP5YTC3qfuUGLlExLJptPC81ueq4vD9vNaLiIr8AELE20Yq0lthPZI+6Q8aDuDdFEfYj8XhLYySRCUyCjvYKoO2FAaeegblnGesfKLttshHdFO9HHbp6wHhMrDnnvuWZb9s0KQH0Q90Ob0Fe0WUkBbRCGGg7zlL8y0w04sJH3aHgonv/R/SCPW16zcFi3tG6Kdq9uTPm2PfdGQI9oT7RTCmL/6gzq+EWfKs3qg/JtAQDCoN9Lp+xDpG9dtfNO+svhDNusL3v3ud5f2R1+DhGaRqI2Wh8J2Rc4EQQOfuv9u+13/nkQ55784jdOm13GJ+z5lKb5pX/u0T5ZUGwMpu/ZjUi+ViWgT1Xd9Iws+bap0sdph+Y2wVZ+lV78cYz59ubJrPMNyJ9oI5ZjAPSzCbLlhTGfyQpvDQld4+p8YS4YVfp/6gEiWNu22dsx4QltsfBNjHIe01H3STPIt0jVOu6mfjUlfm9DrGxFnxpDi5mASEwrSQMbpQ4vDjn+zUZ6017YMUf/lhbYCgSY/YaYfbVva1lHRhnmv/9TX2zxf2pSdUVKPTY2fgiiLA536jEH5ZWxDxPnlL395Z9D6MZbhffqVsBzkoaW+sdy3DkCZdnpqysJDIC3OFl6eZooSgRkjYD8VQgmwVIWyYXkcEkEHaKAVAx2Dboq9wRAxuCUGB10S7+Nd/I6rAQUrBGKwbuCsE0V8UOqIQV8oYOXB0n9hKRLX8M97J8BR7jwzgDGTbiDAEgg5GDO23I6bzgjHN+NKfIP4QUro0ClbiCOzjCwfQtmPwVOdjnY4/ECuGKjAG3kjP8wAB/6saghl2Ay9q3yVfyyaxEH6Y7DaDiN+c2MALD4GJeKM0DSANkg1eCZh1Rdpjfi3f3OrnNmklQItT828GqwhL5Bm9eBMXlHikJ7yDj4xmw2zKLP87RKbthu0E2H53kCdvwaJyheplxfApusPfiEGkfZ+ozzyD0GrvMKD8hGzr+E+ZvnhEnUm3tVXxKQBHWzgzcICaQYb5ScIRd/0ccu9U5+cxCoO4goLdVweW+4aedbXLfcxOK4VFM9r6Ytx/W3cRxzFmVKmLCvT8kt9kKddEstHfW9PnbbIM3WHP8oXbPrUHeUMWS/flGVEHuWT0olwFt/5JqxAlDXppkArw8o4ErVuF174whcWXNTBWIYqLQgc7QzMuCGIGktY+Bn1VZutXvCzrtt93PYt6+KiDooHxU97K+/1B3XauGtLlLH2c7/tY8cKgvBXmYaZsqh+Ci9k3LaHe/0RolLY6qaJKuUw+tnYr4zbPm2P/GE9qxxqT/SBrvwV32if+UsofZQ/aVIelAtpcphKm2zs4/bfvi/7P+ohC279M+Vf+VBWWJtpS975zncWsiHyRDvImlw9hxFlNvYNXdb3//6aRDnn+7htesQk0hC/+5Sl+LbOrz7tkzBhhgxVXxHkQZo5uCNISNib1ENMcidfuHVvkg0REcQ3P5EtMfGmPvgjiNXPf/7z5T7+IeJijGcsqSzqO/WhymjdRvapD8YnTmJVTvXxlv4hpvw2GRREX8Sjb77Fd3EdpzwZw4aVOyxiDMIPE5jyU/9gtUHIOH1ouHWNMhHPZlqe5K/JcXVM/VP/tW/yXhkxRghphx3P9Q8mUZSR2CpFPsCAdH2nzkd7gzTjHpEefbZyMe4YVJsgviHuu/70TaRPv1Kv2Ojy07Mg0iP8vC4cBFZbujSjaLkaRwpySiIwFQJZVqZCaH6/Hzf/NPwGHQYzQb5EygzoFy2drbY5Of8mIUgCAzUdLAWlHqTNJDwDOgqtAYyOeZRMOp0GVQZJrFKQKF2DiVHxs5yHEuukxTotCDSz0MQSH8pSLQYLscTIALomYWp3XfcGuPLewNRgo102ur4Z51lgYcnFVPFRNhAayibcaiViqrAMBJEZMGDx1V4uO9X3o97DHa4US6TmbAhCy6AVPhTZIPi6/O7jNr7nN0ID7lPh2Mdt+L+iruqqATrSitI3TCy3ZBEhjxDJXWLgS6lSbyiltfSpOxQOg2zYtutg7ed8utc+whL5OBtlGJZhMcyyKCxPutLcx+10yrp2S9uh3k9V1rvi1/VMPLThCIBLl1pmtMtL/U2ftoe/4hv+tpd71/6679P2UEr1rYg3+TxKwl/lV18xqh/u47YdJgs7/Rkib9+lEyTiRuBQ55UyQrmOeLA823XXXYtSjAAMQqbtf/v3bJfztv/T+d2nLA3zv0/7BEf9lXBZmg3rC/UP+lt5o88Py6iuOPCLn+Khvo/qy7nRRiJwjfFGjSX426c+IPq04VP1mV1pmM6zhVqe
jPOMl+SN+l+PM8fFyYSWMjNqojD8ks8IT/3EsLZpUmNQcRD+uOOtiHNeFx4Co/TkJM4WXn5PPEWjCtTEA88AZoxA5t+MIZw3HrCYYI1gpq62ALF8yOAVgWAT05REIBH49x5ULCoobCxYptoXKTFLBBKBFYcAayOEhwmks846q1ieDSMMkSj2YArrHFbWsW/UiotxhpQIJAKJQCKw0BAYpSevsdASm+lJBBKBRGBVQcAG+zZ0t0TFfiQs12xKHWbiljumJAKrOgKWgNl4GmHGYsVsdpJmq3qpyPTPNwQsAbM3oP3j7E/kz3Ix5JnlmuouCxjWt6yUCIsmpxLGErD5lqaMTyKQCCQCicDCQSCJs4WTl5mSRCARWMUQsAE1ksxG75ZT+COUDXvR2K8tJRFY1RFQH4JMZoVp35yURCARmF8IWAbGcto+VZZf2vvK8s84zCFia9mXSSKbhR9zzDHxOK+JQCKQCCQCicBEEcilmhOFd2F6PsqEcWGmeGGlKvNvYeWn1NirxIl1ZuPt3xP7vyy8lGaKEoHpIUD5tjeKfahSEoFEYOVBAOltSaZ+zSmQ09lnaeVJbcY0EUgEEoFEYC4RGKUnp8XZXOZMhp0IJAKJwCwgQJEYtpHqLHifXiQCKz0C9UltK31iMgGJwCqEACtRE0IpiUAikAgkAonAXCKw+lwGnmEnAolAIpAIJAKJQCKQCCQCiUAikAgkAolAIpAIJALzFYEkzuZrzmS8EoFEIBFIBBKBRCARSAQSgUQgEUgEEoFEIBFIBOYUgSTO5hT+DDwRSAQSgUQgEUgEEoFEIBFIBBKBRCARSAQSgURgviKQxNl8zZmMVyKQCCQCiUAikAgkAolAIpAIJAKJQCKQCCQCicCcIpCHA8wp/Bl4IjB/EVhzzTWbRzziEc0GG2zQOP7929/+dnP66ac3V199de9I3/a2t23udre7TfndcccdV8Li8MEPfnBz73vfu7nRjW5Uwnzta1/b/Pa3v53Sj3Cw2mqrLXOk/S9+8YvmgAMOKK833XTTZvPNN2/WWWed5sorr2zOPffc5oILLohPB1dhP+lJT2pudatblWfnn39+84EPfGDwvr65173u1dz3vvdtvva1rzUnnnhic81rXrPZY489mr/+9a/Nm970ptrprN2Pmw4Bbrjhhs3ixYubddddt7HZ8mWXXdZ85jOf6czPjTfeuNlmm22aG9/4xs3Pfvaz5itf+UrJ/z4R32STTYofN7zhDcvBBeecc05z8cUX9/Fi4Hb77bdvrn3taw9+d9384Ac/aOQP6ZNvXX55dsc73rHZbrvtmvXWW6+UyY997GOlnFznOtdpdtxxx2b99dcvnzqU4ZOf/ORyOI4qf11hKivKzCGHHFLCe8hDHtLc9a53bU477bTm7LPP7vpkmWdd4b3uda9brg6ox5PEcjbbDfXfaZijRJugvoXMtN0YVXbGrW/Dyk7Esb4+5jGPKScGavu+973v1a+mdS/sRz3qUc2xxx7bfPOb35zSjwc96EHN3e9+99IWKBttkQfeX//61y9tm/I4rA/Qzt/nPvcpdeaSSy5pvvSlL5X2te3nqN/3v//9m7vc5S6ljNoQ/owzzujtB/+dMKwOTSXapThYpU++DfN3WPlTjmGtPfnTn/7UXHTRRc3nP//55bzpqsfRb7Ud8+vJT35yc8UVVzRHH310ed233Wj7+fznP78Rhze/+c3tV+X3uO0fx33cPvShD22ue93rdoapHHSVzbZj/bm2+WY3u1nzy1/+svn6178+VtvZ9sfvvm1Pn3zrCs+zUW3Ptttu29zudrcrGF1++eXNF77whabrwJM+ZXhY27P22ms3D3vYw5qb3/zm5cTun//8580Xv/jFxnWmMm797krH97///bHHY+K5kMvDVPkwnfbPmPiWt7xlp9fq0ymnnLLMuz7t/UzzYlhZXSZC+SMRWAEIJHG2AkDOIBKBlQ0BA14KPEU+BBFiMLX33nv3IrB8v9VWW5W/8GvYlaJ11VVXNVtuuWXz6Ec/euDMgPp3v/vd4Pc4N7vtttsyZB0SiOy1116NDj8EKbbZZpsVUuf1r399PC5XaTUACaE8DpM73OEOxd9//etfRZFHTmy00UaN35OQPumgDCFharn97W/fPOABD2je9ra3Nd/4xjcGr571rGc197znPQe/b33rW5e8u/DCC0uZGLwYcSPvKJAh/EAqfvazn20+8YlPxOOxr4gzisko+dGPfjQgzvrkW5efBnkveMELitIQ7+WjPN59992XeS6P73e/+zWHHXZYA6OQYeUv3rev/JbGa13rWoU4QzQjO9WHcYizrvC6nk0Sy9luN3bYYYei/LSxqn8j9YM4m412Y1jZGbe+DSs7dZzre2SctmnRokUzJs7qsBG7UxFnwlRXlTttYk1OaPthQXkOUR6Rch//+Mebz33uc/G4XJ/61KeWdjsemnBBFCGI3vjGN8bjkdd99913GcXtNre5TYNQOfTQQ5eJ20hP/vOSAli3QcO+MYmAOKuxC7d92+5h5c8ElL6zbsMQA8iQV7ziFc0f/vCHCLLpqrODl62bm9zkJiXfpDWIs77tRu2ltl8/YbKnS/q0f33cItt32mmnriDLM2W5LptdDvVne+655wBjfY5+3QQQ4rFvXvZte/rkW1f8Petqe9Zaa63GpKFriLqF2NOfItxD+pbhrrZHnY02IfxV7xHi2tljjjkmHve+jlu/h6WjC59hkVio5WFYetvP+7Z/vt9ll12WKWe1n3/84x+XIc76tPezkRddZbWOX94nAisKgSTOVhTSGU4isBIh8LznPa+QZjrLD37wg42B7eMe97jmete7Xhnov/SlL+2VGjPGZoG7hHJEoTBYD4syg0JCoTjiiCPKte/A1+wsYYnESoyVxMMf/vABaYbkOPPMM8sAlBKDADFYPv7448t3SIAgzVgGnHXWWb3Ju+LRBP71SQflMUgzM9TIyVvc4hZFyUXuPfe5z21e+MIXFny23nrrAWkmz1iXUMTucY97FNLoCU94wlCLu0gmRdvgm8D3vPPOK4NuSgzlEcHFKq+PyKsuKyn5c9Ob3rR49Z3vfKdcZyPfpPka17hGUbbe9a53NWb4zba/5S1vKc9ZjFAilEn4qhfqDCyjnHaVvz5p7uu2K7x99tmneFPXAXk4KSxnu91gQah9aMvqq69eLGE9lzchM203hpWdPvVtWNmJOE7qivhSj5XbcQSGL3rRiwZEQ/ubZz/72QPS7Mtf/nLDMks7wmIEycFiNeqciRGkEfnhD3/YfPrTn26QOkg2ShNFnMXmKNFmUPb+8Y9/FDKAhbBwWFVJF2ICyTWusCRm9dYlrL/UAXU10jAb+dZV/rS1yg+RJoSjMg0vZIi0veY1rxlEs6seD15O6EZZeNrTnjZo+4cFo34rX+O0f33cKiNE3l966aXlvv43FQEs/tpe4wjlkgUw4ozFMJJJ2o488sjayynv+7Y9M823YW2PiRrlBDbGIKyuEK7qhXLFUl6ayUzLsL47SDNjMf3/n//850I+IrNYM+qLo85MCWL
loE/97krHr3/967HHYwu5PFSQjrzt2/7xzDiGaMNjHFMeLP1X97N92vtJ5EXEKa+JwFwgkMTZXKCeYSYC8xgBM8UxCHzlK185IIu+9a1vNQcffPDAOqJrgDssWSxmuqxmWDaZzdRJv+ENbxgs0wzCCsGCwJmOhLWcGdlYirPFFlsUrwwCWNQRyzT322+/siSMqXoQZ0HIiNtMZllLILP8r086zLiTJUuWNEGk+M3snsK2xhprlJl5S6JiaZNBl/wgiC9WUJZPGdAOW6paHC/9F0t9WJqEFQRrAUq6smUpTV/ibNhS17BkMdALS7bZyDdKAmHtpXwQiliUKfERJqFImAn3DnkRy1HDbV3+ygcT+tcVXtezSWE5iXZjGNnCMoZCTKk78MADB4jOtN0YVnb61LeusjOI4ARuEBnPfOYzSx3u4z2iIRSl9ncmStR3csIJJ5Rln+7VaZMpJhgQWf4IgoxoZ1nHhFDytTnI5ZNOOqn51a9+Fa+WuQpv5513Ls8++tGPNieffHK5t9SOJad4CnNYeVjGs//8sHzRUuW23OAGNxi0bQgWcSSzkW9d5Y8VB/n73/9e2ol//vOfDSLSRA5cFi21+qulq87W72f7HmklHxE3o6RP+9fHrTC1HUS/PGxZanEw5J/JAFbpCL3999+/uEK2sVL3zsTPUUcdtRwZMMS78nhYWRvW9sw034a1PbFNhP76wx/+cIkbAu3tb3976btZcr/vfe8rz2dahvX/yEckHeu9IE9MEpnAlEYEWF/irG/97kpH4DDOeGwhl4dRZbZ+17f90w7Je+1U3YbXfsZ9n/Z+EnkR8chrIjAXCCRxNheoZ5iJwDxGgEk+QbTUyyPNQHrGcsxM5+GHH1728KKImPF897vfPUiVPTIsddMRsz6IvacGDpbeGKgz9yb260BEmPGkBMYgElGjQzdLHwSG5YXiaHDFSs1g+/3vf/9gRozVFLIoLC+e+MQnFncUMCQRaStwrIlYISCIiH3NLE0NscSA9VsQNd6JB9Iv9n+T1mFCOXnsYx9brDD4gwx8z3ve01CiajGLzB2MY1mouCKF6j3Yxk0Hvw1aDTZDGY3w4CZ/KXuxX5d8EtdPfepT4axckUMU6amUK7OLLE2IfK+FnxQkOMsb/lk2SBBsQW76bdYbxsrcMKKHBaRyZqAXJN9U+aasKnPiIC1msb/61a8us9zl5S9/eZnNFw/+y3uEQW31wHIkxB5w8JX/9nMbVf6EJ1wKiqUH/FenRhGz/BVnVpEUF2FT+GPpUld43FHMu+qAOLRlOli2/Ri33Xjve987sHSyZyILyBBEuqV58JTvdfsTbpD6sZTYEj5WSLPRbowqO+PWt2FlBxlEFi9eXCxcWSEp2+09YyKNruOUVe6UC9acRBuqTPkbJfYs07aqOzC2t1ItsZRdPtgrrRb1OPYcVNeR7EHAtcuxOv2b3/ym1AvlIyYlav/c3/nOdy5llcJeYyJ8xL2l0PaQRGYg2KRZvh900EFFyQ//kPPaM+1VOy7h5sUvfnEJi/WtPR7JVPk2VV6MKn9BqIhv3d6zhkOcqd/+4DOs31Jn9YksdqVdm67PjT4x0vb/7N0HvG1HVT/wY/vbexfLwy6KBcWCigFFRCEqRkVKfCiIKAoWjNKMgKDSqyQkEAgQEBAEpItBBBGw90oQFXsvKKj/+93v/W7mzdv7nJlzz03ey5v1+dx7ztl79uzZa9as8ps1s8vPTXojZekWOgnvTZ6Q0Tkqs1TX6T/X9pRVHgiOZCNvQwF57cVZ0uWXXz5lyhu/dAbAJ1mWu9I9c/qXX8Lf0G9sIh+E3iJH7L9xyuZlSew63eN6xF6HjIt//dd/ncZ5xt4mGdavMiLX6R72ixywd+5RkmP4HFAr5zaNDeV6xvfcc5Dl+GbqY5NLf8yxkq7J8uA5W3he8qP8Pqf/nKdX0Jx/MJ0o/kXmah07p+9b+6LUZS2ymuaIRfhSbBhdazLN5KXxVWcoG4OymOlS48rEKBswF5+k/vE5OFBz4FgUWR8dvwcHBgfOWA4ERJlzYi35AuoIFJDfMgEsseGUxvjZD4iDxYiVgEPJVMCaAIADlNldxk9dIUsU/JltVLf9YOJkc+w46AKl+93vfpOhNBMLzCrrsBk+AkTZywvgxYB6TobeswgkEecQAXjiIHLa1CfIRJxAyydKkrpeO5o57/rsvaKM/Yws07nuda+7usc97jEBHMoKDC0pUR4p6zswTUbA05/+9GmDXudan0NZgcIcCZYCziV7qg6SXaccMANtyjLMslttrzc6FyjmmZSTwXiHO9xhcojxE4+QgF5AiepAaDq494/zb8kYkgGXAGRTv2X5kOu0JSCwjASy5VieQRkBl77XD8APQQUwSrBpBl55m3M77zsQ7ujRo4vyxymVnRaZdA3Zjny4Z03Jdspx2RxeJABMtcfNkryX4Enup79rx3gbXqYt5Wer3hDwGUvGsWwc2YfaZByTB/wlK3OgmXP6ENEryRbahd5YJzut421JdrTXzDsACOl3fQIknyM6pkVWXUv2gev0gwCg3qexrl9mjqwZJAtHBmhNmbgwKVCTMREd5SUi+i6/y+A+1+Xckb0JkCXKJAUwsQSXlJd1hm/RVcZYMmM8hz0akTGpD/G2nMSZTh7/5zr6XhnLrkPr+q2lL9bJn+VuQDHBpk/glD5Ihp1Nt7VnaRx7buNFJhV7iZTHsyxxzHOUn5v0RsoCNo03k0/GpKB1jtjGFv3n2p6yykfe9D2biJ/aZYzXkzjK1xTZKPfqVAaf1MPmklU+xK51z7p+8zx8kwAN2qMt/siVTHcTJ+t0D5CSDwA0Nb7pT0B59LvtENA6GW7VPcnCnyqs/plsQiVo2jI2IquubRnfc8+hzzJ5QZ+U/ph6a7omy0Mrz2ue+L2k/5zLZIk+snycnmRb+PhPe9rT9kGoUv+36PuevtCOVllV1sSJsVQS8Mz4oBvFGCFZcp4/RC75LDKvjaGsjsj58Tk4sMSBY1MZS2fH8cGBwYEzjgOZLWZAa+K0IWAK4tTKuEECF7OCt7nNbSbQiWGSoTUXfHHC7CmGsszAd1k0wKMYZJlmfgPWgDcce/VasiDIZvQANJwpAAajyQC6hpOPABV+A8lk6pipF4BbQiTo4thyBgTqAY7OO++86bjrOW2uF6SZ6UvgLjiwD9Cd73znKYMsAaJrauJAm4HWZp/aBijAqxDDrg4p9tp8xzveccrqAyyi0ui3Pkfqnvv0HO6nLWU2W8patqr/ZBaaibZEaykgzTVx5sP7HM9njnN2yEXe3KZ+b00SUOIpAuYBhuYIH7VdUCQQDS31m9lF9ep3s4zARHV4Nm0qQQzHLStDb37zm6e+zxJX2VLKA/cuuOCCabkMwJNMklX9vE7+jBH3UoeMqsiD69YRGdd+f1kKasNx43DpfktjoL5PLy/r6/O7R2/kzaH6EAiI7Juof8wSWxI+R0Aez4zfF1544X6RXeiNJdlxk9bxtiQ7ZsQDmsm6Us795sDBHlnVNnro7ne/e/OsuYwD+g7PZHPNUYB0+jSBT8
LLBnVEl0pXqrnwjS8gMpU3Tf1D5bq/tqZKpGTwUgwINhlI18vBePjRM1YwZwaiqPFwFnxgHvLqOZXfOHSnDGZI1o6RDHFQCYaEXXa2SK3icL5K98v3wyxqQcrtkyQLSY5+j+yK0NyJHoc6QMgFjgIz6XROYAZCZXkD5GxgFcWUJbpneetpvaZ/Ep5bYvq78hZU8oX6wOEKosouRn0XA53HBc9FwibTL2zspreD38dh3/hpNydGlAlqk2yPAd+Z1+gUdN9o5a2SsjSheNewHoE9WV9nQck8VHP/rRqxOi5FH/JcMmCbIEuczD+Vje5BHFFxCtm21ayClQHNBuL0YrHsrJ6P7B9q9xYINyoAFnG7ThWrEbB9aLAwY8AMfY5u9xoMslMRyMfL3OAMxBAur4ulc2/L3LXe7SLwtiEFgOFGNPHRJq7pxRXZPWMzbnVQYz36VBvuOOO/ZOOsOXMZ+vrnm3jcxLB7gEphgTjOZEkTkfRmbd9KY37cPR44RyUjjbIjbKKAXlCsDF2edYlA6k8iMRLgAqZWPIDL/UFaf9sNSH/U/kWoxpzpQ6JiqnTAvENPtvTwsz/KIqvJORXAJocVwY4Yx27WlmMtFsZZ6WiZYEjFTfIYki5AQpJ97GkRqm87vJ3vYne8AWDhqaBXjqX+SEoT4EBYAJu+++ex8Z4quttf27f/HKv8gmGV4vihOsz3IM4jx7n76N4qg7X2vdJyorH+PwxTz9ISRSIP2LQ4Km6j5pLenTjqJ7hsBjgDj6s1b3Te2z9pSq0X01MlWjp+hJehYvfFGz5AV9GJDDdWOTD6FMGTNqeCztGNG7AY0AYEDRUD6qQ1/HWXbP1gLKre3UiYPqmj9UI1ORD8dscN9nsuVforAStWfc1+9FgIq81GZIGRIxmPGQw2788zsfn9mS7SrPAxSpS5z+oT4BeOFNPo4xtc8af/JM3ptjIsptnwCcxuOkxVvAXqkL8Md+cuyZ7DWavCxzLfsteQpocOhItG6eGzsqR+TBeF/aISY2LckFbu699949UDrFBhl7j2tN9v7Ws2Z7kL2yjRaNe6I7TfqSMWNEOcmb7TuSnwnQRD2W23S4nz7gfAiUlffcR7EJA7iaQELD5e+uJYJbX2rUOLDROXDEMIaNXqNW/saBxoFt4oAljhwFg12cRRkyVl1jzJWfoI7hbqbekp8sw7K8IRFGAZgMyNlMVJ4Gcl+FQoxWRmBNWs8xBDjnwsRDjAhf90RZpmLmOg6TL1yVxDliDLv/5S9/ub+lPEj0VBxWv4FTvhRknyG8QFk6BmwsNwtO3QBrQEHOGIBv+Lfvvvv2+TCS3RsuIWJ83+QmN+nT+OcdWZojSgwlSo7Dn5k/1/1mzCMON8fFUhn13XPPPfvr/jFqfCkUJaLBEiCEv+XSFsbc0JiyV9CwXn4npB9g53ccnT7jwb8me9uf7HGMEcdqGIGS5tO2iKwlyjP3HvrQh/Z6AAjBMa7t38knH7UIaJTra3nk8CZSB/Af0t8yaUBGQ2ut++geOgjttttueU1/jI6iI9Ivp+o+GaQf0mflRAEQPdHFogNQje6b2mdrdV+NTIUfU/QUsMOX5JA2LqNpjQv0IlkXiVc7ZtTwuC/AyL/0j1133XXVgS3HzdTVo6I81JnM2LPLn3PX3EM1MuVZ9UY+zhBA1W8fjAF8oezNaUIHQGabgxD+iUBD+lNkKYC36DjlC4kCz16ZaZdEPRpro3+kp1/s6wUQyATR1D4LOBsbn1xLnwMq+03P0TNACPUx8eQY2n1lIgAgqS7hV+6xb5QzZJIuMlUug879RceAbeyZbMvgGctO5avswIupNsi89zXZ61Yjbo9K2SvbaNG4lw8/kYXseed5MshOLSkArmsBqZ3TL74gGxoCZ5aDxpaXRnRodEH29MuEQwncSQtgDmBWyq97jRoHNiIHjrWyOWxvpel8Ih4aNQ4s4kCTlUUc2r7vT2k/UWFZypDZ03wRyobUWY5luSEDH9l0NEsgRTcwqjkgHD7Oii89BlRixDL2ADCMcwDU5s2bV2esa9KK/oqBwIj1lygn7yg/Fy/sPcCdspktZrjH0LU0KcvRGCy+HsQgUT5pGR7q5RrD2hfvQvvss0+/HFVas9WeZ1wwbM0IB4xK+vLIOeCE4xNDO6Re6hcSaYB3+KYMzuOkSCNahUGPzP5zskUJSAss4wB4h2g8Xy5D9l8RBbZpJbLILL/7QusDIAAkA8RJ575oM3kiQFi+HNZfGPzDI2PLWATDIGn/s8neYV+ZPaplL22T9hj2pdzP0T5HIniQyELyY1aaQa0P6N9xomv6t/zIWsBkz8aZdG8eAUYA+Pojh3gKmTkXFYcAIeoBJGb8i7Sl29B66b5sUO8dnBB6xxfLoqP233//1ai+Gt3H+QGU4yUdBcgXoRIAglNP5lCt7ouMeHbeeOH+kGbpPulqZKpGT9HhlhLiBR2Jx8DDADo2vQ9AVTNm1PB4yIf81s4AMGUzGUXWN23RzX77wAJghw7ea6+9+nSissgFAupwbPU5Mq9uNTJlksV+lUgenjdhFv/AHmQ28Ufabo899lgtq7Y3npIrfDU2ZY8zeuCZz3zmqj4QtcZ5z1ginfSeQ9LG0ZevsshbP8QHskGO0dQ+2yce+Ue34Lcvc5fRfPiIn0hZ9Rk2TMplj8BMIpKZgAeRKfIUUNqXxAMG9xkW/+bpqUSWKV/0AT4kiscese9617v63LbFBpFBk73tR/a0hzafMu4B+fQH6dm/ZDi2tXz0l3xx11dqAVj6DhtWv9IHS1DLFzZFkOoP2UNNOvafSRfyh8roMrrIOCkdW9e4uWlFb6WvSF/qDr8bNQ5srxyY5ycfZ6UDbFZwzlMMs+21Iq1c2wcHmqxsH+2wbCmmtJ+ZR9EMHDbLlrLc0MDna2GIcwWoMVi7HgPSPQOqjeCBYpxnS7T8OTdDzcBkVBqEDfSWJVFUoZq0ZrLlBSASPcDQlC/DHKBTzoCZ0WZ4MyrMqnmOwQAM2m+//bqDDjooRViNjhFZxkhVX0f1BQAw7Mu8RX4BrRgKADPOA+PkwAMP7PmzmvHIibxF93GIAEwhRgtHJoCDdP6UgTPhS2ZxNDwjmsz7N20xWLSbtHjLKc5+FJwxwJq6AcvwQVsB4iz/yFIceYpY4DRxsPHWn3ppYzICJCmXkHimJHJgbOGYJ6KgvD88b7J3mJOGD0el7KVdAM1kSvslIiH3yiNDW/9jPOsn+gCZIp+cOzoiVNO/PUMPAW8Z5frpVMI/IDoj3r5FU0hf009EtaQe9AlnRN+gK/TB9dJ9PgKgT3JmSh0F6Oec412oRvfhAcfHki86Tz/WL/GUQ8+hCiUycKr8TemzyXt4nKX7pKuRqRo9BaSRt4ghejr6Go9NnpT6rGbMqOHxkA/5bUxRFxHF+lN0M+fXHkT0LTJmGkctLSu/okkv77TTTv09X1A2yVUjU5Yi0/8iRMlf5ITON6m075boaGUALBsz9BUyhY+eYWOI3srSTWk9T07kS+5Eccvb+JRJKHUMHXzwwb3Trv7GMXto6ofeB7gLH6S
f0meT79jR3mbyttdpgD7pjLF4oc9knMRzfUZdDjjggNXsdtlll17fATHpQLzQfuoNNJunf+bpKTwRrakt6aPYLPIV+fe2t71ttQzbYoPIpMneYazcHmRPSaaOe+w7csuuNVaTEf3Kx542rdiCxuBEiZpEovf0UzIqLXkWOUy29Uk60QoGwDH9bBKB3e6e/KWnI8sl18Z3kaNkVBo2I53guueB7PpxQN7DON3+Nw5snxyY5ye3iLPts82261LNQ2K364K3wvUcqGk/xqQBzwBoj5rSqFyWnQZsA7w8OSUl+DTMsyat/ABNnjl0BRjiUMwiRgUQD/CjXuW+RWPPyBsfEMNjHh8Yy9JyPqUtga2xvGuuMXIAhIccckgfzTfvWYCB9KJkApiNpWcY4YW2UOZ5hL+AAzPv601N9g7j8PYiezXtrV+RUzPPnGeG9hjV9O+x59f7WvoQB7p01Jd9b0196Sg81IfpKIDBLKrRffLgBAHmjAX68qz2kbZG/tajzyoDmipT0tboqehrsjqvjWvHjBoeK/MYcVSBMMYnMritVCNT3kX29AHgUQmCDcuh3Y0hxhKRqfPSehZwZukzUEjEsjFlFsmbrHK6jevAyXm01n027wIG2KPPxzC0xaIym/DSZ9ei3VIGgAa+KYN+O2vpfGR6W2yQJntdD6ZuD7KX9p9yFE0LtGIjzpIPegAoJ3qSjTpP75XvBMKRQbbirLzpADwDqi8aW8q823njwPbEgXl+cgPOtqeW2iBlmSdQG6QKx+hitvY7Rjd/q3zjQONA40DjQONA40DjQONA40DjQONA48CAA/P85GMP0rafjQONA40DjQONA40DjQONA40DjQONA40DjQONA40DjQONA40DKxxowFkTg8aBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoERDjTgbIQp7VLjQONA40DjQONA40DjQONA40DjQONA40DjQONA40DjQONAA86aDDQONA40DjQONA40DjQONA40DjQONA40DjQONA40DjQONA6McKABZyNMaZcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGnDWZKBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHFghAMNOBthSrvUONA40DjQONA40DjQONA40DjQONA40DjQONA40DjQONA40ICzJgONA40DjQONA40DjQONA40DjQONA40DjQONA40DjQONA40DIxw4bnntF7/4RfmznTcOzORAk5WZrNkQN1r7bYhmaoVsHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBw4ijmwFXB2nvOc5yguTnv9RuDAd77zna7JykZoqfEytvYb50u72jjQONA40DjQONA40DjQONA40DjQONA4cMzkAD95FrWlmrM40643DjQONA40DjQONA40DjQONA40DjQONA40DjQONA40DhyjOdCAs2N087fKNw40DjQONA40DjQONA40DjQONA40DjQONA40DjQONA7M4kADzmZxpl1vHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHDhGc6ABZ8fo5m+VbxxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaByYxYEGnM3iTLveONA40DjQONA40DjQONA40DjQONA40DjQONA40DjQOHCM5kADzo7Rzd8q3zjQONA40DjQONA40DjQONA40DjQONA40DjQONA40DgwiwPHnXWjXW8caBw4ZnPgNKc5Tfeb3/xmu2fCHe5wh+4UpzhFt//++3e//e1vu2te85rdBS94we4jH/lI94UvfGG7KP+wjNtFoda5EMN20Cauff/73+/e9ra3rfPbx7O/xjWu0Z3oRCcavfntb3+7+8Y3vrF67zjHOU5f3nOc4xzdSU5yku7Xv/5197nPfa772te+tpqmPNFfrnCFK3TnOte5uv/85z/d9773ve4973lP99///rdPdrKTnay7+tWvXj4yeu4dP/nJT1bvye/yl798d4YznKHnHbme1S9r0nrBpS996e7CF75wl75+8MEHdz/4wQ9W312eLErr3Re5yEXKR0bP3/Wud3X/+te/+nu1PC4zPO95z9td97rX7X74wx92b37zm8tb23x+whOesFO2v/zlL9uc10bK4CpXuUrfhic4wQk6n2P/xCc+MVPWptRrhx126M5ylrOMJqWrP/axj63eq0nroVpZX31Rxcl1rnOd7mIXu1j3nOc8p/v73/8+90kyc+c737n785//3L3yla88Qtpa3SP9uc997u5///tf9+lPf7ofy6JLhplf9rKX7fvxqU996u4Pf/hDR5d96EMfGibrf5/vfOfrtLMx88c//nH32c9+ttdVY4nlp4+d/vSn737/+993X/nKV/r025p27PnhteMd73jdta51re485zlPX1Y6+N///nffJ3/0ox91733ve7tf/OIXw8cm/V4rOV9Gp1/gAhforn3ta/f6XH3ors9//vOTyp1ExhFtfspTnrL729/+1j3rWc/q/vjHP+b2mh69gwyc8Yxn7N/x1a9+tR8HlX1I2ux617teZ8x0/+tf/3qvQxb1HbbBDW5wg+7tb397d8ghhwyzXf19l7vcpTvWsY7VveQlL1m9VnuCb+oyj/70pz91H/zgB+cl6e/V9I9FmdE1xz/+8ecmMzZ/6Utf6tPgw61udavOOEj3/PKXv+zlYG4Gg5unOtWpulvc4hbdWc961p6vX/ziF7s3vOENg1Tzfx73uMftlH3Tpk29rmJD0Vd4GFqmn6zneLCW7ZY6tuMxhwMNODvmtHWraePAZA4Aei5xiUt0D3rQg/rBcPKDR0HCi170op3B22DIGbv4xS/eGwI///nPtxvgbFjGo4BNR/orh+1wznOes3d2T37ykx8lwBmj/vrXv/5MPjAeA5yd7Wxn6+5///v3cpUHzn72s3eMuUMPPfQIBurlLne53ohlzIYYtDvuuGP32Mc+tnf2AAg77bRTbs88/vOf/1wFzm5zm9v0DlISc0jkyTF+/vOfn8v9sSYtQ1v9znSmM63moX6XvOQlO+BZCURNTYsH/hYRsATwV8vjYb54ATzh2JXlHaar/X21q12t23XXXbsDDzyw+/jHP177+IZN/9CHPnQrkIsjtPPOO3cvfvGLV/tFbeVuetOb9qDz2HN//etftwLOatLWyPrYu6dcAxRzCJE+sMj5v+c977kKGAzzr9E99MR973vf/p3J5/znP39329vetnvmM5/ZAY1C8n3EIx7RcYBD+hUAG+j0lKc8ZStA5YY3vOFW4L0+f8UrXrF7//vf34MWycOR/rrXve7VO9R+6290OtAJUAPQC9WkzTOzjsc+9rG729/+9v27nIeAhoAFABog71KXulT3s5/9rHve855XBXCvpZzX6nQ2yt3udreurFfJx9R13hHwox1DJoJKkCLX1+JoIgiwUo5rbJlddtmll61yYkEfMdaV4A+Q1mTZk5/85K3ksCybSRuAmIkKY/As4IxNSq6Nj9tCQEvvnEdAv0
XA2VrKvLIoV8nnsfKZJApwdqc73anTFiGyVUvse6BWqDzPtXlHk3n6E/s7hC/AU/3SJCmq7SeeWa/xYK3bTVkbHbM4cLi0H7Pq3WrbONA4MIcDHOhGjQNHJw4wmBAHjAE6pBjsjNd73/vevTHISDcLK/rmSle6Uu88AhQYdYAVdNrTnra79a1v3Z+LFPvABz7QiSrg2Inc4VA//elP7yPWYkj2iYt/Zz7zmfu0nKhvfetb/R0gFCcJmWkWvcbg5zSpi+Nb3vKW6rQe4LwBzfACUCZyA0jCIQXMiXqLgz41LTDP82OEZ/iKnyIjank8lud6XRPpw4k7JpE259yQh3e84x199AKQmWN097vfvdtzzz27f/zjH9UsOfGJT9w/A2we0jBaaGramn4xfOfU38DTG93oRlOTd1e96lV73TDrgam6x/P0BRBC1KroTFFhAHcgBEDtMY95zCpQpG2AZvSG/vfJT36yB79FkQKY6LEnPelJfbFE2WhnJCJElBNgBBhG5ukYEWUIsMMx10+9/53vfGcHZOPcA9CAeCK8a9P2D8z5R1/usccevZ5TJ3JD76lbIpzIJDCGjUKHAWvoVyDaIlprOReFXKPTAR14q274Z3Jv2A8W1eEyl7lMnwRote+++/ayIL+1pkQjkQETksY14Azg1OSXiZcnPvGJq68VbQk0A4gbG4EpN7nJTTr9+oEPfGAvt6uJt5yY+CDD8/QtfpG3tbJJv/zlL3ebVsajIXkP2UbaZR7V9I95+ZT3vvnNb/Y2QHnNOTCLjYFiGziPTtFHRImJPKwhOiZA2UEHHdR95jOfqQZg73e/+/XtbGwQCS/alR1hMouuevCDH9zLem0/UY/1GA/Wo91qeN7SHj040ICzo0c7tlo0DjQONA40DszhADALcVRETMwi0R2MSvTCF75w1THibHIAGNcizwKcmZFHjEYOHCeGUwpEu/GNb9wDEgw2xuOzn/3sPm35jxOyefPm/hJH2RJPJOoJyUekSch9s7yW63z4wx/ufve731WlZdSK/kOvfvWrV5defepTn+ojA9QdcMCxq0lryZe/IXF4vA9fRMlZpqktang8zLP9XjsOiFqKrL3pTW/qHSC5W5L1hCc8oXdgRF4FpJ36ZpFPnG4A0Lz+Jr+atCnrlH4xtaxJR97vcY97LFzKlfSOwAT9fB5N1T2A8pOe9KR9VpaHZtm0SNjonjve8Y59NAfeWsqJREa+/vWv78/pHgCX5XVAJu2rz2WZG+dbv0fyBdTReaJEApzd7GY365e0c8bpNGRiQWTTzW9+8x7E+H//7//1fbombZ/RnH+i5ywhBb6o/xjQeIZKAABAAElEQVSAQX/TTW984xt7YDAReo9+9KNXwbWxV6yHnNfq9EQFAaJMTixDATu0FUBxvQiYSsaA6cDXLK/HfxGfpzvd6XrQBKBJvgPkSJsIOEs1Ab1AOH08kzGAst12260HbeeVX54AuYwV89JOvTdLjymPsR0I9NznPndudmsp83nRi170opxudcQ/RBdYyhoiz8hEB3C7lrQfMi6/9a1vrX28j/YGzqOXvexlq6Ae/fP4xz++B4gtcxchV9tP1ms8WI92q2Zce2DDc6ABZxu+CVsFGgfWjgP2EymdgIc85CHdr371q27flZlNxICxBMSeFKJfLF9hvDHas5SFocNQYgjc8pa37J1mBpgon9e85jW9UeV6nGdGJEM+RlX/opV/ZmfNtHuP93LAGG0HHHBA9SytPBkK3suZMDMK6DD7WBojefeUel7oQhfqlyyIohkDRJLXvCNeMi7wiyEkr+9+97s9PxLhkfcAJcz6+80Qefe73907OsoKRJEPwxw/7f+Cv5Y+vPzlL99qnyKgiD2zpGUMc060H95OIfwTjSG6iBHP8LL/DQMOP2tJlAHwiRNoltEeQZGVyFSZp/KLkhClFZl47Wtfu7D8eIfijJZ5luccMflyGofRBJYZMq6zTxq5jsPwghe8oOdF8gJqcRBEWHE+ZpHlUIA1ERPaLZQZ16GhDzAguxxM0TfkoCatvWSUO/sb5X2ODGCRKSLsUE3a/oHBP7IpWgCJbAvva3g8yHL0J/m/8pWv3IMOZFHkTZbaqKslKXhMz+BfSfqCKAr9jgzGGQAWiTLcZ599Vttvat+x/E1aQIy2B24CXvFgEXlOWeeR/s9x3rQSOREQdF567y6XVZVp6RPvU85yz7EAwPhq2SI5JA/0Cj1lKY5+EgI40QciJzhi0iHtsYhq0tbIuvfSLyJE9Vt9Rl3pGPpUJFVJ6pf9jz760Y/27U9+5tF97nOfvj/pM8bPMZqqewKEWcqcvpL8RIUAzbQ5oofxn44RlVWSpZeAM0RPGluzPK3UMe7TH4Az0VvykqeINTTcIxRAJ4pIOpMHwJ+atH2mM/6JcNQ+2sZEAkBG/wHgGNv85pSbXCBr5BOo98hHPrKvG0CPbTCL1kvOx943ptNFaGkzpJ4mP9hOWWZuHMED0UVklAy8733vW12aRy4tYQ3gwT4CMIgEwxc0VT9Jix/0FHkSHaaf2ns0e3ga+4zx5DCgmecAIYAz/QLI67ks0WcrBjST1tjnmnzYOsYXpL8bw5H8tTOeDMnYEduPTqeP59EiHs561nPkGVmaHttrVvqpMo9vQGw0HHtMAOh3+DULNGOL4wvZz9YMokTpZH0QsZ3oY/u+aQuyoy+ok0k5el9Umr4hXZ7x7hBZlI4Ngzwnip7uwH/2hsj7Ul+qG1khB2UknHzYbWykWRHoee9YP3FvvcaDqe1WgtqL+knqsqzs5fl23DgcaMDZxmmrVtLGgXXngIGOUxtyDtRBjKuHP/zhqwYOwxUAZUka4+txj3tcn86SEs7Kox71qK1Cz4FtQrcNquWAymA3m84ANtOMbne72/XOWv+j+Oc9lg2JhOA8TiWGnY1U4wQpuzIwyjmF8otxOLWeeONPXsuQGfLMQOd5/GRQGoTNviPgl/cwnGMs+c0R5DzjaYxpZcFP+4E4V1+GVzaTT9SCfN3XTpw9PGVYcbDnEV4xZks+OucQWNojYmXWptRj+XLm1DP1UiY88Wc/E4BkCahyHGNwp36We1jes9dee62Ct2PvCo8ABA94wAN6uWYQMv5Kx5Mj4m+MyDYKCKH86s8gtTkvw0z51Ee+nN15BBjWrupSbnhMBsNj+Q4p9zjkNWnlE+eNA8RhsfRn00q/AihxTsoohpq0wzL6bZmn8uFXCQDW8Hgs3/IaWcmeP/iozH7TNwEz9QHOADAszlvyADC4r976VUh6f9qSzEztO4n2ST6OnAhgM2eFYzaPyomLWemA4/q+d6nnIgKKZynyMG1kegzgJcMBJD3HeSKz5E+Exite8Yo+O440Zwf/TY4g/RLJ13Is7ULHcuJEC5XO6dS0tbJu7DKulPpF2ckMR57cx1lUVnLKkVcHQPYiRx0gJy8OKwAKiDZGU3UPOUTAoyGlDuGBCQ8TW2NUOsX6tXqqt/YJKJ7nACSuu29MEdGaqCbtX5J0dKY6A/k4mDVpy7zKc2C1JV7yB8gCCuhFeh0QEDJe06/4K
Y30QAe6X2TrPOBsveQ8Zctxlk5PG0inDema1A2I4rmQehn3jXfGZKALXV3qJzzzBzwDnE3VT95hsoA9VBJ+3/Wud+35KRJxLIKYjADvkP5L7lHKNbYFgv7O1pJ/yHjpWfaCcYgtNEZkjTxaimism9cfp/Bw7B36lcleREcm4nssba5NlXn1Yy+xs+jA1DOTwvIz4ThG7DhLwJH6B/QyURJ+uxfdQhaMU2y5TC6QIzrQH9vkaU97Wg9k0g8B0rWp/DIJAijCj+gbeWg/+tK+xyIKXWOXlTaLsiBtlIlF9sQsmtVPpF+v8WBquwU4m9JPlHdZ2fNso43HgQacbbw2ayVuHFg3DphR9pelNYwxgySynwHD3gCeJWwGUiCNQRioUi7VYiyY+Rb5IaKD0xpnlNMn0stzjDUGJCMh0TPAOGTGO04W8Mtst4He5rRjXy7rHxr8Y3CKNPMcB1IYPsdEfq67z7hJVN3Uev70pz/t8xuLihoU4Qg/zW5yPpBoL8YxY4djgFfOOQiMyhBDhmMnEmLTiiOEt5mBZvQABBh+QDdGGuO8JHlzjLQng05ElDbCf+AZXnCMY6CVz+acgYCPQMuXvvSlff0BXIxwhhtjqAY4MwuvXoxwMnXoipHNETATyfhSthibAA48KcsvregD79aGQ1Ak5XaMMWmWPUSeGeRmce3XMq/uZl8jl+QXMW4RGQAql4CwtGT+Gc94xkxAz/IohO/AkBCnMNEkDFl9MsT4048QHtWk9UyMR5ENe++9d9+erjOs7R2kv9mvBNWk7R8o/uFXDODXve51xZ3Zp2M8np368Dv6s+WsAGK6QT04yhwUbeXrkGMgEx6QATIlEkf0hwkA9Vbm8L2m78QB1qfxUt5AMzIGXCKHmSA4vAaHnwFu9LF5FAdVdBfZX0SlbA3TZmng2B45cYxTHs6S/r3jjjv2fVH7AvuzFx/9HaA7fWMYEQdA0zfoYXVFU9PWyjoQkn7xHMASaASQsGSHLqQr1U0bIe01lehgoKJngT3pk2PPT9U9wDAyEme4zIv8hLRZGdmT644il008IXpa3QPIcazHyHV8MiYBDgLoiJQcUsa71Lcm7TCv/BYppR0AdXiAX2wPeQP+jA1kBFgb3mQySESTcgIH9NtZfFkvOU8dcpyl09WH/cLOUScgBgIoiZhFxvtXvepVPQBCj9E7+hYAAphlwsc4ow+ZfMjXqWv0k/5orEcAAqAcHWLii0wbR016lsROY1PgvXYiL54LiUpCgK4hBQQOmOK+SYrNK1GFi6jcomBe2qk8HAPF2BXKrx/j/RSaKvMmCthJ9hrUX4wDxhk2M6L/RBWOkT6M13haAlD47k8ktD7rwwv6DGJP6TvsGICy+hrjyA35t1wb8GWMYzuZdGVnOCL1Ujb56l8mLk18GEd333333r6hU8f0pEkD8hO70zgxqy9616x+4t56jQdT200ZpvaTbZE972m08Thw+FTOxit7K3HjQOPAkcgBAy0CMGUJG4ffoG7wBZyVxEkBjgF7OFQBJQzyAI5EQQCgUGYkDb4Ga84eQ8bg61nOfBxGYNdUAkQZMDkQT33qU1dn8+3pZNkHAsiEptaTkc8ImWrcJX9HAAGDUh6WRzGwGEj2zQqfGDxDYiwpM0eCA8OgQaJ5ElHCKM3GzeXzwASEh0AzBLDKkivGUtL0Nwf/GJfK6b1mGrUjYxMwEbAsBvTg0dGfjKM4X6I+gGaI452lr+4DjlA2RbaJbcovrboy5AFgDM0x4iTnHrmytNMeOlniQp7M7s8ijoPlfvIgz9lPKECZcjonywxhe7sg1xirYwS4jZE5tsdIIs0Y2+kb8ikjBYAWqCZt2sj78U0/AG6rl/px7vK+mrR9QYp/+YIpGSsB4CLJVqezeLxVopEfZDCgmdtm8APIxDgX+ScdfqfPSJvlbGR5HpiVfjGl76RNLIXVV9SfvOEBAC9703j/GKkLsHXenwgEdPDBB89NlzziWI29D3iOAoiUaUpHOOUGLga0AGxzuJH6lpGbAV3pM0t8TMYYB4wV2kE0YvpkTdoaWVc3gCCAgV4kA3RGHGTvnwI8ljxxTlcC7D1P3uZFP9foniyN5PzmgyPeR1cGhPZ7VpmVS5SW59XVOIH8RsbAMcp14IY8QgFO89sx/YSc16Qt8xieZ0Ii4zGZ0nbGSACTo/Yz4RPKeOF3+A8UnUXrJefl+xbp9DJtzvPVSv1EBKf+oe3YTum3dPI8qtFPJo7IrXFwv/3269/H3gK0kAPtSmZLAjAby9JfpaPXQhnHjCFDCnhSysowzbb+3hYeBpC2VH9MBw7LVtZjUf/wLLk1xiIAMZAKv7R3Ge3aJ9jyT/9mI6JsObDl1swDcDyTFCZVAxLqG4lyBrhmbB/LSPnST8gDuUBsPBO2qJx47C+s/MMTk6+RA/IbwDRpyuOifrIe40Ftu03tJ9sieyVP2vnG4cDWIQkbp9ytpI0DjQNHIgeASYwmRl0AmryeU+hvSBm4c52RwfiKU5vrMazimMlfJAxiDFgSAhTxlxD1DNDJY94xy1YYMMpfEpCDc61uDAplqK1nmd/Uc0axP+S9nCJGP4M1fIgDnjwZWmXkCFAjaQIkJS1AU11jLKhTDCJGTQm+eIYhLG1Aw+RTHhmVWY4rL4CWNvFMDLa8r3xu1nn2FGFsD2WCw8BpT/uTiURNDA1JIBVQax7hW6IZOf4AI0RG1YuRlH01hvloE1+IInP4BHDDQxSeOgeClksXEl3AoByLhLA0CVkSNhbZYW86y1i18cMe9rDeEPU+v71fm5IJVJM2bSQPS5QjU8AFTqr7InIAqjVp+4Js+QeoTr8DFi2ieTxe9Cw5CZCTtKK95CkKBXHygF6uiQhLxKBoSTSUqf7iln+1fQeww0HR/nQLGTO5YMIhclPmf1SfA8NnUXSR+2Q/JFpMJGiiSMgheSmJPNFtwLTIN7DDX6JE8V9b1KStkfUAR+SRjtX+9JVjSJ+a5+QlXXkEhtO/+i4ndR7V6B560H5BoiU5l5Yfkt1ES+U9Y+UFjtET9CQ5s3l/+kVADrI8RrmujaMbpUv/L5/J2Gt8qUlb5jE8N3Ehv2xyblxB5KIkwKIoa+Wlb0OZgAmol+vlcb3kvHzHIp1eps15Jsi00XBcToRMotPzTHms1U8Z49kIJRkHZy39tXeptvGs6Dh2nD5sAlT/jV0VOSrzjbyUslLeX4vzZXlIJwSELrcSmFemsh6L+kfysYTYknpyaoIIAabSL5MuRxM6eImvmZTMvVnHEljHj6EsxWaQLoDsMK9NK5PWyHsjy/2FlX/hk/Y0LpT9SXp2kXuWdLLZRUyKUCwjE5NX8p5l+6zHeFDbblP7ybKyF16048bjQAPONl6btRI3DhzpHDDjiKbMyKVwBsWS4jQOjf5yQEt6yxOEhMcxy/Vljpltj/M2zMOgb8AHXMUAr6nnML+pv4ETZjvHjK+xPIZLqbKRtPKPEWc2/OPIhxhHMZByLccADfk9PAIuRWbFWBjer/mdqMFhvZIHQJUTyEhntIVPUz9ikHwc5QU4GyMAZmYX
1c8ympBNqdWXEcvIBSwlukGaRL8456iWxKHOHiXAs0STSKPu6oUSZdH/KP5ZgiRKR1QPxxn4pq197ZC8AvrCu5q0ZFt+IkIDmnmtfgjkSXSBazVppQ+J8tRe+vys+iXtIh4n3azjUJ9Il3YJsOya9rB3C9lXNvUEmuCpKMZZVNt3REuKKuAgAVcAzP7w1zKdWXKY90+JYNXelhaTTfxbRACkcgPnMn30YvRkeS99VDtGf7tvEoTjHfAb/4Y68+CVaLgxsuRMX8J7ThzHuyZtjaxrf5FtdOWYUz9WvkXXRCxmXzk8IOsocgLscA2gC3Co1T3aSrSZCFvl9ycP+gVYgYaRPYAVoBmeaicRdaW+ic6MDu0zKf7RJ0hf8rw+Ia329+6SMqYAqWrSlnkMz70r/Vi/UWd5J1In6clo2jFRve4Zj6QfTsDkOcf1kvO8Y4pOT9rymH5Hv9tTaYwS+Tt2L3LnHt3mb4wytueoH00lbe0PeGYi5ClPeUova5Yq67/uKX/spzLfXCuBlvL+Wpwvy8NEHOurWWK+qDzLyrzJZWM5Mj4NJ5bL92b1Bv3hfVMok8rSZun82HNZBjl2L3ad/jhLFj0nGm7Y11If44xlm2yeTEyV75rSTw6uGDumjge17Ta1nywreyVP2vnG4kADzjZWe7XSNg4cJRyIoV5GIJQFYcwOB/gyQqFMu+gcIGCzaQT8sfzNoMzh4pCUS60W5eV+wIUY/MNn4kyoY85r6jnMb8pv0SiMTsRhYJBa/mqPOPtRMEKHNATI4ghkVnqYvqxD6fyYWU2k0vCZRCgMr/vNeLdPCydLW5u1VGYADEdnOMM5lkd5LcsAynKW99Ne2q90yoFoZX08MyZ/ZV7uA+HwqgSLpCnzjgPpOuOTA+tZhrVlvukH7qM4pM6Hs8fqF4BgGDESg927532JlHFqc3MOMUMuM8WuoSxzdj41rTooz1j/FMUBUEqb1KRVhlAAQ7Ix5EvSOE7hcZl+7Dx9tryX/lM6axw8ck9WAdZZnm2fsKHuKvMqZW1K3+FEiswUbaWPA820Hdmz5x3gtdwLsnzX1POUN8dFz81LFyA4/a3MK9ERQ92jPva3DOEnYDI6yXX9VDtEZpPWUbuQ6bRdTVrPT5V1AGaABf3ec5ZkAZVEWy5Dia71bPpxmY929tEAekvfrtU92ir7GCk73UNPZPJKvy3bk5yJFKK7tJPIv2ylkHIFSMZv5Ssnq3JN2kx2ZdKFIz0EFCInGStq0qY85TE6NzKWL9+VfTfpAyjgbe6zB9RhqJvzTI7rJefJP7KwSKcnfY7qjQd07zCaP2lS1/wuj7X6ifzoe/rckMpx1H2RmWyTjNVJr/8AVjZtAencJ6uJCk86x1wLMFreW6vzZXioHwB30KKP+AzLuYzMZ185eZk0G+4JnHewfwOUTo2C82zkHy/G9iBL/gG48rs8ZqymK2d9IEl6YJVx1GQzXpCRkkQZ4y1QiRylbNJM6SfrNR7UtNvUfrKM7JW8aucbjwMNONt4bdZK3DhwpHOAw4E41MOBkHMoeorDmC9BbksBd9xxx/5xBuHmzZt7ZyD5cUgQo2cqGeTNxmX5R/kcw5CxiMqw8fWuZxyA4fI+5cBfFKey/zHyT3k5UMrPeQAMhNQ3DolrDBfOEr45t5StJPtRccCGyzfKNPahkad3Pvaxj93KSU67lOkXncch41wzwhg1IXVPtAunj3Hiz3XRHmX5pbO0V7nMhJdgVvKz0a7yM4Z8er10PMslmimTCDR7VyBL/EQRef+QgAJpAw5fCYIpq3qh8rrfcThmRQFJA7QTKWN/KHuxBYDAr8wMZ4+2mrTy4eRkDxXvCuVa3lWTNnk4ZrZ23jLNqTwu8x07H1vGBKxCQ2dNHyEHlsFl5n1RRFxN31HvG9zgBr3DYClO9iLjQJBRDiuwcB5wBpyeSpZ/bivFkRIZMtTtiToI6JB3WbpMl3CY6R8y6Rq9gOgj0X1IlGYZxalPRMcZV2rSym+qrHM+A5oBokqdUS7VDEgs7yl06MpS04yHZXqAEoeYPqAzEtFTo3ssDzJpxIEVbVrqMhvAo+TrnM6mz/4/e3cBbstRpQ24YWAGd4eBi7u7Bxic4K7BGdxdkhBIgOAeLEGDBncJOoPL4AwQYHC3gQF++M/bN99O3b699+7a55zck6TW85zTvburq6tWrVq11lerquka4+/++++/08SAdPKL/gRy2lcz5DdyP+0solB99JMycg3gngiijDc1afPO8hggMIBcQL/ISZxucpY9C8MXenCPtU3L0SKwwP3NkHP5hqbo9KQtj4DRRB+X7SINcFr7kLl5VKOf5KG99L8SAE7ewGTtYN8++plu1f7DDzHRYyiTTsZNwG7A3eTnGHAKYL1ZtAoP9bXYfvkYztTy1cq8Pq3f0A3kW6QtPWYyeDhGBTimW71nKkUn0cvsuPQbz6unfQP1NbppHrEplU37DmXRJIrJUaCaj0rc7GY366Ni8T5bqyTf9GX1zcR17i3rJ5s1Hnh/TbtN7SeryF540Y5HTQ5M9z6PmvVrpW4caBxYBwfilHKgGd8GYF+2KsnX6tAwkqdMU3Oe6BxATwlWMOTiCHEUplK+jGfgl0eIwxSjm5FhoKypp5lUAFgZeZG8lx0DqJRgkWcYNwEFlzl0McI8xzALWMEo9gWsISU6CcgVHksDZOCUCasv+T18PkCW66UxxHkLX2OIDp8d+22pjfepb9oh6Tib2lgd034x+HzZM/yTXn28l7zEoUo+OQYQBPxFXt3jXPuKGMIfeQA5AgRyWGywPo8vIgFEVSFtkJliv/NVLG3MuCopvFzkEJFXjqGPA4TUM20LjEib1qQ1i42v5Lfkhb6eTboT+VCTNmUkW5HhecBgDY+T77wjIz8fIpCGPGYJ4XCpFxAScZrIgn4fsLS/sfaPDKC0kfPweUrf8W5grI2WQ0ANTguqcYby/GYeRQukTGQ4hI9xesvlpaJlyQoZsun0AQcc0J+75h4iP+6jAND9j7V/liuRY/zwwYSatPKYKuvAvFAcfL/p1XzQIL+TbsoRCG457fDv4IMP7h+nK9zLB1pqdA89AdDdthbJY3+zEHmy7xme5sMk7uElPUkfAy0XjcEphz6fscUxOqD8gEf6iXeSg1B0JXlJv6lJm3yGRzqSTqc75BvZMSFHVpTTRw8C2NDzHHyRdurvGc78ItoMOS/fF32xSKeX6XOedjEOq1PIOG4/N3tFBljLveGxRj8FCPW+sm3pfnoZv4Gi9oRFInNjB/ptciU2T/S7CCNt5nlgawjo5xqdumwvwDyzynEVHmarC3ZtdP7Ud9fIPDAxSycBkj4I4J3k1mTDkLLkMiD28P683+ROP9J++XBJ0gK8AFLacmiLJI1jvt5J9uxRVpIvnasLXUSfpr/FDk5aNlB0vgjkIW+X9ZPNGg+Ur6bdpvaTVWQvvGrHoyYHWsTZUbPdWqkbBzaVA4x/A7sIMksy7Kfji48cK4YcI911UVzSMZrGPhCwSiEZbULADch7rkWcMYoZaxymkIF9KjHuzDYzSg3oHFq
zeaIOGOvq+tznPneW3dR6MoYYF+qewXOWyZIToIf3M0AZrwwMvCwBrdJYnZedfW98nt4Mn826GWScC8bTkGzkK437e+21Vz/zyLkMXxleMRaGz/rtnq93ydvzZpC1kbbJ+8gCQCIgwVg+uaasDBkROoz4ffbZpwco8CNRDSJ2gA7owLXoGvKAR9IC0rybYY4WRQ6RKaCovC2XEyXp97Y1B1V5yUC+tEfG1QNpo7E9pxiD+YQ7h9nSSQZjysU4TPtpo5LwKg5gnJPyfs7xhmGNFyLpyAynWlvjbymzNWk51yIdAaWWTXB0XNM/AF4M9jg5NWlT7oAteDqcTU+aWh7nuXlHwK++JEpVm+Kxd9tTrCQ6S18LyDwW+UXeyDVZJycipmr6jll77WSPRpsg4yE5pWvoikPn7N9SlvPIPrfPH6BXtAMdARjBR/JAd0a/6Q/Z00tEQiLJAJSi09zjUIl+8MESutYzljnrr/gS3e0LidETNWmnyrrIrAAyd7jDHfoy6bf0bAm8AyUCAm0G32t0j3EJTznPJqhstI3oLTKNT4mcMmkT3U0n+MLzGHHU6RkRWUBd+iR6Ci/oIro4QJ88lFlZpAVYGRui08hwvtJXm3asfK6pk0hiY4GN1Okf/Q94yOGnj9Xfux3prAA0IpJLXTjvHa5vhpzLV5mm6HRph0SeTTwZ1+573/v27U9OyYD+N2yb4fN+1+gnvNUvjf32xdMvnWe80haASfJCp6rXYx7zmH5fLoAJmVFfYG1sPmUEKANmRFJFR7DjEL6XS0r7ixv4bxUeqgfKpEFNcab2D4Bvua9ZxlURfPZeNBHMHiknJhJNPlz+uKx8ZEabAclMDNEH9BrdkUkENpJ+PY+kZxsYB/Q/YCr+aMeMX+n7ouWk9y5jhwlNciOSkU2lr774xS/e4VVT+gm7YTPGAwWZ2m7STu0nq8ie/BsddTnQIs6Oum3XSt44sGkcyP4GDFaOBQPOUhfGNWeHQW2AdJ/R5EtnQ+fDADhGBtSS8jtHs5eJeGGsM5w5CICOfLmNIRAHLHkxHFCOyc81X4Hj3LnGWDFzxqgx+wYYyYyttFPrmfd4ZirlGVEanHiGBGcScKQ+jM84+wEgUo8cy3fJA4jJ2cJv4I32YIjlXQnZBxgAYBhC2k37MdKUQf3LL0KW78i5NGZMlYMxrcyMewY0Y4qTirL0J+9PufM7R2nJ2ete97pepjgOIhzIFvACaFYuF9BWQAyGHyMOfzKbjWeRWfmOkaVP9rNC3uV5Bp58LVEhX4ghGMKbsT/9IaQNAMvKJT8AFCdEOwBhy0gOzzBkEb4sMo45MHigXfFbvhxkvCE/ARTlVZNWegatr3Upg7LihXoCfYAcabPatNLHIYncuTakWh4Pn/c7ZSSXnDKyTKbVQ38ADoxRIhg9n75WpuN8kFH5kC+RCTV9h5wB66Mnt60BUOSV0wnEKHVN+d5deQ5AFMmk3uShBFFL54ezhy9kPctQlZsDrb3dkwYBxuwdJM/0VzqOnqBryr5dk7ZG1r1HP6Tv1ImcO6fHDlsDg1AAmP7HnH+lzpqTZOHlqbpHJsYqegEvjQ3+EIeWbIbo35KkH/ujk5D2ET1LDukTY6Aj/tDfwzrut99+vSNM10WnSeOLd/RESTVpy+dynn7ogwjGZ+1DPsiK8iv7K17xin5cS7+n//RlulcUDMDm8pe/fLIcPW6GnHvRVJ2esueYQtoTMZHLxlT6Ed+B/9qmtKXybI7yqNFP0hvvyD95AWwHNBMFHhCSXFj6C5iUTrtI69yz9GuAb3m+7GUvm028AVoCmgErlu0hlrrkKL8hDeVzeL+Gh55NFN9Qlof5zvs9ReZFtgZwyhd+5ce+zcctRHxmIse92LX03BQqeSZiDCin37AF9XG2sjQmPn1Ffkjl8+7RmSZ3XM9YoA7aWgR6OX7RVbHV2W3ep78aH+iaMq28p/aTzRoPlGFKu0mHpvQT6WplzzONjrocONbaLGHvxUKKGRWNGgeWccAA32RlGZe27v2p7ceoBlgBFoZOsAGVg8xBLfdc2chac1oZagZwZS6NtPW8h0FnkAeilAbpWJ6bXU8GEz6KSuEsDY2YsTKV10TEAA18abGsC6OJgYAe/vCHz/YiybMMIYCA95lxLkGYpJl3ZBgx7Blkos6GsjHvuWXXwwvREcvKE0CDgQawXWZUl+8GmG5bAzMcyVUAvzLNquf4jq+McW26EcSBwm8gsnE6AN9Y3jVp87y8GevAnmV8rEmb/I+so77K6VQPTt88spzQxwu0ESB5jDiHjHwgA6e0pJq+A3AA0iqTvI4KRD/iJX20ETKMl9pFnvb3WRR5UpN2qqzLE/jE8afrAN27imp0D7kx/tHrgIqNlB+6RN6iS43hiyg6zQQJ/i0ah2vSDt9pGbRIJX0XeJ920s6lXtKewM+UA/ApysZ1Ewpx5If5D39vtJwP81/lt7qyB9gnxrWh7pmSZ41+CqDsCKiZNxYaH9j82saYv+hjBepgDCTr5HbZWD6lTjVpNoKHNe9bj8zXvKc2LfvM+MN+18cXtdm8vOlM0dJsDn/z7FMy5118BjbVRtmEmzEepK417Ta1nxzZspe6tOPGc4Ack/8xasDZGFfatYUcIFANOFvIoi19s7Xflm6eqsKZbQcimZ0v976x5IMhw9C1HKNR40DjwPbZfBsZM/Qtzc3HFRpvGgcaB3Y9B2w7wFnhoH/605/ul/LPW0oHxLcHGgAQWYYmWr1R40DjQONA40DjwHo4sAg42x7DvZ7c27ONA40DjQONA7uEA8L97UNhiYr9SESIcDwAA8hSwUaNA8d0DtgPyv6GoiDMYot6aqDZMV0qWv23GgdEmtkb0J5f9hD1J2IJeGbZor4rkkZ0sugWJBLOV1MtCWzUONA40DjQONA4sJkcaMDZZnK35d040DjQOLCJHLDvC5DMRu+WU2R/DM6GvWjs19aoceCYzgH9IWCy5UhjH3w4pvOo1b9xYFdzwLJUe+fZTN7yS3smWVJluVhJlmmaJPLRire+9a3lrXbeONA40DjQONA4sGkcaEs1N421R9+M21K/o3bbtvY7arffWOntwWAfH/viaN/s/zKWtl1rHDgmcoDzrZ8MP2JyTORFq3PjwFGJA6LL7P1lXLPPWrmf51GpHq2sjQONA40DjQNbnwNtqebWb6NWwsaBxoHGgZU5wJFY9IXGlTNuDzYOHE04MPzC19GkWq0ajQNHew7Yq9NHYxo1DjQONA40DjQO7EoOHHtXvry9u3GgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcWCrcqABZ1u1ZVq5GgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgd2KQcacLZL2d9e3jjQONA40DjQONA40DjQONA40DjQONA40DjQONA40DiwVTnQgLOt2jKtXI0DjQONA40DjQONA40DjQONA40DjQONA40DjQONA40Du5QDx9mlb28vbxxoHNiyHDjucY/bXec61+nOcpaz9F+z+trXvtb9x3/8R/fnP/+5usxnO9
vZugtc4AJLn3vXu97V/fWvf+3TXeUqV+kudalLdSc72cm6P/3pT90znvGM7ne/+93SPJLgWMc61g6ftP/Zz37W55H7U4/nOMc5uitc4QrdKU5xiu7HP/5x94UvfKHDi5Kuda1rdf/8z/9cXtrp/Hvf+173xS9+cafryy6stx7nOc95umte85rdaU972r4d3/zmN3ef+9znun/7t3/rjn/844++3kbMX//610fvlRfx5NrXvnZ3mtOcpvvNb37T/dd//Vf3mc98pkwy+Vxbn+50p1uY/ve//333oQ99qE+zXr7I5OQnP3l385vfvDvTmc7UyU/bvuENb1hYhrGbl7vc5boLXvCC3QlOcILu29/+ds+DcjP6E5/4xB15Xkaf/exnux/+8IfLku10f6web3zjGyfJ/2677dZd6EIX6t73vvftJNdj+S7iT5OH7U2zTB7KBrzwhS/cXexiF+v1y69+9auODHzpS18qk/Tn9LB2GqN//OMf3dvf/vYdbp31rGftlEPftLm6j4e8973vnau/L3GJS3TnP//5u1Oe8pTdL3/5y+7QQw/t6KxQbf/0HN1NP+jXdPeXv/zlvn41X/3dSJlSJn30ale7mtNRMsb94he/6O/VpPWAr7bSq2c/+9k7bfKpT32q+/znP9/9/e9/3+ldxznOcTrjxrZt2/q09K30dNx66VznOld3xStesTvpSU/atzud/J3vfGdptsbp613ver3uJyuhzRzfVpWR4x3veN2d73zn7g9/+EN30EEHpagbdtxouavpv7VyR08bX3x5VN/9+Mc/Pre9t6LczRuDNlPuSkG5y13u0o//L3rRi8rLs3P96BrXuEZvQ+njP/nJT7oPf/jD/XGW6PCTjbCbm+wNubr896o6b57seeMi+4d+5dP44q8vMBo39L2Sbnvb2/bj+mte85ru5z//eXlr7vlUe0AGGy0ncwvVbuzAgQac7cCO9qNxoHEABxile++99w5gkIHp6le/erfffvtVAVjyu/SlL93/OV9EDD6DDyftBje4wSwpgKfWobjTne7UgxnJxCBTS3e4wx26i170orPHGKbqwsl5wQteMLsOmAK8LKLvf//7KwFn66mHOt/tbnfrjn3sI4KLOXSMOw7SPAIkLQPOznnOc3b3vOc9Z/XmGBj0GRRATu+pITzktC8iDneAs/XwJe944AMf2AG1QuV5ri06MqIf+chH7lBufLjqVa/avfWtb+0++MEP9o+f8Yxn7K8tysu9v/zlLysBZ2P1mMKff/3Xf+37GdkFtAwB4bF859WhycN24GSKPISH973vfTtgRUh76ENAjmc961m53B85kec973l3uFb+MOng67qIIzicqKC/r3zlK3cvfelLu6985SuzR+n6+93vft3pT3/62TV6jt4DngHaUW3/vOxlL9uD0qVeBC5f97rX7Z70pCd1f/zjH2fvm3ey0TLlPcDHRcAZ8DLAWU1affw+97lPP3amPuc+97m729zmNt3Tn/707gc/+EEu9w74Qx7ykA6IEVJXE1XPec5zuu9+97u5XH00bpYgvbYEoL7//e/fCVwtMzcm3P3ud+8dQZNAJXC2WePbemTk3//932eTemU9NuJ8M+Supv/WyB0e3uxmN5uN8YDYi1/84n3/2muvvfoxJTwxebbV5G7RGLRZchd+OLLx6Epj7xgBVvSpUo8ZKy9zmcv0tohxPrQRdnOTvXBz+nFVnbdI9rx9nv2jD9H3IX3OmPLCF75wB7tZP6ZLT33qU08Czmrsgc2Qk9SnHRdz4IhRe3G6drdxoHHgGMQBM7kU/v/+7/92IlcY+De+8Y372foHPOABHYOshkQwiXwYI4MOo4ThkoiyS17ykn1SztWBBx7YG4G1QIyBBR122GF9FJGotRpikAY0U/5PfOITPRDnmiguxurrX//6PstvfOMbvcMxzB8Qc6pTnaq//M1vfnN4e9Lv9dSDowo0w7tXvvKV/QzpT3/6044TjURCAPSG9NWvfnV4aYff8gTMaDeRLO985zv72W6GLuCIs+h9NSTKhiwMybvkicz0htbDF3kwcgOUAbg+/elPV4Oz+gmwD39F8QEkRCcCQ65//ev3zjLZ4YjPc4bPcIYz9LIjj1VkZF49Hv3oR/esmif/+Mr5LB2C8LaWP00etnNuqjxILYIhoFmijUwY0C2uc7Tf/e53b8947X+iMQE70ZO5SXYS1QQ0CWgmQvajH/1o/yzA3+z4He94x45sJHIYsA408/yha0AZ/cAJoK93W3MaRcABfWr6Z2bqyZbyfuADH+hnx4HqJznJSXqg7olPfGKKP3rcDJnyIhHESP1LfdJfXPtXRorWpNWX9EXgJRCTXgSg07UANWNmwEIOkjFVJKD2+e1vf9vzXPSVtA960IOqJx6Un4xoMyR6jU4yjgFjRcKJIBQVPEZ3vetdR8cwaTdjfFuPjFzpSleajQljdVnPtc2Su6n9V9mnyl3JQ7J1yCGH9BH7N7rRjfpoT7YakDq01eRu2Ri0GXIXXng3OyU2Xq6XR20W0IzOZQPqs8Z4477+TXdn3F6v3dxkr26yVVutqvOWyd48u4p+BZoZL9/xjnd0VrKYhAZKm3h4+MMf3stIKUdTzmvsgc2SkynlbGm6rgFnTQoaBxoHduAAxy2gxL777jsDE0SjMP4Za2Zqyhn0HTIY+WGpyNgSPkaL2TuO33Of+9zZMs0AGox8wMMqZAYdGdw4MbWUWXsRcMqGLLvhfFrSxDkNcFZGn5XvCcDIYRkupSrTLTpfTz0SZZclYHmPNkacZNFhtXTTm960X+YJjNx///37x4FtogIBitr1Va96VZXz95a3vGW0GLe//e17J4nB+uxnP3uWZj18kYlZQET2ylnj/uKEf0CB8PGTn/xkd/DBB/dPcVaf8IQndCc84Qn7SI8AZ8985jN3yhWIsOeee/bXOdtTllMNM5lXj2X8AaBYEjSP5uU7lr7JQ9cDkFPlAQ9FhSB9MPqD7DzsYQ/rgSz3S+CMrKAXv/jFO4A7/cXiH3AKMehLp1k07yMe8YgesLnIRS7S/ed//mfvXNO/6NWvfvVMR5NnkcWcB1FqQPCa/gmk0T84F8aQLL9X11vf+tZ93wMcLVqyuRkypZ7GLsQJXtbvp6YFeJ7oRCfq8xUpmCWunGrgBeBffxNNBhSlG9BLXvKSmdOt7ffZZ59+okOkwirL+rPkjCOvPZEyAPVEv4loGwPOlD+TKf1Dg3+Rz8Hl2QTaKuPbqjJiTAMMbRZtltxN7b/qNVXulFU/048e97jHzfqTNtZ/AeLsGFHaW1Hulo1BmyF3+Mu+BXLRb4sIMBY9hr9sBYSfT37yk/vJZfzV3zbCbm6y98VFzTF6b1Wdt0z2xuwfNpUJUQSkNumBbEHA5mNPmfCaN1b2ief8q7EHNktO5hStXR5woAFnA4a0n40Dx3QOiExA1uSXyyPNuLkmEsHsCKPfDAuDEDBlHX/IzHkGNEs+xpwARosZPyTSgfFt9uh2t7vdDNQQMcWIFLHAsUAcOWHyDGgOmagBABanDJlhNyNoGR2yh
5VoNvtX2IfLbA1wh/Gk7GZqReUofxmuL8JCmve85z19PvnHEQGczdsfLOkY9/bGYNQGeMs9RzzkIAEiDcjebX8sDg+QaFk98E8elm+Z/RRBAaTBC+eWX5kFQ8ohvNx9S68s30Fx8PofFf/UHwESS+KMikzEe/s2mTHmtCF7QHzsYx+bJQeucZ4QA7mUtSTCf/kgYfBT+DKlfckERzKEN+Tgec97Xn+JXNzqVrfqRIPhs4gQ+5+JrAsBUNUfj7OcLffsU8YR1baLyFJX5RUZVC6N8oz8b3KTm/ROt/aTzn4+AOiUY6weASTmyb+8yZ2+RTblqb+WNJZvyZ8yrfMmD9vbq0YetBOik0qizzi82j+k/bUnx62MiMr98ihf6WLU5x79qK0BPFlmsvvuu/eOIf09nNig3+lj+7fMo7H+Ka3oSuME/RLQzHXjAOCMM6ocw7pLE9osmUqfHC5LznvL49S09jRDJlmGOlU0Kydt2+HRtPghMhUPEqniWf2L3jauDKOzp+g0abLUfahLALD0HbkiR1nS670mqW5xi1s47cue6N7+woJ/y8a3853vfP3EgXqTSW39tre9bbZMeFUZufe9793zjlwmMmusmGQXqEFGyRrZNwbZy3ERTZU77SjCGhmzy70pOdf4bUwzttX0X/lNlTt1RKJCSxAaYE3GjLH+AD2ryN2UcXBVuVs2BvUVG/m3TO7YhyIsjd3knN577WtfO7MPZcnuNK67bwLh8pe//Mibut4+xEv9NKBZErKlyEomJ2vs5uQxPDbZO4Iju1L25tk/Jp/IO5lwHiIb/BN+h71Ch8AZ/UNvkUnPGsPpjHL8q7EHpsoJvdBo4znQgLON52nLsXHgKM2BOFVjS/gATIz6GGx+2/zZM4yLKGrLTRjxgI55y/4sETJYcBgy0HDg835MNDPvD3hmYMrsvXsGK84l41l4NODL8hQRFGUemTmSj0FL2kTayIOR6s9g9JSnPGW2F8FYhBhjK2DPGH+UC3FQLCdBNlMvATnXHvvYx84MLr+RpbEMPoa+fZIW1YMTIHqEQY7Uw/NmrTgqj3/84/sj5wzhM54Y9FF4gh/3v//9+/oDhxjbQ6Cwf2DwLxGB0pekHPJhTHImY9BzxoBAZsLd97yNUzlxHKgx0Mw9s8KIDCUaaxFfprYvpyZOJh7hTZxJDp/3ej9Sp4DFInVE0LjGyX3FK17Rpyn/ySuONLBtHpnN1o/kNdyUmDxaTleWQTnxFVi6ba2NgXxj9ZBf2t2709bkn6FG9kXxIVGBolCGNJZv+DNM63eTh3p50HfoCPqLzHC8yQNAEwGdQ2QSWTp/y1veso8eIRtANmBEueSQbhkjuosMoIA7AdYBWnSvJfJkS1k4lJzDeeT9Y/1T+rEIYzJpUgQZF0qnob84+LcZMkVH+kP0yA1veMOeJyaEOEJlf61JGz1MLw8pfZgOxgOR2sP+7hnOeyZj8D7EgZwyZmk3+ev/Q7CTjo1eoIujS72DQ6eMxlc6bQpwtmx8M7FFTksi25aDApJMPq0iI8YQOpDsAAOVfYzItcmQ8F7dPWd/PXo/0cFjz06VOxEnABg2iAnE9Dv9l25Hot1RTf+tkbvIy3BTcu+MI55xrlbupo6Dq8jdlDFIHYa0TO4A1GwoFHkXaffQhz60e9SjHtXLt3tsEH2AbWYScR5wNi/iTR5AECSyF9HhaMwuHNrNfcKRf032tjNlV8vePPsnG/wLImBrlmQ8B5wByYZkEja62dGk6mMe85h+Qj26uMYemCon8ceG5Wm/18eBBpytj3/t6caBox0HOFiIcTGkOAYx2IAswAQGjcFByLL1/ow1hgvnvow4SH6MDgYNet3rXpfLvUENyAEKMX5FmnEMEcCKUS9fRutHPvKR3mhljHM+zZpzfMzk+HvqU5/aG86WLMSxFIXFaANkMYoMWvYg804Dnig5wMiQRD3Z6yIOEge2jLAbpgcKGSDxsHSCpMsX9JyLDgP2KZP6WWblnPG3qB42LVUW9Xj+85/fg0/awUa3eG/WC8go+otzzhEGCoa8A9k0OiQ/RqRIN/sPDcG+pHMMAPfrX/+6vNyfc74Qhw8pnyVIDPl73OMe/fKxOGscaPfHCCBLFrX3y1/+8lmSRXyZ2r6ARWCsvYQAQo5IvfCQw8UZsbySkcQZ2mOPPXoAzYz3m970pll5csIJN+ucvgGcKCPski7HAFZk1lLakrxDGUQRiLTjCPtggxB9fQC4Sr7m1UNeY/Lvuig3bSHiRoRUyuFeaFG+SVMemzzUy4OoQTpQu3Ly9beAOoDkErhPZA3gCygR0tc5GSIefeVtEQG5yAx5z3K9GOB0oKVI7iOTCD4EQ86HkWt5x7z+mfs5KiNgAYArfw6HPryMNkOm9JtQKfdAQ/fwRaQdqklrfLFUKyB13uFIn4aMMcNJArpw2xroFaBDO5ZpsqRo2ZiVsWno0OXdrtMpAKQ4a3hgnDV5ZRJAJPYUWjS+ASQTwcZx09bk1p6Y6gls4jSWNEVGPMsxNR5Y8prxpczHOT7S5+rK0bV01phEXgFadLS+R6+P0VS5Y9dYNn2ve92rLwv9L9LP+IHo/0S31fTfGrljXxjLY0uV9ck7s2S/vOd8kdzVjIOryN2UMWhYXr8XyZ32ZTeV9qExns1B/shd+raPdayHRLUlKjCRRzV287x3N9mrs8E2S/bm2T90MRrbLzkTQRlDyzZ2jS4ApNO/9BOZNJGU7Vxq7IGpclKWoZ1vHAe2hx9sXH4tp8aBxoGjOAdiwDOmhxSDnlEasgyRM+Y5gA7jFhkoEtmQtDnmi46Ak7FlnElXHjlyyIwe0Ax5nhHt/cqUNP3NwT+DLKMe+cJYnAcbtwMnECci0XT9hcP/icTIIO1SAMQyTc4NiJmNzBcgc8/RPc+bYWJ0McAZwD7CELAqHxQonyvPGYTowAMPnG06D4AB0uEF4GweAV4yuANsLGPgXGQpLCcyBsJYHmXbx1go0wEVUUAATsvLXvay/hqA9cEPfnDPZwYu8DJ17hMU/+JwWr4aMK64vdPpeto3mTGIs0RO2eJccagDgpVgY55zJCMBzfzGm/DZ75I4b+lnY/ssKQPjDGgs6gevRAwEQJQvOaslS5cANfpNnIjaPIbpmzwcAT6XvFkmD/imr4bSX/wWYRnZ81ubIXJAZ9B5gHt9jSxw2oEP88ikRhxyYJj2R3H2yCNgxd5mdBbdL1/A+5g+9OzU/um9IjbTF4DBeb98xmizZEokQUiEtD5wwAEHzJbZAcjTv2vSZsk6EAOvQ/RdCWoM+6x6AjiiC7RvObbU6LRMhpRL9lIOx1yPjjIOGS+90+SF4xRaNr7hn7Y2tgDjyDhZpk+VgZwPo9qWyQg+AUDky/nMV0/HyiuawzvIs8kDNosxlp4VHaQ85H2MauWObtZnkLGDQ6wtjWlZ9u9eTf+tkTuRewhoa0lmSP3SznGyc89xmdzVjIO1crfqGLRM7uhb5CM/sQ+NmfZnJAsmBaOD+oQr/sNn+hbhf9ogfXiq3Tx8fZO97RzZirKXtoptOGaPloEGQ7Da
xGeiT+nF7NULfI1u0P5T7IFaOUnZ23HjONAizjaOly2nxoGjBQeivMeMjBgHDJEQQ9+sstkTwBMS4ZR9mJIuR4ZcjEN7jkwhZcmgxcAfLgNhkBtQAiiN5Vk6MICpYR7y9R7pEqGWfPJlUYCUsH6OoKV0lgAMI+rMfMoHH8eiQESSJJqEU+p9Bk/ORAbc0olOGXJUx+Q/XAYLBPO3iAzclrkweEWqpC0BiQwCjk82OR/LJ+ndKwfxpI2MRI5cBxICSM0Ix1Cw948lRGOEJ3Eys4x3LF15bT3tm3y2rUU1IGX3ZcGSUh71007DducY4o2oBtF/6qqt9t577zKb/jx529tsLGovUXj6inqRDXmVDqf+UDrZO71kcIHMZ2NbS8XiSA+SVf9s8rCaPPjCHX2pHegXfVn0mOVoHF+OXjb3Z3TTmSJ6szcXY1yfsimxfih6aGzpsIjc7IlieVwiJDR0+i/dJx+6AQFsRai6L8oRUFdSTf+0V6Q91Mgvp5nuBIIArRL5VubtfLNkilON34CcQ9f21QzhPX2uj1gKba/GmrQmiABxQBttx7kGDg6X7Qz7Kz1j0oJO8V5LvC0nFLlmTK3RaQEjjQ1jlOvq71yklCM9XPPxnGXjm3ZG5bJXv40tJk3GaJmMiAQH8tKXJr0WUWwLY/jQwV32JddV5A6AbZ9REXABr40FaQ9lrem/NXIHDCRvxnKRbokY9FtdjB/lOBy+LZO7mnEw9Yx85R055jq5W88YtEzuMrE5nKykL03obgTttttus69sikgXcRgKn1PfXHeMTVTKV3nfeXkverlMkzzyHvea7G2fABrjOf7k+nplT15oaPNtv7r9f2x3v7yvpKEfQDeZ9NJP6Xz6d6o9sIqclGVp5+vnQAPO1s/DlkPjwNGKAxS6JTxjSyFybTiAWJLBKY/xUi6/HDLHckSGAWfNlwSnUAA5aRl1MeyGzyZ8fnjd7+xB4dxXxOaRKIEhZYYb4CXyaM899+wHZfmUjqjnEu3FGVDHMeKMitgYM5DG0pfXEjkydArKNIvOzcCXX+sr06pfIgay71J537k6Md6UXXRaohCTLjPdiTzLdU69DYEZM57PDFzul0dGMjKLZ9Z4Cq2nfZM/wx6pW7kkLvdzFK0xjKZMOdXTuVlpYfkAt3IWGs+AB2ie/ANO7QHIqIrxl3everQ8Rl7AETIeOWe8IdGY+iYwYyyScN57mzxsjyKtkQcRKZEBoJQ9cJCJBLIDYNA+2kY/okv8DYkOAD6LWkqUa9JwmkXApF8Ag4Z62fPAHSBcQDPPM84B6YkESp451vRP5ffHOVA/YCDQV2TyPOBss2QKb9NPU5cc6XV9NstXa9LKA9gt2kzki/7rj260jyCgEJV6oL+w9i+Rz4A3y+e0pS82o7Sd82VjliX/aN6YEscbeKec9BI9DLjPl1jzPjxwTXT3UK6XjW8Zg7MfUF+oJf8WyYhHAVMIGJeyxiYg564ZK/SRAHdlBEj/8IR/q8qdySoANRLVljbNK2v6b43cKS/A2wd4tB194ZoN8QFwIvuHTnzKlDKOyV3NOJiPMk2Ru/WMQYvkjizn/SlP6rlRx2x7IT8ArmjGkrdkuNZuLsvWZG+7ztuKspd2yiRnbKZcd2TXIe3oryTyMiTyIh92ANCtxh5Yxf4evr/9Xp0DDThbnXftycaBoyUHzMYzSgOClZXMteHMOaMm96S/89p+Otkst3zeeTbN56xltnKYZvi7BGdEIM1b3je2SW7yitNi0BnboyrpYlCaPWaED2fOgQre477IghI4E6mV5U/zIqUYYFnOio8cSs6vaBB7rcVxS3mGx9SjnOEq0wBHhgP38L624nCUzrI0JRgXR6t809kafwAAQABJREFUNuf4DyBj5Awd0QBnw7awIX1AIEbuHmt7hr30pS9NlrOjcmV/FtEQUyl8mdq+Y/lGHvHF/nrziFOonMAN7ZXlGklvqYg98dSXI31oEd0S0AGvx8AQeQA84hgqC+fVklVLwjhKq1D6JzBPVNOQgCQBZeftazV8Jr+bPNTJg2W4ZIPjFdAsvKQPYlSLJhFBAWiib8xUlzPOnkmfJY8husGG8omAoYuGkRjS6jOAs9IBTB6iRMnDUM9M6Z8cWdGR6lIuOZU3GQZKbzs8ujPvGx43Q6boHfKPv9EXee9wTKtJKw86V5SYP30XcKNt0qfwWBqAmqhb9cOfkuwZSvdxqErAfYpOiy5Xbm1UykmueRcnLvu7uT6mC+gK14Eq5ZLDKeObepJX5R9SOTZNlRHR66Hozvx2VFdl1afo0/SHjMNl2innq8hd9nSTPweYPWQ8D9X031q5I8f777//bDzyxT5yljLF2a+Ru5pxkGyiyNgiuVt1DFomd2lz5SBXpb3oWil3fteS1RQ+vIREsGVrjzKfVezm8nnnTfZOOPMJpthgR4bslW2UCfTYuOW9jLUpU3lP/x+S/ojU077HNfbAKnIyfH/7vToHGnC2Ou/ak40DR0sOAEJKJ7qsZAANTnyI45V9XYBOIldEpt361rfunYikyzEz0lOXaXqOccggYyQ7z+x68jSzatnjEOTKfcfMnMtDutJxMmjZvJjRn68iWbrjOnBnCHAEVBoaaFkSxYiaF7WTmVOO6fDranE2GKHzKPXg0Epf1gMgJ5qNY+jLnGNkg11GoLralLwE2colmkNArMxL3RgPHPvs7+M+WUhUYhlNYu8g0WYoSzbzOfoynfvASnxHNQBO+DK1ffsXDP5xKi2PYuiUgKhkDCPLexk6IizxSlQY/uEjfoaUIRSHNr8DGHBKx4jDF9CME17KerlUcwhojOVVXgPuaZ8hceLJG1lW1tJRHaad97vJQ508RFbGnE08jn5hIOsLPnDimA2Gy3ZIlFC+7kb26C5OKtmkv4Z9LM8D4sjaMFrN/VyTpqQp/RPwC6CiGw466KDy8b5vuVA6uzskOPzHZsiUTelFBgCsgA0lRffRnagmLZ4Y7wAOz3jGM/qIn+SdrzAnAstyOmA6YM0HGUqKQ6bdAEE1Os27OW1kSjRzqb+yH537nD9jD9kakokQYwr5pAeyLDjppoxv2k0e7IAhAf3VUbSxyOYpMnLYWjRm+FDmJx8gFV4Zs8NfOtyS50RwlM+IzLKU095k9mYdo1q50+7GPOUwAWb8YA/Rt8DY2v5bI3fGexOR6uNDQ6XutlUAyjhTI3c142CN3K06Bi2TO3Id2RedWI6Z+rt+pn1Eu9ZGpJVf6jT5MG9CtNZubrK3nQNDnbfVZK9sp0yq6+9D2zsrFAKulc/ROWXfZF9mgpzuii03xR6Qb62OKsvSztfPgSOs+/Xn1XJoHGgcOBpwwKw3I8OsN2AkxPh2DYBV7jNyn/vcpzfWGVCM0SwDtLQks+3Jg+MeUCEGXe4tO5pJRWaXSwCAkcw5sbyF8TSPGOBxRLPRcNICRIA7jM0s8Qjg4X0ps/QM1cycWg5RUpzYscEz6TLTNHRcAHd5TwbSPFMeObJ4zSAXxVXSNa95zf5nyl7ey3nARc550rsHsPHlKYTX5cxxf7H4l/3r7OlTbh6
e5w3sAd7kmy+NcSx8KCCOkOvDyADLE5E6LipDUZz+tLZ9h8/7nS+gMorsNVSSZSbkmYPN6edU6ifawYx0SQA1190f7kOXkH7lHaMYVO6V4AKZICOhRTKSNOURSOtrYsO/yAog270Yh+Wzy86bPNTJQz74QEZE55ZkEiA6AshMhqKTbJwcUM0zu+222+zrbtHJIk0Cmvkq7DzQzPOcQPlLX+oCgEYiLIbyO6V/qh+iT+UVAsbZ/w8t0/+bIVOZ8AEWZ6mysvid+kan16TVPvLbthZFV27STlfQkXgM2EBAd4TnmUTxmx68+c1v7rRfPkn31eq06HZtGf3gmLYlT8iS/KEe8Dv3jTF+D6MUp4xvmUgxLpdjA/6yH8g8mZwqIyatxsp68MEH93Ux5rtvE3iU9qPD7Uca8ltboHm6170auTMeZAktMFB0nnGLA8wuQrX9t0butC2AEAipfqHdd9+95zUbgz2HauSuZhyU91S5W3UMmiJ3mfC0ZUj0p7Kx38ic/lQLmpHhAJDkYh5o5j21drNnhtRk7+9VNhj+bbbslW1kwoVtiwDmIXouQQXxf3LPkf4t+ye7l0zqn/p7jT0gvxo5kb7RxnKgRZxtLD9bbo0DR3kOMPwYqwwGM6fZU4RzgRjdibS61rWuNYuOAYgwYhkQjEkROvb9eNSjHjXbVDODi3TDpTHLGGczacs/zdb4hDNDCciQEGnGcIz2sbwYTpZoAslECOy77749uGPQC1hhz6ksLWKI27CTgyM6wfsY/pnJttR06FRmf4bhEpyyPGaeRA5xIL3bUgobgZdgYOlsls/m3GbiBm4zWTaftyxSHgx2hvqiDwRwWjhG3q39zFr7vW3N6eOUa5t8vTHvGx7lgU9m3nwgAe8BQsrt/eVShnx6Xr657kt2yu19ogA4PiH1QDFQcn3ZsbZ9x/ID9onGMMPti3OcPeUg+4zxsm7O3/ve9/ZGkb7y5Cc/uQcEgQMxkmzenOgi72MsJWw/juOwHCInGFTex8ACMqobvpQOATkMODnM48j+3eRhu4NcIw+WAgP8ydo+++zTg9V0WfQZ5zU6UuShfqT9bdpPFyXiRlvjPzkhd3Hkydr97ne/UVGw3xkgB2hK3k06WAZnosQ1UTNAfBMAAeSS0ZT+Sc/Sb2RdBI1oOCCwZ5VLNNUiHZU6bbSOoTf1VXrnYQ97WF8uZUm5vM+HEVBNWs/R6xx8kxk2+kd0rPpauh1AWtSNfmsMEv3L0efQi9BSLnolm47X6jR8t+yHXiZT5ETdtINxPeBSX7gV/k0Z38gLgNeYiscZpzOm4YP6boSMjFXBWKY/mAgTZcV+MfbQy5E9/XQeTdVlQKtyX7P0ExGWJk5EchpfOdJT+68y1cidfqzf0gvGU3LFXklUPznS7qhG7mrGQXlvBbk7cO0L43vuuWdvR0X2tTmbDc3bT7S/OedfCY7g89hSYbIsIrjGbp7zul5up+i8JnsvnLFws2Vv9qLDT/g/9LZxmz/CPty2ZjsbL+m2AHnlc6XPYnzXR9EhhxzS23b0fI09MFVHlWVo5xvHgRZxtnG8bDk1DhxtOAAECwgFNAhoRmFn3ynGWWayXWeYhcy8cgAYdGWUTJyucnlhnimPnkU5OgcwCbU3UAGIOBoMecawCKnhskfPoDIPzijDFjBhht+MMQNfGvW1FCokOsoyJw6fQdHgCKyQlsE6ttQjgEmcpORVHoFGgC7lFqVgVtNzwMoY9AEYy+fKeliKwAkCynCS8AJPGG/KHEAlz+SY/Cwn+sY3vtH/BNh5H6dN5ITlNJyPZcRYZDTiDUebY8QA4HgKt0eiFNPmvgCVCCrtz0FAQMQsJfI7wGTycG0RlXWrad/kWT7vGjk69NBD+3ZWJ3Ujx3htxjmRj9IyxrWZektDnrSltCIihhEbHGnknYvAVWXQltrU+7WPc1ENQEpURoP6PayHa2je9e13j7g/L92863k+xyYPdfLAAPfHqdcHRcMwqskSJzwRNfgrOoveIhP0hj6jn8TgNqmAAOnuh5yP/XG8QpxrX/3SzuSdrHlG/wMGD9t/Sv9UTpMNADN5ARGU2Tn55diWgHLKMjxutExZhmlT70QEq4tyIRMhJmRS35q0nn/2s5/d92l1pNcT0QaY5NyVJG0mXehveoP+5TQ/7WlP20HH1Og0etXz9CxnTb4BzUxakJdFlLrnOEw7ZXzzjDFEO+MF/gY0E6WbcXOjZGRYRr/1B0tV1YPOBWgqi0kJ7T/8uNEwjylyx66h870jX0GWj3bNElf2ETtpav/1fI3c4SF+AtiND9vWbBTvo1OMFcOozhq5qxkHt4LcsV1MKuhD2oUeywoJY/S8PUsX9YnIu3YhP2N/3hWaYjcn7bxjk706G2yzZW+oC+1daNKJ3MQ+zCRTJjzStnnWh2diywHN9E/2cLldTY09IP8pcpJytOPGcuBYa+tyew+VA0TRNGocWMYBBmaTlWVc2rr3a9rPgGBpDkeLIZz9X3Z17RgrymVgMqO9SrkAZkAJBpdogUXGNICOA8ARBHhkQFwPHxi4QCURHuvJ0+AtH3XIPi9Ty6VdGduO5AKgWEsASG0hckNbTHGIa9+xSvqa9p2Xf/Z/AiT6m9fuDGrgsvQA1+GHEeblv+i6PDnf8sTXjchz0fs26l6Th+1OVo08aGMOPjtsGWgNXAME0EXL0ta2qSgoziKHe5FDWZOv/NgLnHzLUhbp2Xn5boZMKRedDmTSZ+f1bWWqSQsElS/nyJiZyYKxuhnHjEHALfqXE7iIanSaCGDlABIaG3YFxVl0NJk0b3zZCBmZVz/90Bjp/YngnJd2eH0z5K6m/9bI3alPfeo+qs7EoomzRf23Vu6mjoP4txXkDmDGJgGiLePFsM034vdG2M1N9ra3xFaWvegW9nu2vJgnP2TCxBaZXKaPa+yBzZCTeXU4Jl03HmuHMWrA2RhX2rWFHKgBXhZm1G7uEg609tslbG8vbRxoHGgcaBxoHGgcaBxoHGgcaBxoHGgc2KIcWASctaWaW7TRWrEaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaB3YtBxpwtmv5397eONA40DjQONA40DjQONA40DjQONA40DjQONA40DjQOLBFOdCAsy3aMK1YjQONA40DjQONA40DjQONA40DjQONA40DjQONA40DjQO7lgMNONu1/G9vbxxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBzYohxowNkWbZhWrMaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBXcuBBpztWv63tzcONA40DjQONA40DjQONA40DjQONA40DjQONA40DjQObFEONOBsizZMK1bjQONA40DjQONA40DjQONA40DjQONA40DjQONA40DjwK7lwHFqXn+5y12uu8QlLrH0kY997GPd5z73uaXpygTnPOc5u2tf+9rd97///e7Nb35zeWvl8xOd6ETd3e52t+7rX/969853vnM0nzOc4Qzdbrvt1p32tKft/vSnP3Xf/OY3uw9+8IOjaW
sunvSkJ+2ucY1r9Pn+0z/9U/eTn/yk+/CHP9wfh/kc97jH7a5znet0ZznLWbq//e1v3de+9rXuP/7jP7o///nPw6SdvK5+9av3aU94whN2v/jFL7rPfvaz3Ve+8pV1pd3p4XbhGM+BGrmcyqx/+7d/645//OOPJv/Wt77V99XcrEnrmUtd6lLdBS94wU6//6//+q/uE5/4xGgfkpYuk/YEJzhB9+1vf7v7zGc+0/3oRz9ya110ilOcotdjpznNabrf/OY3fTnkvYyOd7zjdXe+8527P/zhD91BBx20LPno/X/+53/u7nGPe3R/+ctfuhe84AWjacYuXvGKV+wucIELdP/yL//S/fd//3eve375y1+OJe15fP7zn79Tz9/+9redNqPXhnSe85ynu+Y1r9nrPzqNTl80JqzKt+F785suvdCFLpSfOxz/8Y9/dG9/+9tn18jA1a52tdnv4QldTM+GTn7yk3dXucpVujOf+cwdPn384x/vvvOd7+T2DsfjHOc43bWuda1u27Ztnfcaiz71qU91v//973dIt8qPc53rXJ22M9b8z//8Ty/D88pR5m+8w5v3ve99/Vjj3olPfOK+TmW6sXNjzQ9/+MOxW7NrZJAsPv/5z+/++te/zq5POdkMO2DKezda/mpkqiaturAB6Mazn/3svUyRp89//vPd3//+952quhXl72xnO1t3vetdr+8L733ve3cq81a7cMMb3rDv62xI+nGMVtXfGy13Te+Ntc72a2N6b37q8TtXvepV+7GSbTFlXJdLjR1V4zcYg9g8JzvZyXrf5RnPeEb3u9/9brzga1drxvm5mRQ3Lnaxi3VnPOMZiytHnP7qV7/qx8VcqUnrGTriMpe5TG8/fPe73+0++tGP9mNt8iuP5zjHOborXOEKvU3y4x//uPvCF74wG9fKdLXnNe02zPvWt751d6pTnap78Ytf3P3v//5vf1ud2FnL6F3vetfCcZOveotb3KK3SV796lcvy26n+6vI8E6ZrHBhVXtl3qtqZKpGL3pfjfzpf7CL053udH3/+/KXv9z75Oze9dKqfXZM/tZbls16fqo8byW7oQo4O+95z9sL1DIG/vznP1/oJI09T7AxhhBuBHB2rGMdq3vwgx/c58dhGQPOgFVAKGlDHD5O1BOf+MTuj3/8Yy5XHQ3QN7jBDXbI96xnPWs/EHzoQx/q3vrWt87yY3DtvffevaORixSMcu233347DIT/+q//2t3vfvfrGMMhzhsFcthhh3UGzlBN2jzTjo0D4UCNXOaZZUeGCIdpHp3pTGeaAWc1aTnpD3zgA/uBK3nrb9e//vW7t73tbd0HPvCBXO6dzkc+8pHdKU95ytk1uocxoV+uBzTn9N/znvec9Xv5XvjCF+4NVn2THppH//7v/z4DzuelWXYd8EWHLnrPMI+HPOQhOxi/AB7674UvfOGsLTyjPfANaBSiYxiCJgie9KQnzXQVZ9CExbGPfURA86IyrYdvKcvwCKwyXs0jxun/+3//r78NRFoEnHECApxd9rKX7W52s5vN6oZfF7/4xfuxYq+99upBy7yTQYC/pb5WV+POc57znI5DsCoZXzhOIeMAMPj973//DqBg7ueozTI26SMmaRAHSB9YRkDZZcDZuc997r4P6Je1wNlG2wHL6uP+ZshfjUzVpNVO97nPfTr6OYTft7nNbbqnP/3p3Q9+8INc7p3OrSZ/9Mjd7373HqQnH0cF4IwOp/f0nXnA2Sr6ezPkrum9mfjvcDJP7+2QaMIPk230pgmTKcBZjR1V4zcAzOjxkMnIRZMxU8f55DfleJOb3KQzeT9GwCITSqGatJx+9QsZE/DGJN1zn/vcXO6Pd7jDHbqLXvSis2vGwUtf+tK97VIzeTjL4PCTmnYbPsseSvlNigQ4Uy5/ywjf5k1cepbtys4zFqwCnNXK8LLyTrm/qr2yKO8amarRizXyxx68+c1vPrP5lRd/r3vd6/Y28aoYgnxW7bPz5E+eW5GmyPNWsxuO8GwqOPp///d/fZSBSIOxv09+8pMVuW180pOc5CTdwx/+8B40m5e7SDMCBjT72c9+1r3mNa/pIy04UwYDxt0qBHWOY2L2593vfnf3lre8pVeE3sU5AYyFRJkwICnXV7ziFX058JfCfcADHpBkfTnvda979U4Y58UsM6X5ve99r0/DgaNIkPdMTds/0P41Dgw4MFUuB48t/MlRQCIjAL3Dv69+9auz52vSMp70O0T3MK4+8pGP9AASoG7Y3yhqQI7omZe//OWzaCFAW947K8jEEyDRne50p77vif454IAD+r7vPQw/ju08utKVrtSnmXd/s64zRhlf2gPA+JKXvKT76U9/2tchzm3e7TfnUX1E5eKbqD7P0pf0TYjhgB/S0mlPecpT+rS5Xx7Xw7cyn+F55AHoNZQzgFUZnWPGGonwHab1O5GI6s9IUmYG0Stf+cruZS97WR9ZiAelvpbffe97315f0+cArTe+8Y19WhFDwI9ywkb6qaRu2g4ZBzgIX/ziF/vfIpHwf4yUm4M/9l7AIL6M/Sk/StuP5X1UvbZZ8jdVpvCtJq3249SxU0RNanv9UT3IVOnIbjX5U9e73vWuPWjm/OhCq+jvzZK7pvd2lqpFem/n1Bt7Zaodpd1q/IZLXvKSfUGNQ+ydJz/5yb1+Hiu9sWLqOD/2/LxrfBQ0NmayDUqamhawFNCJb8OOMm6K3mGblWAh0CKgGVBNpL6oduOUAAgTXKvS1HYb5q8dRR+NkTKOja+uKTPi2y2KGhzLd6tfW9VeWVavqTIln6l6sUb+Yg+yp9iZr3/96/tJem0IgxDksiqt2mcXyd+qZdkKz201u+GI0KUK7ujYhxxySMUTR15SwBRn2WC5iIQBIwaoyC6OFKeb82Qm3wzVKuT9OpL8Hve4x80UokgzgxuQTP6MXco9jvq+++47mzESBSB6IbOcZpHNKmeW2RIYyhZxnDhsnHORZwaZmrSr1LE9c/TmQI1c1nBCvgg4U0ZHjuUxNa2ZCEsHURltw0hhVDJiGEEPe9jD+n6ZfPX1gw8+uH+OsfWEJzyhdzpF7Xi2lm5605v2S1At995///37xwGBZoEZcAy8V73qVTN9kPxFZ93oRjfKzyPtiG+AQkSXWwqBhJnjBaPELB3Qnz6zLAxZosJAQPhmOSYei66SpwgjdUKMCeDkIlqVb4vydI/hgiyVCPDVXxj5F12vbmU08DCpsuIFI55uTyg+J8EYcvrTn77X7XS9GeGAGABJ+h7h2T777NOPTyKNAngN37Xo913ucpe+HPLMjLMloEAVul9E29Bxkd8d73jHvl3H8gacPfOZz9zpFj7uueee/XVRelOWgu6UyRa+sFnyN1WmsGZqWg6lZejoWc961mzSTNvHBtDGohm3ovwpfzmJ0VfkKP5vVf29WXLX9N6OgA3xWqT3NlP8auyoGr9BmS2tR/T8InulZpzvM5z4j84yFvKfltlyNWljk4hqFkEbMu6IwOE7mRT99a9/3Z+7LzorkWiWrIu8ZxOKho+tknymHGvarcwPP8oVB+U95
yIUx6IU2YaiGIFn6lEbpT18z1b7vaq9sqgeNTIln6l6sUb+TFJqc74+/z3txr8RtXbqU5+6nziNnbioPuW9VfvsMvkr33FUOt+KdsNKwFkt06985Sv3yxQZGYTLfl8UGgEbI8rRmnVGon2C/vM//7PjjEyh3XffvRdmTjOHKcDU8FkAFjKbXkYfAKS8nxD6y0zA8Pl5v3VQ+XFqhs8a4LI/kOcTtmtpaxlmDZh0zT5JouI4XmaMDFIc84BmKYPQXsBZ9o6qSZs82rFxIByokUuRSGRelJWozZAl1xkwLccBEAijR4mSTNqx49S0DA6kr5X7Vrn2nve8pwd/GFKizOgEhhWQZ7gcnKHGqQNWD2mK/gp4J/+SgDE3vvGN+yWigO0hkHTve9+71zOWACXypHw+5/SYCQH7ZpgUYCzao2oe8CI9QC71lj+gMLrufOc7X5+P3+WSCnwE7tC/9rMEnOEX3SNSCk9LAlZmhlUUr/2ApEf2a2Hs0ntDfiePqXwzbgDpELCoXC7I2AEY0aEicLxXWdVlGWgmv7R5li26NkaZtdSGpTGEh/aYZAD7M1Ypr2vGkIBm8iSDJmfoavq9JP3oVre6VYePJknsIWe/lnKbAW2vTdFwmZvoZnwA4Km/NgsB6USiKbd99PTPKcQR8E57xwzfp4yWuMpbfiK35wGP+qCIaOOU9pGncnAkyvqVZZpqBxgjlUE7MjrN+Nq3kJwkWq7MtzyfKn/f+MY3emDSs/a8s49rSJszohH5I4dTZcozU9MGvNb3hzrUEnMAwba1yHO0ivxpEyA/3UEWyephaxGX9DqeolXlj6NvXx6k7ORgHk3Rt3mWHjPZod6WRLMXRc8O93ydmu7yl798J5pHH9L/St2Yd5bHqfq7fMb5VLlrem8751aVuxq9VyN3ysM5BrLwJfgyJhaA2KEaO2qq32AMut3tbtc75d5DpwMRbEdh3B5SzTjPd1IffR34Xo4f9q3Uh+lBOl46pL8to5q0iSJid5RkvNcfjR34aqz70pe+1OuqoU2iDfSv+ENlPlPsqJp2K/MmD9rROAg0MfYvI2NoViMceuihO+l19p5te/DQ2MbOHtqQeQe7i/3FrtBW7B/t8453vKPnVdLlOEWGpV1l7C7fMdVeMeG3GfJXYw/WyB8/HK+NZ/R0iE1OFrR/MIzcmyJ/NX02+TpOlT/tvmycT77kk31l2xPtyHZlz8NvnIempquRZ3nX2A0py5Fx3HTgLDOhKqMj64QcREspGWQipkoCriUcV3rKwG8N97znPa9MOnrOGGfsUKackHnE+BV1pqM4AuecZw8m0RLeX0sM53nEIUIUKwJwIR9EGBKDVaeMs2ZgLPdqKtNnFpehi2rSlvm088YBHKiRS3IKOPEMgCCDumVDFC3HNUswGRMI0HD/+9+/dxgZQ5ycofEzNW3SlQNX/5K1fxypGC90jigzSweHpOxxSgEVJU3VXxQ8GjpsdIg60mveEf5IC0hwnXHDEOSEjRFgyIx0SJ72g+MoZ8l27jmqc2Y+paVzzdoYkEU7GfCiM4D0AdOShzpk4sI1Ex32ixwjQE2IcbttzYENz/Efbw3U82gq30TFMTDVBVj72Mc+ts+S4xDeMBCReiLL3295y1v20TeAJPXgVDuGOD2ZRAHCMjxFipm4MI6U8hBDfGz/kexjFiNRlPCLXvSivGZ25JgnH2NOSJlFRion0m70P6PlIhe5SD+j6Vr463y43xJDznX8B0wkOsy4dvvb377PV9QjA3UK4avxR55jdbGvoDIiaThwkbsyf8DQox/96B3qpoxkX/3UaTi2T7UDyIG0JWlP+1IxUu3Lt4imyl/AUnzVb0V66Neev+1tb9vXDf+BZjUyVZOWA4DYOEOK3ETn1cofh49NFsdBe2o3f5xQy631CW2l7dyfKn/KSrcpI+ee/pkHnE3Vt/Jkt+nfJZFXyzrYYQExpqbjTOifSP3I9qJo4Kn6uyxfzqfKXdN7+/ZtsYrc1ei9GrnThva1LIk8A5eMQSa0UI0dNdVvMEGRfL3DWOWP7h0DzmrGeWMdna9/Gy9sQ4ACEOoTidIX0YrYD8ZjfhobjD34pje9aYcJi6lpo7vkGx/JeUi5UCZVhxOl7nHgM4kx9Kum2lHh7/B5+atf6Ze5hgA+AHe2FNDRiqEpZC9Y9ea7DcFCoAudHNsC/7WncW1IJjnZR+GRtM61iy1E6JHhR5ymyPAqY3dZtpp+u1nyN9UerJW/sQhCPAdsI75PCSxPlb+aPhteT5W/qeO8fPHjEY94RA9W+02m2CvkRrs+/vGPd3lyuhp57jNe+zfVbkj6I+u4EnDGgKNMx4gzEIOdAjOgYLiOK8SW48Ow4ciagdRZMpuZ/BiGwnQ5KDbZg7gTJk7S2BKUPOf4mMc8pvw59xxKDDyzTp7xJXSeYUfwzbRY8riRZDaLEkKZxaTkEQN8SDGO42QN7+c3MC4KcBlvatIm/3Y85nGgRi4BXhx7M/SAakv9gM8ABP2eQxxQK06ZCIEQXcIQ0g99kCO6YGraRF5Q6BRz+o38y/fEUcl7HYEkZhfTx+RVRpLU6K+AQ5YQDCkzMwatkIEHOIVHDK3yXtI4cgRF3iAzWfYWY5zRi8oHECt1bp9w7Z80IlWBlgFl8IeeszeXcySCdUgZ7GOEDe/ntxlQoAGy1E80EwdEhJ19fwBpHO5FNJVvZMiyS3up4ZW2EwFlfzuk7eKwJHKPQ8FpDpFJvBD9FiOyNEJLQImD4h6dio+IniaXcQSSr2PeiSdjxADQ5jGCvT9RxnigHsYfY54lkxwS490ee+wxc+A5JAFPhmBn3um6fIBJGYeBWd4rok5EZFnPPDd2TDpjtImkkoz/HAhjpXaRd+TM+0sCPrhGPnx0AuAC+CWL7AN8Jmv6QknL7AARnAHNzIACkbWPfuFrVM4BaPOiMr1rqvxJa4sEwDNecpJ9ECOGHSPZfVQjUzVpAb4M5EwW9C87/B/9GdK3I1u5tkj+pMlyNvqXE092zA6b4JCf6GHLUVaRP3LEgeMcmrgAUI1Rjb4FcieCDagpulB/5yTqZ5xItuDUdOQkoJlJDH/0hfqPjR3eMUV/j9XTtaly1/TejXogZhW5m6r3auSubE96jD5DlskbF8i6bQ+M+TV2VJlveT70G4DBZJJDq0+ZJDcZNI9qxnlgu3Fpt9126/Wm+rCr2BhIn5AGsfeQ/lWSMvFHnv3sZ8+ip6amNT4YT4wVxhLR+iF9MDZS7LXcczQWCLCInJg0K1dA1NhRte2GR/QO4jcau6cQfyy2xOte97qdHjExZayhy5/2tKf1IIxJapNNQwLKGEPZYcZjEZCWqvJt6UWAaGye8tllMrzq2J13pD2m2CuAqM2Qv9hmy+zB9cifsQKAbGzWDuqbbTTwokb+avqsvGvkb+o4L197pGo/NkG2h+LrsVXVF1irzaamq5Fn759qN0h7ZNP8UIAFJYkypVCHf0KjQwAvBLkHmiEGZsKAKcik6W+u/WM8BzRzzQxOnOMY8km73iOlFGPdeRxFsx1mFjaKLOXg6CEDX2ZC
vRMlUqz/cfi/GL5DJ6RMQ3gJo3LLY9F6/pq05Tva+TGPA7VyaV8GBo/nyCOHAgEy0nc5yOlfnPDXvva1/T49mSUFVFDqqCYtcMa7EQUOmEeMn/JLiTGG+puH/zNDWBphAKOUUZLopmX6q+yjAZ3K9+SrSvQmkp7j7V30G2NnHuWLPQYvM8DqSmeZbU3kFKBqSIzpRPo5BlSii1D4FFCvfL4E8ucBQerw0Ic+tAcnlCegQZnPsvNavgkRFzWIOBQPetCDepnDmzJiCSiDlMskhfGGEa0d8Jwupg9RGTEnYhJIZkNicoWAVwFgo7eBF+GjNCX4GofY9ZB6Mt7Sr5SrBHjVJe0BtIjhDbQLkJsyBFBm5I1Rrkeuga74YdwNADj23PCaOqW8Y8svAQ2IM5UlruRsLApA3QC0ZFIbqj8HDAiMtAmjtiRpltkBHA98FCGpnYEMZJfzQiYQ8Gce1cqfvpIoDM6gKEyOorJqt7yzRqZq0mYZOBkoo+mVJQ6Yug55uUz+GMdxgC29DuBKLwUYUE/Gf638aSN6FI/oCMd5NFXfel5/IDfGEmAcvajfaAd9gK41jkxNpw8idh+ZRkDssf2b8HOq/u4zGvyrlbum97pquavRezVyl6akC4wrdI4/QJH+TyazpUD056r2/Ty/IWWYcsy4MnWcN6mUiGoBDgGELBEsVwUETFZnS+31E/0mdqBIKrxANWkTaWaMpm9CZWRp7Kjcc2TLBaTxuxxf/a6xo2rbTV2VScRxgiK8cxlldZOxeTi5AzzJpBA5MyFLd+J19HPyZ9uSQW0sMpxdKC37IWDZmP07RYZXGbtTLsfa8WIz5K/GHlxV/kyAmUiMzBuDtGuoRv5q++xU+asZ55VbFCs68MADZ9tDmUQ1Ua+fA87QlHQ18izPGrtB+iObVoo4IxTlvi1loa2BRwQoAqATl4rPfXkwIMJ01xDDN4p7+5XtG+AzghKxlevrOXI2KVtlO3RtbTlgj5HNIGWICrUVJquc6yGzN2ZC8EO9zAaECB9KZ8t1xyjveUg9flgOJ50y2ixYXcaoJu3Y8+3aMYsDtXLJSDG7IkSZg4UAD+XeRRwchpWB1OAYuWYEGMA5OAAJVJOWzAtx158NXKIilIeiBmK4r3+NRVZxspTDbJyZFGAAfbT33ntX6a/URdlLp8hvlL4cvoreYMjYN4qjuoji+BuEhzo0IE0MrOQTnZbfjt5js3/li6FV3i/PS7BsTP9pQx9bMBB7l+V/Q51d5jfvvJZv8gGAWbZv9jngl3YsjRRgJEBC9GNAHTOrjFMfPsADkzAc7k9/+tO9/uR0GwdCQCDLC/HfDG8+HABQUn+zbomc8VtdtEfaOPk4ukY/kwN5WbIrYtDspH6zbS16JelKsNe1gCCe1S6p59iYIX2uazdlNxONGNRjbdnfHPmXcpDRYRQl/mVst7dWSfZ3y+x3rgdUxR8Aj/FIP3MMya90dqbYAUC6AHWcLHkzkuUbGR5zsvLOVeQPSEeO6IoY5HjAaQrVyFRNWpMQ7C7R9+SQY00eMkOd95d8dG2Z/JWgG5kZ6pnoUOlq5I8sihB1xCPO9zySJjLlfcMykF1yF3sxR4Z8ScaScll5AMZl6YwdKMB88qTXyGLplNfo7+RTHleRu6b3tjuh0W8lP53neq3eq5W7vHdsSwa2DBsmeiFjQcqWZx1jE5SyUN5f5DeU6ZadA1TmUXSk++XYAAS0BD6TL8Ax4E1JJkHoXGBaxgegsz+gsnFLxBPwpibtS1/60n55Pb3NxqDL6AW/o4cyQVGWx2QJnnLmRY7qz8Zvk3t4UGNH1bQbu9WkGP5lkqEs17xzY2EmTax8GlJsYaDrcC9wYFips+m8LJvDK9F6VnHQkWc9PCKQ7hzSFBleZewu31MzXuS5jZa/GntwVfmz357xDc+B9uRPPzBZqQ/UyF9Nn62Rv1Jmlo3zdAO9pS9kAj7tI+jBH1LfKekCsk2RZ/lNtRtSpiP7uBJwRlGalV9EcaCl2bbmGPgboyEYNjT6PBMUeJERPJb3omscZcRZytpyDjvHypIx7xJNV4YLL8pv7F6WLLnHAXnqU5+6wwAlAqIMQS7zSFjyWCcS1Sc6h4BRTJZDzYtaqUlbvr+dH3M5sIpcWi7DYY6DMQw9F0GZmfwhZzm/iQywvwTQrSYt0JuCF1bOeNJ39AtgkUEMsDGmV7LsAIDi3CwnEIrRl1lSZaW7/I0R/cWg837Gici5RIsmfQxQfBXBBPhBnDlLylD0JYPKNc6awVjZkfKUyw77i4f/G84mxvAr0zDsYniazYmxm/zLtOqApPdXEv4waBlo7okaSiRMmW7Kueen8q3Mz6BtNhwZG4YzsPjmb0gMTAAHQxcPkHaPHAzTi/YiE5EF5TU+WJZDTvHONYYtAMQMcumAlPmljIAPy+aUgWOBYlSRn3ltLJ0yx4geM4SliUNG3rPfmHHNZJA/lDZnVJM1hl0ZKan9AyTY8HpIAbzSfsP72jTlcM9YalYUYGjMmkJj/XXMDrDc0zLFefxY9K6Uf0q/LfOhL4yrMRgZ5iXVyFRNWu/gyACDTPrhqz/6BnjNUEdjES6L5C97+ng2y7KcD4n8ZE/aefxOu2s/5aRLyQN9E12X9+lXronojYPlnXStvzGKvZijfdcW0dR06RNjEwD4m3GtVn+PlW1VuWt6b3xiCo9LuavRe+XYOUXu0p506pAypkZWVrGj5LnMbxi+d9HvlCnyXaadN84bC9gmlj8iY5uxs6RDi0mm8rqobH2ZfcBRN7bUpNWfRa8Z300I0BH0h0kwbQxQGpsEjf/DljRu77nnnr1+ps9EgaX+U+yoqe0mT+MPAkLRyUMywcGWUwb9PkTv0aGujY2xAbyGfPd8VhokL0c6lU8YW6K8N+98igyvMnaX76uxV/LcRstfjT24qvyRGX/AM0CobRz0Aatv9IEa+ZvaZ/kVNfKXcRefl43z0Ytj8pd2csxWE8vS1chzjd2QFU1lmY6M85WAsykFK51HwNTYLIF8hobKmEEWx2UMRJpSlmEayiDA1FCpc3wyc2TgWBU4E32TvcdEPYzNRohy4DBnoC3LmWtDB4LAEyxGO4X85Cc/edRQlldN2vLd7fyYzYFV5NKMQmQW92x0ng3c/Sav7gOGhgN2qXQZRzVp5Y0YJv70a+8w+Mony7spWNc5f/RJlt1tf7rrI04TGcoAYiyGpugv+s1AxnAZAjEBzui6DCDyztco8x5HZbTpNOPQgB8QgmE2nPnJc0O9qN5Dci3XARAxNFO2Mn0iuby7JDPMojm0kXtmoctImzLt1POpfCvzy95GrgF4yJ69FkIMFnVgYA5n9CNr+IyMN8BARs8QcBjqXuml2X///Wey5KudjN+UKUaPMUbkgfoNI23M0gPOGFOAhYAG+sW8D8B4N6MubaLc6lDWL9ekNVGT/qh+ZGpIjJ4YPvblCUUu8WoMgAyAFXnKczkOr1tSG2BYHfVFX70EuAIix2iKHcDBzLJwbYXP5JEszNubaviuVeTPxtmpo3LusccenZnqkGtTZaomrfzJmih
Ff3jKBtBOaccA5DXyF7knW/bRm0fsouhu5V4mf9G90o7JH/l0HZh80EEHzV47Rd+qp36u/wxJ28RJnZqODp2XXxmZU6u/h2XL71XkLjpGHk3v/T2s7HV49Hmt3stWETKbInd5aYC6/HaMLKY/rWJHTfEbyncuO19lnAc2J7BA/iYmTEJmbHNNXdlRYyBO+pJ+X5tWeuODaDH9UVnyDtdQvpRtjBfxMowmZfuxtdwXlcsupNu02RQ7amq7AR4jdyKQs31BX8jD/2UZsEjW0k6zBywSCZ/x//BH+kPardQ9uR9Qo/xtb1n1o/fwyzgob+PAMHo3z02R4VXG7uTvGFthyniR5zZa/mrsQWWYKn/6gElEdgeZKYltYxJ02+ETQDXyl7ZfZpvXyl/00pRxPh/gGJM/9cwYmzyXpUudxtIN5VmfRlPshnKLlv6hI+nfpgFnGMqop1icZ6Yy9TI7zxEbKj0G55CEnqIxR2aYdspvRgsFo/EJCES4pAgso2sVgvxHiVq6YkAeIw526byUaThWiIMREpVjrTQyS2sTaZ1gjGrSjj3frh1zOVArl2YGgbmIc8W5AGDZQJyDh2wiD0jWpx7ykIfMHBv3Epbu3Ltr0nqG0aAMZirLASxRbPoIY4IeEfmi7ytD2b9jAMmPc1irvxhr9IYZxjICS7kC0tMzjAID85A8yxlSNn07kRQc40T/MABLYtwyag9bWx5RkgHHe0pDN1Fu8qdHE4GibAyADIDySdRTBjvX6Go8kzeQCXgUJ9r9VWkq35I/mVJm9WAcmtUme4BQ9aLT99tvv/5oX7dhNFBm3WLQ2UCcAcL4UaeSIpfqi4A0DN58jjv7oLkXfZ8tDCzjBMBqv8c97nGSzCjjizoASDl76sHAG7Yx45/hi9eiOhnZ5Fk7aPsyvd/IfW2HJ+RvSAA9z5vckm9ZD2lj7KUuw+fxWd9heONRCUIrr7xDjKKAZnRBaQckck3aoUE1xQ5I+D9naPjVzzixZVlSpvJYK38ijrKPa5Zs+u167IgamapJy6Ak/2SArstsvvrY4BxFb9TIX/QRHcgeK3WB/mSfI+2dyLCp8qdd2FpDMrmgfeRJ9kwsrqJv5VECWXkPMFYf0/e175R0+KZf0JOl/pYnXRqia8OvXHOcp7/LNOV5rdw1vbc5eq9W7tKGpe7KtTh86Ze1dtRUvyHvm3KsHeflaQsYuoAtpf8DyFyzhQWi60yMIqtdyvEDUBPdq5/UpJWfsVxksm0+gJoBzZQh0VQBOwFpymfSYjjBE1AoARw1dtTUdpP3mC5Qj8iHsZ0OpDNLik4ZW6YpnQg7E7nqjaelHs1XF5MfuzqgmTYq7b6xCYs8lzLmt2Mpw6uO3WV+NfZKnttI+au1B2vkD6jITjFelBM/6sGWQ5morZG/qX22Vv4iq1PG+aRll+nPpU3ADhbpxi62BQlals5+tVPlucZu6F++C/5tn3bfpBdnZkDnLQ14DixDz1KVoUIhcNk0UbE4bAkZHu4/sZ5ix8FmYFJMIQ5M0NYYwoxvRrq/dIikHx7VLU4U5T8PNPOcyAPOk8gDDneIA+Qa4DF7IPkdJch485WVIe/yfE1az1Di6haHIPm04zGTAzVyiUMGOn3EIOlDAVlmKeIxkRAByA3w2TzXswZnX0BD9AWZr0nrOTqCw56wZdcYHNnfCXiNOGj6m8E0n4zub6z9A6i57n4iu2r0V/ZzY9QoTyh14ygxyBh4BpHhXz7zrk+758uXKLygV+imkMFMHeyTEWAt9xxj2DovZx05vwhQpEyIUxZS9oD2aUf3LJ3QxoAextlGgGbynco3aclSQsw5xWabyJxykUGk/RgpyIbfMaD93m233WZOcPRqJiY4zVnKKK3fiRhO9CHjAK8BsnEOpN199917fc241XcQkAuJqgnA4zd5z+QHA5e822wVydPeZyVZdqTeAKoYYZEJ/SiAk2P6VTYaBiYN5czvtB2j3e8YanlvlvAYZ+ZRnEOGZsogLeevJP0wlPL77ZlsPJ3fSec4xQ7IuF06FZ6Vb4DwsmzuDalG/rRdvuIKtPWhgBiYrruPamSqJi25JqPb1maxy49TkA16h+znA0E18qed8ZD+y6b34RPQlvPLpkm/mip/lk2NyV/kk1PsfvRzjb4NuEUvlvpWn2X/qAv7bWq6gL/6avnVUrZo5AxPavV3+Dg81shd03vblwlOlbtavVcjd2lHgG1sG9dMNgUMyYbsNXZUjd+QMkw51o7zoniBAXSJVTK25HHumnuIfeQayljW/1j7x07Q94Di+nlNWnmw4wBktkgIyc8XCxGQLu2VcYxfFH0vjcmtRFtn7I7sTLGjprYb4GJMv7kW/tjjym/jfIgvnPLOm5wC5OOhupe2KhshkyTJL+O13+UyVnqRrYLkM6RlMrzq2D18T3i/zF7x3EbLn3bIuDXFHqyRPxOoyNhYTvTZUiMRm2nf8GCK/E3ts7XyVzPOG5vZ1uRGhH1JsTP1v6npauS51m4oy3ZknW9axJkKUBqWazGCbbbPadMZCSfSkDFs+guH/6MYCB7B2LZmJGo8s9zvfe97y2TrOmf0Ws4hesGeZsrGyOYweZ9Q38zmGxBFwSAI69DRKAtSOqGWvGTZS5mG0yEqgmAyxHQ8Dkj2AFEGRIDwAMmXg4jMFFDGQ6KcoeA1aeUBqLNpted9FXG9JDzZzK1Q4RiIZZ42Kc+siYG5dKiko/ANyEAEQExJBkWy4Z5NwcfAQ+1oH6IxUkfvAxpwHMvZMunxgSOio5efsy7zMihzlLRfuQRXmZX90LX9HzIDtp78yncy0vQLRgXHwABJjnyMw/s2kmrkUlsmoiRffWR4ADj0c4bUox71qN6JoWTxxzNmLfzWv4Eb2jFf2ePwTE2r3paZ4Y1+RM9oW31If8Ej/QgZROkQil9ay5w5vga6ACG+IMhgQTX6S5mB8fSJDWnpNgZNaYT2mVb+038YP4wtXw0lr2RY5BTjS1sFZCuzVn+RF2Y8TQRw/tT/wLUv5ITwhV47//nP3+tpfNMe8qXjMthzJqOzRVb4AMMYAbJiTIzdH7s2lW/6NFlCosUCfJnpA3qSQXIF7BPZBHBSZzPidHsiQjzvnQE8bCpMFsigvdvkrW/hmXFAm9rcGFm2T5/LF3gICKUL4jD58Iv2QBxx983g4jEQlywyVr1LW+RDMdKZZdMOdCfnPzKcdiv1jOV0JpPI2j777NPXT3mNs/PkoS/UhH/qnMmhRW1J94qkU/+UoexHeRU9CpRRDzoT38lvZDLpAJL4UNIyO0BfMBYaD+gVQKR8y0m60qAt8875VPmTnkxFV6U9jF9kwXVjjnG5RqZq0pJFddb3GbPGFqTu2s1ej7FNauRPe5ApIBl51b+1hXzjPNmHJ5ONmyV/NfpW/+cIKZ9+G9sy7Y0P+pu/KenoQgAhPfKIRzyil1NjwtikRM/0df6bKndN771wxumtIHcpjP5GH+iPxku6D9HjxlxUY0fV+A195hX/po7zdGl8EL6PuiEBC4BB9wDydDhdo195hh2l/wH1Y0cdcsghMzuqJi
w7rzgu0xy6n6lUx7j3n8ueMd79gB4f368n1avbleHps8TDoAcqw84J2sEGQMRn+QnQc96EEdkOU6mQgF5H3Na16z3TLgXM8RqIDMcJdBs2zeBz/4wR1gY3x+4Qtf6JZS0r/oLW95y6qOJs+PecxjOiBIlhoQvGZ8ykw2PgQXALLode96m9vcZnKGM5yha0fOdw3o/VuGTHmErD4kCJ437seWBUrGZtFRWeIqqAZe0F9A1cMOO6wDRS3pRq997WtXg2625ZGPfGQ30SHjaJFl/VlyZqJKfyJtAOqx65bVDwFn2p+JiO6m3r/IZ+905w84t4h9W1RGZFGbxFsWLUvuxo5f7zVW7rTVODOOnv70p6+OM3283377dXqEHyMzChi/0eRung1ahtzhL1m/853v3Ok336cRYCx67GlPe9qqP2Oi7bGPfWw3uQxIAZyth9/cZK9+K5NFdd482Rvyf/hUJkQRkPozn/lM99kWBHw+S6xNeE2zlV3hKf9q/IFlycmUprXTPQ60jLMeQ9rXxoHjOwcud7nLdSz4zW9+swqaOSHLwTmZCJxeTr8ZFsG/2fUjjzxylXX2hLjrXe/aOR2WfAwFAWbnBVFIpgPn2+zR7W53uy6wcl7GFJBOxkKAH4GcNnKgOYyyBgBYgjJkht3+A5bRIcuFZLO96lWv6mb2zdY4BxzkzMpmk5Uj0CjT9WVYcLDKJSvqAyACzrR/FnHu8UEbA7yV5c0wCZDMGsvuM5O5ZcuWrh0+z3sPwJV+EKwDyNwjYHr729/egUyyOsyCIe2QXu66pVeZrU+AV7ZrzGfvjwR7JQlGvTfeA0U5lMlqsgdEHA33uC4LAnAlxX0o20sfZUZY/43hy5j+veUtb7ldoIg3Zuxf/vKXd69Dpm9961t3M4v6WUaIve3swReSneEcHvcdJVkrZEffziLLjbRXZlBfztR/s5vdrOsr/aec/XxkxqUdQ++B97Jepsm/9pA7Y4tsqrPfzqF6S/7036nJw6QD1GvkIf1jb8CS6DPZKWX2j/5X3lgpM6LK+/KZc6+cTKaS6Ed9DeCxrBDd4AY36HQ0/d2f2ADQ0cd00jQaGp/Kyq5kJ+hVMhZiB+h8wagg3riaRsuSqch6f1nyUDvGlt20aVN3uwmKvk4VZAvSonPxQ6CFB8lUcbPxZXKAXu9nZ4/RacpkouQjH/lI1578811gT66iH3KNDTTekbannbk+7TjPvuk/NkwWMZkk5yaPvDtaVEZk5OEduQzfh9pIdoFG9DBZw182yF6Os2is3OlHSwkR36GcnBNc4zebBvypGb/qGyt33hHxjcpxBrCW/QwgZz8BZ4vI3Rg7uKjczbNB3YsN/Jsnd/xDGZaygthBeu9tb3vbqn+oSjoo2aEmCewFOUTeHy/5TXRqScauzN6MuRq/uayn/Nxk7xhu7EzZm+b/mHwi72SCrxsiG3xhE6WXutSljuUPsrn3ute9VjPV2HA6o7R/Nf7AWDlJzJR2tuP6cKABZ+vDx1ZL48BxhgNZu98PALygQIhTH4fNdzMs7uFIRFFbEsChEAjIRBoiS0M41BzaAA8czDzfPRxef2Zg1Z3Ze9cYK86P4G2vvfbqgDvLU2RQlHXIbkCMl3uUNTOEfBegclTtHyR7KnsRDGWIeR6wCs3KuBOgJPPDUtcSkHPvPvvs0wF/PoeAZxw+jv6Tn/zkme+BZ94Dv1Dew6yVwOfggw/uAhYBBsJnPMn38IQDcN/73rd7f8CovuoHXV0FvX/Jrugv6dQO9QA1BTX6jPOqTTe96U27bAfOgmDNMgnOggBqCDRzzWwi0q5kY61H/8q4i8OLJ3ijnYhTIjskjox3IvObV4BOgYgsTOfI9hvf+MbunvIfUCJL8AAp08hstnGkrv6mxOTRUsuyDdqJr9qBB0C+ofdQX/rZs9PX+gzvyT5wGnHeZKH0aaje8Kdf1vcmD/XyQKYFefQXmRF4kweAJipBlextBuT3gyfki2wAXOmpcsnhU5/61O7+/j+6iy5F9DYKsG6SgC6ReWSscuxlpM3K9p02PtUL3PVXEpmM3Bk7ZdBQlsvnZcgUHRtA0hiypM14kFUsKCrHa03Z6GHAZJ8yhulgPGA3gJJ9EnQBzRDehwSQY2xWtjww/rf0wE46NnpB/0aXeoaAThsFfjKIXZ9H8+wbEIGclkS26XNAkiy4RWTERAIdCIQzeQBEGyJybQ+58N67u8+PHdH7NuueRmPlTsYJG0ae+DIykpDxS7cj2e6oZvzWyF3kpb8puWfm3WPnauVurB1cRO7G2CDv0Kd5cgeg5kOhyDtdaWLsoIMO6my2a3wUY8KEq4nZacDZtIw3dWTyIf5ifM4xfrP7h6jJ3lau7GzZm+b/SBxA5IfvXJIJATo8fVheMwlL90cm1f/Qhz608+Gii2v8gTxjjP9dtqN9Xh8ONOBsffjYamkcOM5wIM4Y49CnBAYpA2ThqHBoGAfGQxYDZ42ROPzww7ebCU19nIyACzKkQhzq/fffv1tSxPmVafb+97+/uyw7iVOvXk6rjIo4rZtWQBqzRAIfYIC/Aw44oHMebUSdwFIWFqcNkMUpYrSkZAOPGCPp+4CRPgGkzCInQBLAlhl2/fIcaoYSD8sgSDl1ceIRx82Mp4DW+8mU0z48nfUeNi3VFhlY3oMTCNSxrNH7+IU8SwnMzgrOBcLPf/7zu2f65xmodBjVx4kU9Fhe1Qf7uhu2/TPrhuxf1KfsJSfgQ2TAEiSBowxFy8cSrGm/60MEkBXs6+8SoJrFl7H9+6xnPat7V3wECJEV5L3wUNAhGLG8Uh8KhuwNBUATbJO/Pjlvw+CMDQ60YHwaBbCShdPnY7L2ZAnItANgcNSl6BsDwBbyNe09PNM7eY9S/p3ffSXLTV/IuBEspx2uhWbVmzLlsclDvTwI/OlAkwKCfGMhoA4guQTu6TdETySzwXd6VpAhi/RTn/qUU1MpS1rIe5brZfkYnWHJMJlCMilMELzrXe/aLku0rHza+CzL+KyNxj0AV/0Cjlm6M/cvQ6YELKFS7gHVxhTdLJMa1ZRlX2QYBaTOMxzL/mJj+pMEdGGyspTXj2WZLCmaZ7Nim/oBnTqR8/SB/kiwhgfsrIkYOpZ+GUOz7BtAMhlsJk7oa3JrMsJ7urcP7o6REfcKTNkDGdyxL/320m30uncV6Fo6yyZ5V4AWHW3sDfk36hord3SzCQ/9py30/1Er2fXsB6L/k91WM35r5C4TIfGlugdv+0eeEX4M0Sy5q7GDi8jdGBs01OZZcqd/+U2lf0i3yngnf+7N2H7Ri140VP3oc3yqZAXGxsfuD8lV32+e9qAme3U+2LJkb5r/Qxej+LhlP2YiKDa0vOYcXQBI5/s/4AEP6GTSRFL2i67xB8bKSdmG9nn9OLA1+lm/+lpNjQONA7s4BziciDPdpzj0KeO6zBfBWBxWzi1iKIZm31wz84tkHgwt4+wu9v4BlpCZ0yxDEmzaN8bztSnZYL1bu6+MLKce+YWxBA+yDYATSBCRbLruxLZ/0q9jpJ2KI1SWyWdOWmYf8wuQueYIZHQ
/kJHTxQFndMvMtOyvUN5Xfs4eKEcccUQHmrkGNATC4YX2TiPAS4w7wMYyBsFFll1yCAU506js+/4yM/fEqQgIoI+1E3l3TgM+c3CBftMAOst8kOwJdcyjtfRv6rZfSdpt+WicYEBDHOQSbMx9jsDKOM++D/HGeSR4S0CTbMutV7b+1wZ8BBpnmYjldG94wxu6AvqPnNWSwJHs4GeCiNo6+uWbPAwv85knD/hWghyRO/w1kxzZ8z3j3ZixRITOAz4B8MmCoB34MI1MaiSQBobRmyjySh61hf6wrIvuV68s0SF96N6x41MGLcA5Ooe+mzeelyVTJShhXBkDwI8sswOQ571qykZ3mpDA6xB9V4Ia/THrPfVLdIH+LW1LjU5LX9L/Q4TvyGQEYqPYS880eeE4hubZN/pRX7MtwDjtIct0vTbIqOpntRz9aU8AAEAASURBVM2TEXzKkn/BJ5s9jdRlLJFnGeR8Fs+1n53sIO0h70NUK3cmNIwZBEwB2OlL4yvL/l2rGb81cmeiEQFts6WB794v8pAg2/nQPLmrsYN5zli5W9QGzZM7+hbJZIx/yGban5EslH5P+LDIEZ/pW4T/6YPIzli/uf/s3O/8kO/Q96ua7B1jv5Yte+mr6M4h+xXgTNno89ynr5J9Si+y34ivncxF/T/GH6iVk7ShHdePA8NTEetXf6upcaBxYBfjQIxQAp2y+TEIpYLn6JtVNnsCEEEynMygDBFHjqOH+tlYQ+Wd05YYLQ5+fxkIx5hBkTE1jcoABjDVr0O9ngNcS4Za6hJscrwBUoBBgaCldE94whOOlVFn5lM9+DiUBSKTJNkkglLP026OXfhbBtFpQ46c8NTfT9W2BGXWMhR1MNw2HeeIylRJXwISOQQCx/RPnlkeU9650oinTM5FjpwHEgJIzQgniAAqypQbIn2VIHPeBt65fy39mzoS0Gl7ltvkWtqjj/wlEM11gSEZcp/39Kdfn/nMZ6bI6tGvFiJL7YacZHUhY8V7ybbAt7TPNTJSBtnOzSIyLxsUAQv67Z9176xrTR4WkwfL2elL/UC/GMuWdAGrBL5kJ5v7c7oF1MZR9uYCKFtiKZvTmDMZUWZmps/sNSkrDdnHrNybJUE1uX32s5+9mvkIsJWxql7L4/p7NNaMT20UwHofe07RnUAQoFUy39LWHJclU5by4zcgx76aIctk6HNjZPPKUmh7YdWUNUFkaa0+0neCa7o0S2rynP54pWdMWtAnnouv+lHmGptao9MChuZZ/SObgTzT53ve857dkR4OcNi/Z+j7PPsW/V4ue1UPfjz+8Y8fqrKT41kyko3c6UuTXrMotosN7we4837JdRG5A2DbZ1TWWSa8TLqU/VEzfmvkjm0kbwBbmW7JGPTduxjf5TuFb/PkLnZmjB0s3zP1l8dS7tZig+bJXTIQAf8l0ZePe9zjylMLfwYo5lc2ZaSXS67xCuV9y4fErxvqi5Qrr8WHyjXHnMtznGuyt3UCCC+GKH2BZ2uRvdQ9bZLXdRMCobKPnOvHAXSTSS/jdNOmTZ3+HesPLCInaVc7rg8HGnC2PnxstTQOHGc4QKFbwpNlPOWLxTnpGxBLMsqljLKYplE27hesTQPX+vcGkHMeiOBviJI+P3QtWWCu2ctnGpXlUiYz3AAvG9z7tUxGmdOaTKSUTbaXgNE7DhEH1xKeOENDZaadM6OO+kHBtPL982bgp+1jJmAGnHm37LvUv987cQy0Hb+ThZhymYEmRyWZ+bUhsLrdnxm4skw+c5KRWbxZe8mlvGPZb7X9m3oS+Hi3colVrufoWf1sSoEfAl5os1lpafkAt3IWWhYJ8ACVv5rYndj2jxNmeZvAOc5feX2RzwmUAacyYfwhzhsC4BqbwIxy9rS7OONfk4etslcjD4LNyABQKnuOAWzIjqWb+kffGEd0ib8+yUIAPstaSn+mjKBZdmfGBcCsXBavHB0iqAPCkYsQ5zz1lro312vGp/b7A854v/xapwmIacDZsmTKGM04zbvkSK8bs7F7NWXVAeyWbSbzxfj1Rzf6dWFAISr1QHdi5V8ynwFvls/pS0tlUfrO53k6DeCCptmUBO/AO+2kl+hh/W7co2Q/sPPOkcW+XM+zb9mGIPsBdRXP+TdLRtyaPcJMvqStyYQk584BQ42RAHdlxuacx69eXlTujKvsxymrLX2aimvGb43caS/Ae/fdd+9khb5wzob4Mq+uf/3rT50gSRuH5K7GDmaPrzFytxYbNEvuyHKenx+JCu/X62hCI/vWAnBf+MIXbsdbMlzrN5dta7K3VedtRNlLP8VGxmfKecesSNGP/koiL30iL+phX+nmGn9gEf+7//z2fXEONOBscd61OxsHjpMc4HBS5jEE5UsGOOvPnAsWyvKcyP4+JqnHkgYkWJs3W5l7SnDG0rY+cJdyAbjyvTwyVIjRGdqjKmXjUFr6JEuiP3MOVPAcBt6+YiVwZrY7mXFDS/A8w/5VWc6KjwJKQSpH115rCdzSnv4xwVcCof51QEvfcJdlXNePAo44ArlegnFxRHOtPOK/63jQB7biVPT7QkZiQCD32jPsda97XVlt91m7Nq3MwqGhpa7dhYF/tf07UMWqPOKL/fWmkaBQO4EVskqyXCPlLRXJzLSxUWa3BHTA6yEwRB1ZzuqztghejRdLwvzs+SKUsSuwlUXUJ4BsQNny10/75Ya+N3mokweAl7Eg+ymgWfhKH8SpBszLoJBdaKyZqS5nnN2TMUseQ3SDDeUThMhMGRpLdAn5HdKnstuAOH09M2Z8CmRlrXiXPoBBnwKlp01+5B2WIVP0Dr3uvaNH87z+95qy6qBzs/8i+8lO6Bs8RPpaGYAacIftw5+SjlrZI0t5OhQPa3RaNojXbn1UyknOeZYgLjrI+SFdQFc4bzlrueRwjH3znuRV+/tU2qaxMiJ7PWRfvT55V201pujTLGmLHe6Xn/d9EbnLnm7qFgDT+ex5qGb81soduX3BC16wao/wi5wloz4ZzTVyF79sjB3kT6HI2Cy5W9QGzZO76EDtIFelv+hcKXe+19Juu+22uv2FzOBs7VHWs4jfXN7vc5O9k1f5YDtC9so+ik8bH7e8FlubNpXXhnQRnYDobXtr1vgDi8hJ2Z72eW0caMDZ2vjX7m4cOM5xwIxnGUSXL5ggIOCSawKvOGnOy1wB/vjJb4FEn5IVJvtgLHEOOWScZJ8zu577zayage6DXLnumABVHcqVgRKjZV8vTn+AIEt3nPfcPsCRYLLvoCVTwPlpWTucamR5UP/XFBNsaOM0SgYAR1j58j3MvAsiBFx+mXOIOIEMtXe11LQE2Sw5CU3LzHCdMy6zTGCf/X2cJwsB/spsEnsHyTZDWbKZn6Mvy7kuqMR3VAPg1PZv94DeP8CELC9OTQmIKibgJueCCUveOPN+TQ3/+kt2y/5TvqQABuWvJpbX8TVZPsZPKetZQqO8/q8hS1bSN+V9skwEPWRWW8tAtSw363OThzp5iHM9FGzic/SLANZYkKXlmA2Gy75IllAyP8ieX+wSpJJN4HR/jOV+8k7Wys
ymXMs5ZUoaMz4BvwBauiH78qWOBAxlsJtr5XEZMkWnsz8AK2BDScYzClhVUxZP2Dv99eIXv7jL+End2ZszGVi3uMUtOmCFfXj605+eYt0x2br6DQBUo9M8m1yRKcBkuSQ3+7a5LvhjewRffRL8sSlsAz3Q3wpgjH3zXurgB/QJ6O8dZRvbC22MjOBBbF5Zn3qAVHjFZoe/5NXS5GRwlPfI/NPPfUCwLFMrd/qdXtUOE2DsBztB35oYqx2/NXLH3suC8j5+aKgEYmNvY2dq5K7GDtbI3aI2aJ7ckevIPh+mtJkmdP3wif6x9L02I638pU6TGNMmRGv95lLm8rnJ3n93k0NjfbAdIXvpG8fEPcZ73/fmC6OAa92Xbf8s4S/HJhvIV0Z0V3y5Mf6Ae2rlxD2N1o8D06Oz9XtGq6lxoHFgF+KAZXycDLMqMQaaz/l2DpBU7jNy75WfhKfwBUJmp7MM0NKSZLDk9RmLgApx6HJt3jEBvZT5GB33cJIFJ5yrcrazXx8HXLDAkc1GwynD0QXu2JcqgFcAD89Lm5XnqCa7DoBSUoLNOPHltXyOkewHLoC7PCdlck955NRyGLyHLK6SNm/e3H1N28tr+RxwUXCe8q4JRAQBCK9n8TJLbPVvlsy4L/cz7AHe1JtfGhNYvP71r18NhJx3vaRN27LNvOOsNpT3+Fzbv/37fQ+YyynKD1iknGUm3pdjTtYFAcaJfgBGlmRfKedd7wefAY4TFJf3+VyCWyW4oL/KH20IuNK/f9p3+7H4NbH+X2TFu7sW53BaPUPnmzzUyQPdF9mxf1NJJgEy/gH2ykUnydYt+92eO5Gn6GTZLwHN/CrsNNDMMwWB6qfPrn3ta682A6ABXEd9+R0zPgXyiD5VV4h+lKWLUibX+sdlyFRkG9hYLm31Pe+bTKGasrJN1AcULzdpB9IImvAYsIFiM/A8y8+cl5UA3ED0J91Xq9Oi22VmRU4co+czAWTLgb4e8D0/1MPG+N7fL2qMfctEiuWVpW3AX/4DvUgm0//zZESbh9qavTwBJq7bCgCl/+jwZHY777u+QNN0r2s1csceZAktMPDwww/vbDN/iF+EasdvjdzpWwAhENL7hexlidd8jPhjNXJXYwc9c6zcLWqDxshdJjxtGRL9qW0mEsmc8VQLmvEtySciF9NAM9dr/Wb39KnJ3j+rfDD8W7bslX0E/KKbUXxdn+m5JBVkvDkfon/L8cnvJZN8XCsJavwBddbISdrQjuvHgZZxtn68bDU1DhwnOECZc1Y5DPZCyb4OyWzgdCfTioOe7BiACCeW4ZBVZfbaks2DDjqom8HGnBgX5frLPecxz2bS++yzTxdg2GOMowRkyK/JcYbjtA/VxXGyCTeQTIbAfvvt14E72h+wwp5TWVrEEbdhpwDH5rKexxnNTLZfyjFzX1LStQMaldfyGSglcwh/H/SgB3VZPr6XYGAZbOa+8ugXOBlugdm+++7bzXJZVsph56gnqCjvyWdBi8CIsdd/Zq05lNrAEdc3Q5uM535HdeCTAH3PPffsAhEBvHZ7frmUYfeV/VdSb847PuIRj+jOAzEFPiHvgeKg5Py8Y23/DtWn3yxRA8L6xTmBrXaQfbOE3s3Gz8hnGUCcIn25//77dzIigI6TZImc7I0QZykZN+RniGQOCXg4/xws2RaehS9lQEAOZ8nZUN3LOtfkoV4eLAUG+AvObPBPL9Bl0SF+OCU60ibQxhHZsWk/XZSMG32K/+SE3CWQJ2t77LHHYJfLRgLkAE3pMPJOF5gocU62EBDf7HkAuVQ0ZnzSswAy7ZX9RqaBwO7VLtlUs/bBzDutt46hN2Xi0Ed0r3ZpS9rFrtnnEdWU1U5BFT1hMiO/7kzHel9LtwOIWNZt3LJBAHdLuulfoJt2GesABlSr02xBAKBiz8hUngOUw/+AS13lC/yLbM7SO+QFwKsNeBw7HZuGD953PWRk6BXYNuPBRJi9RNk3Ng34Etk7amVJ7DQaq8v0VbmvWcaJDEsTJ/wKY4pPMXb8alON3BnHlt2yC+ypfuGvBEwnR/w5VCN36hlrB9W9EeTOL3fLLONHyWwkd2wxnw1N20+0uzjlX/kLufrSX5/I8iGHHNLxeazf3K8j35vsTToZ3kiyl77JUfxDb7Pb4hH+Id3NXtJtAfJS3pH+TczCxhujKD/ORc/X+ANj5aR7SPu37hw44brX2CpsHGgc2OU5AAQLCCUYCGhGYWevHM4Z0AA5X+7zZOZVACBwKrNkEnSVywu7Cnr/3Ity9JmBkmrvCCAS3HHkOcOCzv6yR/egsg7BKMeWM8mYAfI4+MoweICQkOwoy5wEHIwi4wisUJbDWu79knsCmLh3GgGN7Gug3Rw7Bth9nK449AEYyzrK97AUwabTQBntxws80VZt7gc25b3qtJwo/cXR3LSS5SUQkNlis2HBxzziLHIa8UZqvcCIAyDwzGaoshTT58C87D+j/wUICGCXpUS+B5hMHc7NovLdavo39+WYZwg27EnmvHfKsgG8NuNM1kJm/vSZ9ybr+k1fKut9+xkbycBQd7+PUqcjWSaj+tTz9Q8+y2pItkR+vCDtz7Gsx+dp51Nu2vWczzHlpx2bPGydCR4rDxxwf4J6Y1A2DH1GlgThJfhtRpreMr7pDWPGOInDbVIBAdJdD/k89FcCsOTdr37pZ/K+aUXW3GP80bf9/h8zPsnuoYce2gFT6gIiaLPP5Pfggw/eDlBOe/vH9ZYpyzBt6p3lNN5Fu9CWlZn/ZzzjGavvW1PW/YcddlgHnnlHet0fEgQCiUryS5qZdKG/6Q36FwBnEqHUMTU6jV51PzmJfQtoxu6Ql7XQGPumfjZEP+MF/gY0k6Ubu7leMjL0PsaDpfZkl87lv2iLTHByWU5mDN0/Ru74NXS+Z/B3QvrVe6LNK/4RP2ns+HVPjdzhoR8XAbCzFXwUz6NT2ArPLalG7mrs4Frlrq9jyjb7PEbu+C6WXwO/6bdNK3oMaEbm6eRpe5bm2TmWz85znSM/Q39kIDTGb07Zaccme5Nu4mCsD7Ys2Ys85Jj+krlp0olcxT/km7EpxswQ0UXx5YBmxicwPZmd7qnxB5QfIyfKNVp/DpxgJZuki1A5SZmpXP/HtBqPSxwQtDdZ2XV7tKb/AopwRDjC2f9lZ799HCNGzcziIu0SsAAlOFyyBWY50wJaAQDHG+DRN6aL8IODq04ZHmupk/FWj2Ar+xyNbY9gjbONnzKg+stHx9STAE0f6ItZfBxT33qVqenfac8U8MtUACT6m9bvHGoZJAJxoGk26p5W75jz6hR8a4NsovWoc8xz11qmycPWIKtGHvSxAB8QPQ+0NmMNCDDW55Wt7UttFixaRrdWkCXPVh9gCJgDnFpEPyxDprSL3tQuY3ba2PYeNWWBoHSq92Qz1T+N6F02yPvRv4LAWVSj0wRo2iGgY992BiVY5EfIxphmX9ZDRqa9n7FizHh+Mjinle2fX4bc1YzfGrlje9gLE4v8iVnjt1buxtpB/NsIcgcw27QCnAGi5/Gi3+fr8X09/OYme1t7YiPLX
nQLGcuWF9Pkh0yYHAPsztPHNf7AMuRk2jscn86LkzP51X/vBpz1OdK+z+VADfAyt7JWYIdzoPXfDmd5e2DjQONA40DjQONA40DjQONA40DjQONA48AG5sAs4Kwt1dzAHdea1jjQONA40DjQONA40DjQONA40DjQONA40DjQONA40Diw8zjQgLOdx/v25MaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBDcyBBpxt4M5pTWscaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgc2HkcaMDZzuN9e3LjQONA40DjQONA40DjQONA40DjQONA40DjQONA40DjwAbmQAPONnDntKY1DjQONA40DjQONA40DjQONA40DjQONA40DjQONA40Duw8DjTgbOfxvj25caBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcWADc6ABZxu4c1rTGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgcaBxoHGgd2HgcacLbzeN+e3DjQONA40DjQONA40DjQONA40DjQONA40DjQONA40DiwgTlwopq2Xf7yl59c+tKXnnvLZz/72clXv/rVueXKAuc973kn17nOdSY/+9nPJu95z3vKSwt/PsUpTjG5293uNvne9743+dCHPjRYz1nOcpbJVa961cmZznSmyX//939PfvCDH0w+8YlPDJatOXnqU596cq1rXaur99/+7d8mv/rVryZHH3305Je//OWxqjnRiU40ud71rjc5xznOMfnf//3frr2f//znJ3//+9+PVVZd17zmNSfnPOc5Jyc/+cknv/3tbztef/vb315T2WPd3E4c7zlQI5djmUV2T3rSkw4W/+EPfzj5/ve/v3qtpqybLnOZy0wucpGLTIz7b33rWxNjyJgeIrpMWWNoy5Ytk6985SuT//qv/xoqWnXutKc97eS6173u5IxnPOPkD3/4Q9cOdc+jk5zkJJO73OUuk7/85S+TI444YrW4d/r3f//31e9DH9wzT2fR294ZXz75yU+uVhN9cu5zn3vyr3/9a/KlL31p8vWvf33yz3/+c7XMrA/05i1vectOD73lLW+ZVXSha1e60pUmF77whSf48x//8R9dn/7ud79bqC431cjUJS95yQn7MES///3vJ+xcSTXyt2nTpol3O93pTjf5xS9+0fGcnVorLTpmp8lfTXtOfOITT+5xj3tM/vGPf0xe8YpX1Ny6Ycuut/zVyFRNWQw0hi93uct1PsePf/zjyac//enJtLGy0eRP+29zm9tMTn/6009e85rXTP72t785VU277bbbhA5+61vfOvn1r3896v7zne983Vjks/3nf/7n5Mtf/vLkRz/60eC95zrXuSZXuMIVOv3+P//zP5Of//znk6OOOupYvtppTnOayS1ucYvJ2c52tskJTnCCyTe+8Y3JO97xjsE6nVx03E6r8GQnO9nkGte4xrTLnR7lO4Zq9GKtzdhottY4uf71r9/5GvoutF629sY3vvHk7Gc/++SDH/xg51uk/jHHnaVD11v+vGuNTNWUVXeNrd1o8rcethYPdhRN8x37z19Efze563Nx/vf1jjHmP/GYEoviNGP9qNiWMfjG+c9//sm1r33tzt+Bm7z3ve+txp2OebO6T1XA2QUveMHOOZv3iN/85jfVLwA0YtA4HOsBnHFWHvCAB3T1CQiHgDMBLnBL2ZDO4HA85znPmfz1r3/N6aojIO5GN7rRdvVyuC572ct2Qev73ve+1foo0X333XfCYIY4ctp1yCGHTP70pz/ldGeM73Of+3ROVk4y0Je4xCUmP/nJTyYvfvGLc7qq7OpN7UPjwDYO1MjlWKYxkhzWaXTWs551FTirKWvs3P/+998OYDLebnjDG07e//73Tz7+8Y+vPpJi3nvvvTvAIifpnqtf/eoT43IeAJV7ho6Myu6777467tV7sYtdbHLlK1+5G5v00DRyX4DzsgzDAFyZRYzGvHbTa/QrCnCG3/e+9707UCr1K3fb2962a69JjHkk0FUvgGm9gbM999xzO+CKMaWbX/nKV67Kybz2lddrZMp9N73pTTtgtawjnwX2Ac5q5M/9t7/97TudnbrocLYBaLwWwGktY3aa/KWNY46eTxZmyfmYejZKmfWWP+81VqZqywpaBJMhuoQfYjLi5S9/eU53x40of5s3b15tP9BnUeDMZMj/+3//rwPgxgBn/LSrXe1qq/wxFgXaH/vYxyYf+MAHVs/7YGIDiF8SnX+Vq1xl8vrXv35STl6yR6c85SlXi5rMmUZrGbfT6rzoRS86Ezgrgf8avVhjMzairfWud7/73TsZobdL4Gy9bC2bL44xBresTMrV0M7QocuQvxqZqilbY2s3ovyRhfWwtTUytdayQ75jv85F9HeTuz4X539fRowx/6nHlKDTanGasX4U2zsW3wAeSoo64QmPWTS5I/3OY556DG/mfpIJ9alPfWrq3xe/+MW5dSyzwKlOdarJXnvt1Rmvac8R6BnsQDMO1pFHHjn5whe+MPm///u/LlhiXBch2SEBzf785z9PPvKRj3QBuZlfzxKgA8ZCnDHGgKP4xje+sWsH/nIe73e/+6VYd+8973nPDjQzoy8zRKD605/+tCsjqLzZzW7WffacsWVXH9A+NA4UHBgrl8Utcz9G7mUzAXr7f9/97ndX66gpe4c73GEVNKN7BIr0E0UKqEtdKr/Tne7UAVGuyQQz5pJdAGg7z3nOs9qGmg8UuLqNPVkIr371q7ux7zmMDTBqGgHWlBmib37zmxPZI/2/jHv3yGZdhGQHcV7oPEEi0EbGrXcBqMnG21kk6KejyQrw87WvfW33nvibwKe2bZGDMfKnbjoY9eXUd5l7oRr5E5Sb6EAy6N7whjdMvva1r3WyykG9+c1vnmqrj4uO2VnyV92I48gNy5A/rBkrUzVlga4BzeiFV73qVZN3vvOdXfY6R5s/EtqI8sdnstpgR5Pn6mfEn6L/ZIYhGTBAuBBwLaCZDNG3ve1tXUYfX02ATgfQpUhGdUAzExrPe97zZmabLTpu07ahY+yY9g3przK7ukYv1tiMjWhrBVuA1SFapq0det5GObcM+auRqZqyNbZ2I8rfcdHWLqq/m9zVaYBlxRh1ragrPdaPqsUs2Gb8EF+J4V7wghds55PXtbK+dFXGWaoHCL373e/O1w11BEwJljF1FlnSgASNMrsEU4JuS7s4SVLsFyHPJwTqe9rTnrY6+86BeuxjH9uBZIRJgCpQioPz3Oc+d4KvyJKdhz/84R3wB4WV+cG4xDE7/PDDu0BaWQ4fgE3gLSDjMNeUVUejxoGSAzVyWd4373McJEBPmR05dN/YsmYrL3ShC3VVlFkCMi2A0QIyDtSBBx7YjcsLXOACXVlj3XIeZFn5ox71qA4ouuIVr9gBGt2Fin8ADwET/XHooYd2d37nO9/pxrRrF7/4xSdvfvObV/VBqjZzcpOb3CRfj3WUfjxEskaMeYHRS1/60qEiM88JtJMF4f4AcbKeok/w7bDDDptZzzIu6lMgJmJnPvOZ
z3SfZXToJ+CDfp3Gm67wwL+xMuVWepceZx9myWqN/Kk32S2WSb3sZS9zqgPOBHPkmEMwa0lXd8PAv0XH7Dz5G3jUcf7UsuRvrExhcE3ZjBXLDF/0ohet9g+w/YEPfGAncyYSLB3faPIXh9lxR5PAzXPZimTL0n/AIePJioQA5AJeZJIVEBaSdfrgBz+4m9Ck402+ysJFnPpydUHuKY+LjtuyjqHP8V8/97nPzW3DWL1YYzPwdaPZWu0HJE+jafZkrbZ22vM2wvllyd9YmcKDsWVrbO1GlL/joq3FZ0kajjXU5O4/atjVlV1WjFHdkJE31PhRdMBYfMPjswpHQlLttmAjmz+z2ELA2cwaBy5KZQdUURyWFdnnS3AwLVOCcyeANWvH2RPkzluKlMfe4AY36AaxZZZm1aYZSlleyD4VQK4QZ9PzKQJ/nJ8aku2mPs5Y/15gmVnLdHrAO0tbA5p5luWZztknyZJN2RZS5AVxAnNtLInzJojO3lE1Zct62ufGARyokUvZP2RelpWszZD9Yu5617t2Y8hyCDP5yaoKSJOyQ8exZbP80FjrL62R7Wn5BYVszNEJ9u+yL1ffSY6usMSiT2P0V8A72UMlCVwAY7ISANt9JS+zi57ZsmXLZNOmTeWtUz8D25O1JLuEDiuJERLo4U32TOw7NnkWw9PvD7r2jne842p/pW4BIX0kIGIU9Xn/fVIWj+31QocFoKPL7ftiZn8W4aWJD3oU/0L6GH/Zhktd6lJdHy5L/jh2SJtnUY384bV3Z5OMiZJMlnjvOA/ltTHyVzNmy7rHyp8xfutb37rLAtTGP/7xj91YGtoCQf3eUf+TGeNOdh2gurS1yulne0GRafYe2C1DBpAhs7qkWpkaw7ey/nyukT/+hvFgDAJC2egQAMY7AWMAKGNlyv01ZZPF1gdpAGl8CX1HPj784Q8vJH/6xnvqS/1Fjj/60Y+uZmdp76LyZ4kpHgGkznCGM3S6UH19qpU/Oude97rXatYq/U6myC3yHvHD2ImSfMf/M5/5zJ3e1qf0Hf1j37iS+LD2mPQ8fpf9Hkuf0zIV8t9fLps6xvJNhht5osft3ZnJBPWwBTLktM8ydr5k7NiYfRPH2toam2EioNbWjtUFi9haMqZvEHuXd+5OzPg3z9ba7w4gJ+vGWMvy/aEqTYqYWCd3fHVyRX74TUOx0FgdWqsXy7aNlT8xyLJsbfqi74eU7fS5xtbuyr7eGH0b3pA78SpZsULAe9MP9HNJY8uN8R3Lesfq7/Ien5vcbc+RMb7KInrPU8b6eDVyp955OE2NH1WDWeyxxx5dDKcN4kw2Ft6yHlt8qXMMLR04S/aCxjDsnG4dZCklgyFjqiTgWpYWKM8omFHlnE5zPsr7OTGMl8CEoz+NKBeDl9PpaKbQ5+zBZA8Iz6+lWXvUEA6UvTey8fSQwRA8CDopPGSfpnKvpu7ktn+ZsaE0UU3ZbVW0Q+PAKgdq5JKcyv5xD2A4QIq16hxEGVEyr5DACAme73vf+3bOPWfT9X7wMrZsZvcBRH0S7AQwonMA8NJ6+2Rcxinj7Jc0Vn8FHCqXm6qHDvGO9JrAI/xxzdJq54EzAAgGbh4B4GRKIHzLMtPcx1gBLEuylKuvyxgcRF/2yTNQ+Ode70dnO4eco3fKILG7sPLPZrIcqvBeWZ/pcllssshkwEyj6DN86wMtss4AZ+H3suQv8iAItbyHHgbkeN673vWuVbCyRv4ADn1wFw/YRIAk6u8rN1b+asZs96CVf2Plj0zpt8iF/mSbNm/e3GVSypZ2LqSvd1/Zs8/ReTIuuLRH6sEHH7y6kTowhEwF+FFWwC3oV1bmZmxlrUyN5VvaXB5r5M+EQLLMb3e723V7XamL/ANfvFMmFMbKlPvHls0Ydc9Q8J0xmAykWvnje3m/kPehLwHr2S7CtUXkD3/0qzEOdJRlP0S18qcOvl/kz1GfPvShD+18SDrT1ha5btKiJBOT3tN1Qb3yT33qU8siq5+N3SxppxtkqQWQcz++lGDq6o3bPozlGz2tHu2xT55MOCAgQMg2AMamdtNXJoWzHNFenwBsY0yWK9+4b+PG2toam8Hu19jaGl0Q3V9ja4GoeGTixeRzwJp+f5TflZ9la2WAsEWIvNCJ07LH/WiFyZ6S9BM5pAOf/exnd/2T62Rn9xE6tFYvpv4cx8qf8suytWPlr8bW7qq+3lh9qz/4UzKKS7vM1vqhOTooKyrGlhvrO3o2Gqu/t5be/n+Tu2P4MdZXWUTvjfXxauROy8nZPJymxo+qwSxiu7WD/0OW6MsdSQsBZ4wcx3CIAFAJ5syCMVCMimDJbB2DLhDZtBJEmgFixPszy5wESw4EGpSAoEKAZsYmqfNDz3ZumoPTLw+s0lag2a1udasuiKCAdAAHRXC0ngRRzixgZqWSISZA7FMC2pTpX893QuPXt1C5OW2ul8easuV97fPxiwORuTFyCfCyGa7ZecEKGZSdwHAb94evLCsOqJUA2SxtiDNuWZJx6Ac5ogvGlg3ozAllWDJu1G8/n5AAo08CCqBS3lddGZvK1ugvs+UI4N4njjqiN0OUP6cbjwSN5bWUGToCKQVr7usHJuoAcCAzMLLR6DP6FnBYkqxfjk+c1vKavgjhqWDMZteMlM8vfOELu6BNW4AnfWKEPVeg5hfyBPQybelZQaYN/mcBZwlEw7ey/mSMxFAuS/4yYSHwLIkTSt9awkpe1iJ/6sGrBKSyrQKyeGaN/EWGx4xZdY+VP3JtqRLbyB6/5CUv6YBgtth+M4JF46i/dQMw5HWve10H7gb4IEuCzTe96U2a0IEvxrkxb8KJ3yA4Aqore+c733kClEM1MlXDt67y3r8a+QN0kmXbL9CDAC+6CFCIyKfMTDRWpmrK0q38Ff0DbDQhGKLzoveiT3PNcZ78aa9ZZQQgpG/0K58Mj71j/L1a+cOj6CrbSwzJrecuKn/0g6wLGa6cfD8UpV+Bm894xjNWx1wfmPdM5DyesmPxZ7de2f5/lnvqA76pfTPZM/rSuQMOOGD7G3rfavjGlj7ykY/s9LDsH0tGAwjJeHQdJWDxGZAX4n+yA/pdBlEosjHPLtfajNTvOM/WAmLH6oJaW4sH9LaJZTJMB42hWbbWWA9oJpPTOCcrdFfGXJ7hvfAciXfEFoBFvgcQlKyasC9/TVvZMTq0Ri+qs0818rcsWztW/tZia+fJX43NqJW/sba2Rt/yxegYOsrqJNtt8JfIuokOsmUylk8xphy5iD6e5zsqO1Z/KztETe62cmVXk7uyL+fhNDV+VFlv+XkIs7DllQkKmIpM8uc///nlLTvk89Zor/JRwC9O09CfX/MJEQrEuUyKOwMv6Iizl5n23CMgDGjmHMcnCrN0AlJ+LUdKx/NQOXNrlru/HHItz5FKT3Ejyzb8Ic9HyRTrvmz7J0BFKbPt9HYHhhpazfCq4+1vf/t218svNWXL+9rn4x8HInNj5VImqPEcYx6HUuCSsQtAD9gBXLL0hLHP0kZOfQx3TVmK07ORLLfMtAOXon9c871PZmtjwF3rL8vL/fP
0V/g1VIdzAYDSNuWz7IZ+AzCNpYCBW1ayJDjfJZnRVjcdq08E1UAJwFy/bPjOaS0zcwGgyXZRN+MHxBB8InXhE73JMesHlfjs+Z7nBxIEW8oKKgOWDfVFV/m2f7neb7PLAc58Jm9oveVPnQl+8M972udM4BA5B0aS57XIn/4KaOaZJejr+1j5UzYyOGbM1sgfQChyaxlYAA79GZC5DLi1BflBh2SaOmbpiH2gkPcOKGlvwsiRsQDwRYJdwUStTNXwrXtQ71+t/FkiAFREsj2jxwBmZCY0VqaUrymbrDzOZMA5dWRpms8CnT7Nkz9LaMm4MeBXI8m+sSxrzbhGAn9UI3/KA37IFT+rXI7tWkmLyp+lwXQrYm+yVyM7I2Muej+2o3ymz5nsiSz0r/tOb2ZCosxCHSo77VwN3+jDgCv0NDDQGNEngGf9hErgTBAMJKOLLd1FQO/YkRpbW2MzugcV/2bZ2hpdEH6pum+vnevbWoGXGAOPAIuOYyk8GrK1gjYkCytjnA4Y2g8TcEJvkkM/BsO311eW2ybDuLQDad88HVqrF1NveQw/x9gN9623ra2Rv7XY2lny573G2ozwyz1j5E/5sb5ejb6VSEJ/AldlZpMpOstyffaAXgOejS1X4zt697H6W9khCh+b3I3DSMIvvNyZcpe+pEfn4TSxnWP9+NSd40bGLBbKODNALcsaooBCHK4wDpNlG5SkDsJgdq4kgVGc0JyXdkvBJnjL+bUcOXyUqbYdffTRXVAn+4IzxCmxbOBJT3rSqgO16LM4fvmVTe8lAyMUpy2AQs47JiicNiOKH/deWdqlHF4CIaY5BTVlyza0z8dPDtTKpYDf/jFm8znyiJMDcAhxGs3MAmIEmpFrATPFykmVAYVqypJ5+5UZz7Jf9ttvv9U9Z8wMum58xaFOexwFG66boTOD7I8+euYzn9ndM1Z/5V3UWRo431HOha8yadTt19mABmMJoJVZnP5+RuoI4BUQoqzXXjcBLZwHaNLhgiwOFnBfPwDJStK32eSZk9NfDgYMy3Pdpw7L8RDHTrYRnloqFqAkM7ZdoYF/CQAHLm0HAISf6y1/nitoAUIIiuKoCJIE+hxh/SCLDoC0qPwJtvUjpxXYTH4f8pCHTJ7whCd0ju9Y+dPe8GKMLamRP7Yj9ZdL9pyLLLJBsVfOG1NsaklS8WVwGAtAi1JmZJn1/YOMWzIji2qsTC3qd5RtrZU/97K/++yzzyogow4gc0k1MlVTFqhlg3rg2IMe9KBO/5Ed38PHoXeaJX98iiyNMqb7/ZMxHJ+sRv7o+k0rKw48Axg7i2rkT32hgPT5DuiTfWGiwLOHAraUdcw4ynuV13wGkNJtSJbZLPCvKzTlX+rP88piGVOlfZHRLfuPrcryW3tSlhO9ltDiBbCmHIcAbPpFv8oUNq5qbG2NzSjfw+dZtrZGF5T2J3a1fFbO4SueZuNyPApwWJaf9lmbot+GbC1djcQmJfHx8b3MILek9OlPf3pXDEBG/thEf4A9lL7uvqz8G6ND+TRj9WLq7R9r5W+9bW2N/OHJorZ2lvzV2IxyLEbWSp7mXPhaY2tr9G18ZXqNfi7Jyo1Qlg7PK5clyWN8xxr9nXb0j+HPWL3X5O6fqyyMjK2eWPmQc+HrsuQuzxyD0wz5HLm/nMhLm3PNcaNjFgsBZwIJM1izKAG0MmZc/A1Rli/m2pBDkxnVktkpv+iR44E4FtkonBK314BfbhP0AdbKpQ+1zxLM59eYBMmWOJXOHUfOzHJml8v6Y3iHhE9Wn9R2SofS9FOs07JWasqWz2+fj78cWEQu7d1VLjuTUVaSGbHMzJbnfRYkMsbkWTYAB7emrGxWypeTwAE1dmQ9AaUsNRIsMbx9yhIqyzfM/MoKFQhymMsxOU9/ceg8n/Giz5ItmucluwFfzfZbUoUEQFe60pW6z8kUEZA6x/nub6KfjFtGKzPV3c3b/sWZd2+fhvQDZ9JEAT1Ht/rTdr/+CRxC9HEAr76D5noyT3wOZQ+kOII5P/ZID6Ms4Sjvy8w8nvsLrbf8lQFnnuFoYggfABOCK8DZovKXPpHBI/vAZI0xAMjcspJRGJonf8qNHbO18pc+JNvlEt60LUcBYPptyBFi9/SX9zMxlYDR/VnWmLrKY8qNlalF/Y7ymXmPGvkzJo3n/GgH8KI/XmpkqqYs/0imCzAH8E136QMgC30oyCIffZolf7IJ8/7qm9b3AXfHyp86s1QOkEP39In802HaUCN/yW5WH3+rT9ro+WTEElOUgKNfNkBG327QzzK9IpcAs1mZ/v16+9/H8q28z1Jnfp2xpJ+TWZcy7FpsW87lSM+wc7FvtXZ5rM3I83JMe4ZsbXip7DxdUGNr2Ta2HI+M6dhaugTxE5xjS0vZcW2erc3Y6E/yuxdP47/7jrwX/yS+wNaz0/+P0aFAjrF6cdqTFpG/9ba1zdfb6utFpsbo24DmQ75e2ddjy431HWv1d+mjle1qcrdxYowauUsfjsFpFvGj1L8rYBYLAWdh3qxjGTwCpoYAIPfHeUtdmcnMd8cYoRJ0Kq/XfhYgps6+g+oZZu84mzIBFgXOys1AzThl+UnZVkqPE5dgsLyW9vUdNwaYQ8Bpcr99LoaEWF01Zctnt8/Hbw4sIpcCoFKOzWCV+w2SVzJtfEehhstlgCmQqSmbOgQw/jzD/QJZxzjAHGPPFrQLLpMZm/uBH8kM9S5f/vKXc6kD1ufpL9e1XaDXB7VimOi6gFAql4HTJ220oafZ5BI4c37Tykw1mvYLw3QFByjBbFd42z8TAX3i1MgU9EcP4Zm+yPKjgB3R0UMTF/1n+W6fF4Gn+gFrMrUsnXJ/P3Ol3ybf87zwrSyTQLofWKyn/HlewNMhYBBfvGcZdI+RP/VKP5dp0N+gG++9t/eTGVheH2M/x47ZWvkDQCNjdtqP07hun5Xww7jrk3M5D+jJRJp+7O+PVt4rOKyRqUX9jvKZi8gfwDyTceoyEQC4T7aiczUyVVNW3fTbU57ylG7CD/gfuZVhhAIkjZU/oJW+MY4BXP2N2LtKV/5FL46VP3yiyxB+lTzrTq78y/YdwK0a+cv9jn295Fx0ILAjmUtkVnv6mSRpY/jmfrywkXt0kEykabpY+TE0lm9lXTK7M5a0316D9hMMOaef+YV937D/XT1j7bL6x9oM/BtrawXRaIwuUG6srTVJgPCDTe2T93beclZLEEPavmmOrdUGMmac9imga86LJbIntPtk49NrfBDAXdqZ8o7p3/65nKdDa/RiWU/5eRH5W09bWyt/2j7G1tbI30bx9Wr0bVZQDOm5sn/HlhvrO9bq72kxe5O7jRNj1MhdZCv2Md8d6VOUPl/Ej9pVMIulAWcMNGcEg33u/3qmzTBlWZQBAqYz+H1KhkbpFPfL1HxnvDgAlLYgRtBSUmaFIgDltTGfZYPFIfzkJz+5mtHWv9csnDXo/vqU4LVMneWMZ98SwbmNmgn9ENWUHbq/nTv+cqBWLgFRAUTIq+DcrBlnESiDAMk2yDWmLEUz/kJZCui7Z9eUdY99/rRB1gWDHL
L3kjFujAgk6RF7U3m2NpTjuzQEQIJa/SVApjdkS2QvGO1IBojPspMEtf3ZbdfcC/TSNmMbEFGSmcM4zbIGhkiWkiwoOq1P/XPq0z8CU3xLIOm+7PeRNshckU3oXYBf9Geo3E/HOX0c0MyS1xI4GApeUk95jM4jQwKTMtjDXxSj7PN6y59ARqo7ks1bLvHx/gmWAIJorPwpC8jQj2xjCYy6loCLnauVv7FjVptr5M+4IVNAh+xppq2IrTbujRfL1QSoyJGDXfZ9bDj55qSHd8YdH6DsY/yxT5jxaSzUyFQt37oG9/7Vyp/b7a/oXeKjkF3njAFUI1M1ZdVtIm3TSqBvabxskIBm2hCQJzpprPyply5NYNbve+CuLLTI0lj5w5/c4xklZVkmsIqOMUZq5K+si14qxy35pScQmaL32AWy6j0E4yG+E3I9ekbf+lVOwQEZBlTR52ulsXzLc8iGGXmUJZu+O5/26GPjz/vTXyVliVfAqhpbW2MzPGesrc3k8Bhd4F3G2lr1lbYqfDAm6HD6BYjVB4XH2FpyIduL75KxlfrxvqT8wAbZ98MUpc+ePQJj23PfGB3Kv1mrra2Vv/W2tTXyhzdjbe2u6OvV6Ft60VLxZIpFbhytFjD+AMJ03ZhyY33HWv1dtqv83ORu48QYNXKXPhyD09T6UbsSZrF16i/cWOcjo4QYhzgtvhvIgjN7xJQzfa5xcIBqIeBaufwh59d6TIBt2UCZSVEGnnFEGDGzLP4yaznt+d4toBlHNstAh8pLUeaEyaxIQKgcAXIOb7IHku8JPAUdln2WBrisv6as+xh67xaHrKxrkc9mjAUNyfbp1yHTxnV/ASnLMvrcNb8Y1SdLX3NNvwwRZyL194/3XtkXTjaUfk/KflnH5s2bu3sFI9NIZpV6OYYlATWdL/m4lvrKuoGrDCLn3T5e9tPxfHvorTfVyKVn46m+kK1k5jap92YPAgoHINc3eBLS/5mNpS/IfE1Z9ciWosgzPpwTNPp1TwS8Rvb5Mt44qRy2kixzct71ONI1+iv7uXnfLLtUf96Ns89ZAJbYVLP/l58ON6Zdyy8Ppo0CYyTg6+vM7sLKP4EUAs7hfWhoqR/9JyPAtehX5Tlcgk58eMc73tFVYXZbkIE/sh1Cgo+AbDlXBg2Z7XQNTxKQ9oOE3JujgC+gS/iXOjKhEBlzfr3lT/97f2TD3pKMOe3HjwBfY+VPPcm2ZBMFdiF6LRmbQChUI39jx2yt/CXjWl/7NcWS7B9E3gWPZdaoMtkzxecy05Ajj9gwQS1e0mulTADjAAHsqEmtWpmq4VvXmN6/WvmzBMuYIzP27JJd7rNz2WOmRqZqymo6IMBffoDIOfy0gTPyPuFJjfxFDwtC+UUhskB/0hUJ3MbKn8Crr/vyPWPOZvbO0XOLyh8bo50hvwyLJ/SnIBHl/fgjAa1L+5TxrSzbHtDMhGV8Q9fWQmP55hlspfdAVkXY2y4gpPPxpRKw8G/o+JDv+XXHZNmEB+V7Kz9kl2tsRo2trdEF2jbW1loCH9kqj7GTwAfn4x+oG20aYWu9H7IqJWPAd7FLP0ZI7EO+S5+dPWQ70JAvO0+H1urF7kG9fzXy59b1trU18uf5Y21tjfypN/pxTKw6Vv5qbW14MUbfZvzScfkxLu/heyYzjaux5TIm5vmOtfpbm4aoyd13O7bsanKXvhyD09T4UbWYRdqxs45LyzjzQhwgAb5ZS/u3cJoFtEErDez+bI37BGNmNA1SgR2Hx6zUUUcd5fK6EKfDkiLO0KMf/eiubRx8zoXnmVHKLCsDddvb3rZ7rjT/OCZDDSkBFw7Z0HIs2R2HHHJI58RRrgIE92U/tAA6DD8eIAFkjKuZWRkzfeJoHnDAAVVl1cGAczTd/7jHPa5fbfV3ypej7T1iZMpKbDIdsBJg2F8uqz3u7wdj6igdEzN5ARbL+g3q0tEvr5WfzdhZJpRffHUN6KXd7jfz3W+bMgJc1+PoO4fcy3AFIMm5Retzv/rIqR+uKIlji8+MLPDM5tQJispyi3wWXIyVS/Id5y+/vMYoAmIFc5y/gw46qAs0OKoAFPdYomAckGVOO4fS/idIUDK2rPIysOgM44ieEWzjufHiGcYR0l9+3Y+sK7v//vt3415wgc/I8hugCKrRX9ossKBP9txzzw4YoDf0kecOLdXuHjLyXzLGAigN3cZQ0VkcKfqCXHiX3Fveo63K4xMwLMuj9A/9Z0yUek7f6kuABh7LRqObE3Cmbs6futShnCAVH/DYOaRf3Bc+597yqM8E5yYi2BDv7XnAJu2Kk7kM+SOL3t8vp5FPP38tQC3lxA9cpP1j5c/7AUSB63QIXcsmchoSfFnSamkcqpG/mjHbVT7yH10m49Akl34VKOoLckPPku2hDd5dt1coHUr+UpYcIbbG5vRAMhkeJgM8iy7hIyA/JkJOa2Wqhm/dgwb+jZU/8hG7LWvJmEI2DZfJ5JosMPJTI1M1Ze1xJaDFt8c85jFdEEhW2QgyWi5Dq5E/tpuNFPiTWe+mv+1JZRyyzwH4N5r8lT4nfzOgdPnDNGw/IAnfHvnIR3byRxbdW74b25CJCDpsjz32GJCYrUvIMtkwWGDgZA3fdt9991VbGXvi+IhHPKI7D4AGBL3zne/sfBE61o9FmPgwiWEcaj+f0r6iqMbW1tiMWls7VhekzTvb1tIPbCF9Zfmu8U1Oosc75m77h8f0Id/gYQ97WCdnxlDiIMXif5T3zdOhtXqxrDufa+RvGba2Rv60eaytrZW/GpuxLF+vRt/yj7WDDEoC4E/zW8iVMW68i5fp/7HlxvqOkZ21HJvcbZwYo0buyj4fg9OM9aNq8I2yDTvr87+tOH4HeDhgiqKeRcAshsJyiGnLhcr7OR+QbE634EAA5WhgQ1o53EmltrwLUh4EVll/ynLY/ORuApXyGbM+e6534uhnM9iUF1wD7gRmHCUBLsPmeWYr8rPLynNA8/PTHOJZwassF3Ugx6E/75xffWL88JSj6/n+EGXHmQjZf0k70VCdOQe0qCmrPlkDm1Zm2RgbynYe6aNZskI+BA0CZHKSPlav9wwvfScP/b6xLIwza1Y1MybK6qs4sL4L8vuzhc4D5a5xjWv42L0PcNYsvj/py/qeg8OxBDwJDANQChTSB+Sx3Gelq3DlHyc82TP4HfJMz+ZEZUZyLfVpH/Ah7VEvuRFQCrD1F9khn/g9BCKmbeVxXv8pO0YujU9L2sgeeQVKhQT/gkbvIBgRPMqk8RmgJpA25gRg+sOSkiwNVEdNWX3KCeU04IX+Ua+A4bDDDttO/vDNNfpG29ynz+gWmzyXeq1Gf2kz3UDnkF3vpi0AAsYDf2aR8kBk5Us+5p7NK2CferU/s4O5Vh7JOJCVfOODP3XqD7IC+M+yeX0iW4h8qdsfUtYPBJSkbyzvwTeBNL7pdxkhAjJOG/kzjjxPZhj+aod6OXKWN3H06AXlkn1UPiefAS5skvbREfjjeSY06Gb9tUz5M37JaIDdyAmHD/iTrDDtrZE/tsP4o3fInXcQNBnLeHnEEUeEBV3wP
tZ+umnMmF2tvPdhlvzRn2QZL9IX+lAfCMAj297HpAge0VXkDd+UZROA+8ZkiF33XR+zbcoGYPPesX+1MlU7btOe8jhG/pQHkmuzNpa/omkMCWZcM6FiL6wamaopS6Y8XwaYPtCXxp7xaswJskI18ucewZQxr2/oCX/GobHpF8LLzdHXIn+eRcepm40rs1XHyp86gLvkTbvJK16QLfrJD9aUvga/hL6UcRv5wzfyA5DKRBSdZaIlpI1Dfwlm8Yjdn6bLU0+OY/jG31AnOvLII7vx5bN30PfsDl0PUOI38WnIg/FKv7iGTDqwtdoWqrG1NTajxtaO1QVp81psLT+SzcKrZDSmXsextpYdpcf1NznDZ31hTLB5xrAYY8vK5JHnsYX0KL2oX8gLW8Lu0xPxPcbqUM9aD1s7Rv6WaWtr5K/G1tbIX63NWIv8zbK1NfqWXiRnfGrH6Ga+Gr+XjkZjy9X4jl3FvX/T9Hev2OrXJndbf4W+xsfb2XJXi9OM9aNqMQs+o4l0OrzcZmFVuNbhAz/duBqiE6wEt92aFC9YghJDhddyjkO3aRs4I2DKHgvT6qRcgDM/+MEPOkd8Wrn1OI85OoFzz+HnZPWJ4yJdX/aMcutNgnlZTPjE2M7jz3o/37IOg+LAAw+cWzWHYZ6sHLCS+fb/2bsPeMmSqn7gTVABBSUICoqjKCAiCiggiAxJAclITgPLn7QgkgQkCR8EydElw5JzXqKACwiICCxgILMSVpCkoICK+H/fO/Prqblzu7uq33s77w11Pp/u231v3bpVp06dOudXp+oyYIERpWBbTsehYJRqY7yWtqTcy0CkKEL2N2AMi4ZizDJezbBqs5IYMKId0GMe85hDjPukYwiY/UPKlzdj3fGOdzwEFGTciA4sSRmUhbMrGiXkmZ7N8M+G15vJT4QPgxiZ2S+d9TyTzIg2wAszzTG+cn3qWNN+uW875JK0LZycAABAAElEQVRzor+RdcZNCazmuTm2pJUfGXakYyjVRYRfDA7GLEejdACn7pHnnkr9xQkDHOnDytEK+E89f51zeOHZInmWESBMe0hL9zAilxEnGjjHiC0dsPIe7RaDjuOwGZ1pHPBM9YgzWz6r9XeLTJGTRAeQE87KImqRP3kAkOkShq660SeLqEX+tqPPplycPvygF32WlZmTaFKGfgaOLyNjMF0mHSd6qs+sI1MtfFtUvq2WvxaZakmr3fU5Y5tIW+2zjFrkT96cf84ZnbZs8nCnyJ9ymPzSZ8nUMgIs0YGA+VVpl+WzmWvbwTf2CB1Dp9Nfy/pri15sGTNax9oaXRA+74SxVrtx5Dhay/oFsExbaANj4pSOS71yrNGh6+jF5F8et0P+WmSqJW3LWNsqfy1jxnbJn7ao1bfaMGMUYDyAWdm2+V2brtZ2TL6bOXa528+93Sh3LThNZG+r7PjNyFzNvfxkgT5TdPqpk9txjnNsBqaWOEhb4STVPI9hlVnzqfSMD0sEzaRvxgGcyjvnOKBAhCNBnBsG5hiA2kxZGO4cLLN7JXBmhg6ZgbcnC3CNMZFnE1TnGBdmN0OUioEEWcbCgfMR4VfO9Cf9qiMDx4eT4fljMvgwDkW1WRI6Fdk2vmfZ/9b8GBF4hxKtNZW/gdIHqANkqwHOpvJZdG475JLBWCvrLWnpGEB7DZEvDqBPDbXoL06KGf8jTRylGiKbLeUFMq4CGrVb7fNXlZEju5XObItMkZPa57fInzpz0n1qqEX+tqPPpoz0uk8NiRoyyVFDxuBlkZTyWEemWvi2qJy17b/o/vH5FplqSavdjaMZS8fPHf9vkT95A9VraKfIn3LU2pwA21XyV1P3zaTZDr6xV0WS1FCLXmwZM1rH2hpdkPrshLFWu4kwXUUm0mrlMXnV6NB19GLyL4/bIX8tMtWStmWsbZW/ljFju+RPW9TqW21YO0bVptsq262Ur0W/u9zt58xulLsWnKZW9hbJyU46f9qdVJidXBag2dReLju5zLVlM2si6sE6/62iGBIBu+QLEINQI2huZsSzBMH5bF4rWshAGrKe2swR5QJ0EdKMAEZAtVYyc+6DpgBazzCDiIBzWeownFjjqzU/4fvqi7IfyaLHajeRkM95znMWJennOwc6BzoHOgc6BzoHOgc6BzoHOgc6BzoHOgc6B9bgQAfOKphm1s5r5UX2HI0k2u4JT3jC0hDf1npnLxFRW0Jxkb1EgEFAMUBkohHsxRES8o7GkUPZUyyRMYAzMxXy27t373DP1Je9SWxo7WPDegCcTcdtOK5cZqPKfcrKPABRlpJKt29jY97NUkt+iYLz/GXh/8pkJnO7IiE3W+d+f+dA50DnQOdA50DnQOdA50DnQOdA50DnQOfAbubAqbZUczczqZe9nQOWcgHHbHqaDfiBVyigmI1Zr3jFKw7RXKLGRJNZfokCvPltbXQ26bMxPgKa2SfCMlMRazZgnyIvGlhEwqztbSaEdIosGXjTm94084pq5bJpqyWm61JLfpaQoilAzP5u5QsWUh71sWFzp86BzoHOgc6BzoHOgc6BzoHOgc6BzoHOgc6BzoGt4UCPONsaPvZcJjgQQArQg2y+i/ImI0BSXoJgfy6bFdvbS5RVGd0HXEPSum7vNB9vZUE2XS2j1oaTB74sw/TmNkf5hkTZ/dmf/dkh+6jlWnm0n1rqsRVLNmvzAzoim8OOycbzU5/siTZO3/93DnQOdA50DnQOdA50DnQOdA50DnQOdA50DnQOrMeBHnG2Ht/6XRUcsNmuN6Xt2diHzFvx7KU2BsUAZEAzoI83HKLsfea3pZgBxQBkxx57rNOH0ZWudKXZpz71qcPOe3FANjK3x9od7nCH4U0ZwLwLX/jCw6uaD7tpdMISy3ve857DHm2WbL7yla8cpWj7W5NfyiwSz8spysgzr5fO8ldPtm9cB83a2qCn7hzoHOgc6BzoHOgc6BzoHOgc6BzoHOgc6Byo4UCPOKvhUk+zFgc+/OEPD/dZdpg9ykpQzMWkEY2W/c3KN5xe/OIXn79lE5g0/gRQOve5zz0ATMsKCrQ77rjj5iDU9a9//YWvmy3z8QapN77xjcMpSzYvdalLlZebf9fkF77I3D5tJXnbzgknnDD/vPe97y0v99+dA50DnQOdA50DnQOdA50DnQOdA50DnQOdA50DW8SBHnG2RYzs2RzOAaCWj4gpLwZAeSFAUvtvg37gmhcJoHJ/s8tc5jLDOYDbk570pOF3+SXv+9znPkNkmqgze5YtI3ujifi6/e1vP9wjguzhD3/4sluGa+973/tmQDz7rWWvtpU3LUmwKr8vfvGLc95d/epXn335y1+eLXpFtH3kOnUOdA50DnQOdA50DnQOdA50DnQOdA50DnQOdA5sPQdOFeDM2wxtEG8PKxFD7373u2df+MIXmmsDWLn5zW8+++QnPzl729vetvB+kUv2o3rrW986+/jHP74w3fiC/bOAL95o+L3vfW94jv24bLo+RevWy9LDqTdBAnXsbYVH73znO4e3T+a5ZznLWWY3uMEN8nf23Oc+d0g7P1H8OPOZzzy74Q1v
OJyxv9erX/3q4uqp+xPYYymmpZoo+5ulFOqsvmTDUkq8/uY3vzlcVo8s37Q32BQB5oBK5zrXuQZAaxVwJo9TTjllKIeXCnjpgBcIvPnNb57K/pBz5RLLQy6s+WdVfi9+8Ytnt771rYdlmcccc8zsDW94w7AnW/aFIxPklRxuB7X2h5oyeKvp1L5t7v3MZz5zyHLblrTuv+hFLzqzBJeeELVI1r773e+6dBiJgJQW8OolEyL8vvSlLx2WrvWEZbP25CO33oaqHGX04KL8LGO+6U1vOrzZ9iUveck8mTqd85znnP+f+iGCcdVLK8iIOisP/dtK3kSrbnQJnXJq0SUveclBf+DPZz/72aFN6Yt1qUWmLnKRiwx7KU49y56J3uxbUov8Wb6ubvQi/WW/RuPaZmndPrtI/jZbnu26v1aeL33pS88udKELzd7xjndU83ddHi6ra4vctaT1zBa52y16bxkvx9dsZ3DLW95yeLmQMbWWbCNhEsxkGB2qD5500kmTt+uneOceNostJryN2+/toK3Wey26rCWtuuOJSUXR+Ow9ttoiHb3T9J7yX/e61x3sTy9Wim2lTuzWVcQP4SssIjy51rWuNfv617++1hYfR2rcPd/5zjeMT2xMe/zqF+W+w4vqu+h8i0zxv+jsKTLJPn4RWIv8qQ/7jE31rW99a/axj31ssKmXteFUOabOrdtnp+RvKv+dcK5WnrXJla985cGeP/HEE6uLvi4PFz2gRe5a0npei9ztFr23iI+Lzq+jn1rGUkEhl7jEJYaXAPIb2cpwHeP1kaRtBc4Y4//v//2/Q5bDnec85xn2lnrPe94zX/5WwwB7XdmfSmQS5bkIONMoHFAgDIOoFjhT1nvd614zRljI4HG5y11u9sQnPvEQEGuz9bIsUadbRpYDGqxe8YpXDMko/PIeCuZd73rXZBYM76Q1QBxJ4Mx+XDFAGCVeCDCmT3ziE/Plj5YhhvJSAMZpGYWW6zkCDa53vesNb/AU2Za3dub61PF1r3vdIIf2TRPVJgKMM7yMdFbg1TWucY1lyaqvrcoPUOB5V73qVQfwTOSZD3CVfPuEvJFU2q2ilv5Q+0wOqcF0Eemv2aeuJa0+K4KwBJjOe97zDoDoW97ylkP6CZ7d5S53mQO5ysJQ88ZU4OkqAGpR2Z33htd9GxGMdBWSr2W2+vLTnva0QW8NFya+3BfAvrx8+ctf/pCyltfym9G3qtzA+uiEdYAzIKNBDMB9agFn9jP0EpAQvXnZy152mDSInORazbFFpuTnbbqA1SmiywKctcifvEyAJALXf2PixS52sUH2Wxx/95a0mT67SP7K/HfS7xp5xterXOUqQ3+kD2qAyc3wcBF/WuSuJW2L3O02vbeIl1PntRndxi6spetc5zpDnyvT2ycU0Go7h5L0zWtf+9pzve6al/N4s/VjHvOY+dYP5T2b+b3Vek9ZanVZa1pOP+A2ZAzDQ5Ngz372s3N6OO5EvWcCO+VnCwY40+Y+q8gYsAgkdK/xkmwax9bZG/dIjLt0ZlZ6qAM9CjQ2mT8GrVyvoRb5Y/MItFhEb3/72+cv+mqRP3W45jWveUg/xl82KR8v274seu6y8+v22UXyt+xZR/JajTwbw25xi1sM9qIxqhY4W5eHy/jRInctaVvkbjfpvWW8nLrWqp9axlL+oIAR43uID2CS5vnPf/4MbnCkaFv3OBMdJhII8MGxe9WrXjV3ugyuFHINiTy6853vPIBmy9IboHS+ElBYlr68BmzTyQ2cL3vZy4ayin4ymN7udrcrkw5Rb1tRL3yxP1U+ZupEpwBBUGbVD3n4gT8Z7Keu2fR+p1CWYirPohmrMgqtnPHNrJP7lhnF7smMEce6TFv+Lnni/Ite9KLhFKBD+6Okz3E4WXwxlPKWzeL0IT/Le/M7x0MSbvxZlR/ZsETVTG7yAGBExsnoBz/4weENoSUfx89p/d/SH2rzBkQjcg8gHX9KRdiSVnRlQDO8YLDjG34xipKXZ9/4xjcegCjX9DV9PXIp8pBDtA55WYO8yZKIRopdRILncCYAu4sIsCbNFOk/2n78KSN2v/KVr0zduqvPGR84G2QF+PnCF75wpp74G6OstYKRgxr5kzfdj8Zy6n+5D2OL/DHeA5oBxl/60pcOLyghJ8CgzYDy6/bZZfI3MGAXfumPopACYtdWYV0eLsu/Re5a0rbI3W7Te8v4udlrJkkCipjYE+GrLyLGuijuEEcRyEaORCQbLz70oQ8NjjuDHuC8lbQdek/5anVZS1o8jB1qPHre85437LvKFjOJBIAJ7US9x2a4whWukCIecgT8jcfc/KerETtd1NLRRHhCBhE5N5HjJV/IhDxHeR1qkT++FTKRPR57tYHxG7XIn4CLgGbyNXEu8EAbWnUiwGNdWrfPLpO/dcuyE+7j97duH7MuD1fVt0XuatO2yN1u03ur+LmZ661j6W1uc5sBNANoCwoBmFtBZCxmzxxJ2raIMxFSZnmRmZYsV+LY3u9+9xsYYsbu5S9/+dL6M3I4v+VbBMc3ABFs9D7eRH2cbtF/Dksc5ic84QnzqCgz1N6mSOkC+ew7tVX1UhYGxlSUEKXzx3/8xwOPhClOASKWgkkn+qgkEXcGgp1C6viABzxgaXGEX97//vc/LM1DH/rQw84tOvGgBz3okEtT+R2SYOMPY3mc7slPfvI42WH/xzPSSfCwhz0sP+fHzeSXTOzv9vSnP33oA9r97Gc/+zA7xqiIEZG0W3Fs6Q8tz4tTCAARgbWMatOa3cobRcsZUYYvEJxhTMk+5CEPGRRuXkBBDyUS8yMf+ciwT57oIv0tTtSy8o2vATyEElPsaXPRriIsXQNmix6N0Z37LX+82tWulr+HHd/0pjcdds4Js1jANsDpM57xjMk0u/WkNgViIvpRNCiyrMJ+hgwc7bqIN0Piia9amXIrfW+A9kKRZbLaIn/yzUy+pTve+Is473Q5OeaYMOpbad0+u0r+WsuxU9Lf6EY3moMFtWVal4er8m+Ru9q0LXJHjneb3lvF081cBwAggADgGlmmaVsEdqAJS8tBEGcbAUiM+/S38cLkGb0N3GebbsU4vF16r1aXqWdL2uhovHjqU5/q9oGAG3e84x0HXWcCy5YFO03v6RO3utWtBh2fcpdH/kp8lvK8SQ9+DTmgvzNhW6bZzb9NHOAN+ykRcqK7TULQj1aBlJNGNXVtkSn5xX+xdHbZ9hkt8mflkHrpp49//OPn7cYWFT3Eptb/Wttz3T67Sv5q+LoT0wDSgeYttC4PVz2jRe5a0rbI3W7Te6t4upnrLWOpFXVsYv3VuGsMQVaewWTIDHu5dkXhZso9de+2AWf2GKMcRF+MByAKEdp+8sknz8tEYTPiRV2Vy/KSD9SREp3qlIzCgGZmvoBbPlPE4eKcAB2yH5bQP/S1r31tDpr5z1hyDlhB8Yp6SHlq6yWfVgKG4Q3BgNKWZMDGC/s4iRQAFpSknIjDngGovN5/714OUCJAtPGbSbe6Ri39QfSPiFD9QURpSP+72c1uNuiAEzf2OOCkJKqqjJZK+vG
xNm2WH+oX42UEIr6E/YsMACjrNxwk+zSMQZfoFiD5mAD8eEI/MazwH7hRRnoFvAOClPT+979/cLCA+4xuTldJQpHpSf19z5495aWFvzl3iVoyyz8GzznfdAPeKK8JAM+YIrrQ5AT+AP4AReqlLcv65V56h4OZaDB8Y2BnX8Kkw2NLjOlO9yCDH4d0/IKQ3JMjXsYZxb+QNsZf4Ka9ibThdskfJwFlwE4ZxscW+bOkR92NYfpESdpIvcuw9Fyvkb+WPpt8HWvlTx8XeaPdlVF760tTWyYY2xmM6gmM1u9M/thnrKTadC3yLH+RyuSa7FsOP9Wny3LkdwsPt0vuut5La+w/0nVsLnYQnaAPkaNEwRyaev9yeXpHenJnEsQESYAt20YA3slGuZekfOgxdqSJPERnxt4UxUz/hNipdDo7K3m7powcBOVmf5m8YWvKOysJksf42KL38ITdS/cDcOjtEJDDswEeth+o1WXub0mbCI3x/rCANLYznaFPiRRYR+/VtH1Lnw1/HIEleGTbAaDJovGxvIfecx+y1czYjiFz7G/twrFjE43H++TXOj7WjrucTdt50CNsMPJIB1qVMaWrUx5HadkBiO1Ukv9kQzQYuQegbYf8KbP89bVloJmytcgfMJctoi/q+yF6RJtqf2NVacfUyF9Ln80zHWvlr0WfsN8if9qRbgCA8nNNsIZq07XIs7z1J/v5IX0jY9lwYslXCw+73tvPyO3Wey32nhKt0k+tY6mVYwi+Udrg+qcABPp1rH9r7OQh0y342jbgzMCADJiUISRaR6IMAWPjqA4GCqXPESmBM0rfcjaOBsN9iigIzH3jG984GFR3v/vdp5IN5yh/xj/lEfIfjRvCOYqWwmXko9Z6DTet8ZUyTRlblD3nEU/HwFmWN0oDte3UOdDKgcheTX/QP4DR7rG3XAxFYbYGbwN2ZgUYqIijcdvb3nZwaBnYro8Ntdq0BndUGkPDiY0vyjUGMSNIlJmlNmOyPCcACDCgJMu0YwAw5hjP8rJ0HLhkOQMKOFQuN3XePeoIdNuzAYyFP64xcJ2nuxi1QIxVxKg0yYDwLctMcx8jBGBZktDy0unLNRt7jt8Qa7m6fQTU73GPe9ywqXHSO2bZkvzwFqhxt7vdbVgem7KI2Igx6p6kpTtF/4kiE4mwiBJ1g2+lUyq9qDO6L/zeLvmLPHCMLT2g/+liz3v9618/Bytb5I/TPwZ31YlMZcJDVHNJtfLX0meTf638kSntRvaQ9jQm7t27d4ikFKUd+dLGok3KtGTc8jf6IFGetela5FnZOFOizxGwIvtkDidWfLXwcLvkruu9g41kZh+oHyJjdLVoQnpX+5ZEH+3bWDrpKC25YyPZL+mxj33sMBaJLEDAHbrOBITr+rkJ3nL/R/IqL/YlgAXoxr7izNLz430lORv0Zhx6ZWDTAm49QyTysv0hW/Qe+w5vlI+8e5EQonfZuJ6diaxaXeb+2rTl2Do1waJc6NznPvdwbNV7tW3f0meHgmx84Y8xytgCdBTBUEPGAfUGyI4n3oxH2t51hP/aM8Brmf8642PNuEvO7nrXux6ie7WDfkBXi5Qb7ztXlsu4L72yn1wENUgDeHLedfbQdslf9jYDONtXkDwaS4DZZCiTxq3yBzgst4FRJ3XJWMFOLUGzWvlr6bOeiWrlr0Wf4Af542sjbUX32PBee9F/qDZdizwPGW98mVDVViY3rbyI3Zzri44tPNwuuet672DrtNh7uWuVfmodS2Pfmzg3hvBT5EEvwYPG/l6tnZzybva4bcAZ9BlxLO5xj3sMSsp/xgcngfORJTjOU8wUP4O0pIc//OHl38nfUPVHPepRk9fGJwF2QLPyOQHROGpjAtyhpGmt1zi/mv/WRWeWfOxIuR9YxnlkZFOODDtkNojRxghctdH9cEP/6hyY4EBkvaY/ALzM0pM9yhOwYVaIkjN4H3/88XMlF4fC20xDBnrOjBkUIfQBimvTBtzjBFG26a/y149C6bf57ygyAaiU+sqLUg5Z0mPwVw9gjygD/Y0BvWcDBDO7BmhTZs4Umup3ebMnQyjESNWH5c14L68lzdQRSAlocd8YBJRH1v7TiaLRGIfKC+wrCX8ZVUgd6GPGI37YJNV9lsiPozKcF/EhcoxB7oUtZmoZoNHBjE7pLEcUXcyxovcZwtKaTVoGnEmDwrfhz4GvGLfyR9slf5koyXYDBx4/TJzg2zOf+cxhomUz8icfvIqxy1mIs+t5LfIXGa7ps/KulT9ybVkwgxjwZ8m4Z4josscWAE0/0j8Y5V7SIa1IbUuItSHwipNPtgDE6lmTTjlr5VlatG8DOFEOEXwM+BbgrIWH2yV3Xe/tb8dEI/rHWaLrAB0AWP0CIGaCNWD9/rv2T8rYu9SkQhwA44Ll8rYFCTDJ8PYyKPo8RB+TmSzfzF5L9OIf/uEfDrKetAx50cyWKEZPZXmw8cDeUMrG/jVJpAw3uclNZkDmRdSi99iFdKi9gYy/HD9jIL4g8iniCdXqspa0+CfKTV8HdpTbiRhrM95GnoeCHPhapfda2r6lz3o8HkWnnHDCCYMuK8u26DeALs51VqqUaekzeoft/ZSnPGUAYYzVezcAqzGtMz7WjLuWDmsPbWPs5+sAmsk+O4YNIB+2wxRlHBpPViWt8/Jn2wGhtkP+9mzYVUhfSFSN/56pPwuO8NzNyJ+8RAzTBfihXuW42yJ/LX1WPVrkr0WfmKjWfnxBuoc/bXsQYzcdBKzVZrXpWuRZvYyzJsOAynQ1maulFh52vbd/zNsuvddi75Xtu0o/tY6l9Aw9Ra7YjSF2p3MmAGJ7t9jJyWezx/3e3mZzmbg/DavSFJNoD7N5OhYmc84yoLtdZ2dUlEtzJrLd9CmzFp5TKkqNhJRtTAZClDSt9RrnV/5ntNmzJ5/73ve+sz/90z+dh7syTMazqu4XocJ5wUdLskIECI2jZnK9HzsHajgQWa/pD/KjxMhqHGcOCAKwRLkx3MgrAi695jWvGRxrjgoCFMegbUkrgjXLVBgFcYSAS+kP8vd/TIyJ9GfXypBg/3O/wRpohhgmQJM4DSYBwi/Xx3k4F8cqZZM+G5cDoABMtRQw0MwLh64kzpy8lVGbMC45cYC5cVrACR2iLTiK9Jy0JjMC1seQLp/BGM/+P+7FC6T9GOn47Pme5wUJZogNgPZFCVg21RblM3J9XGZpApz5Td7QVsufPOP44Qmwxz5nnNHIOTCSPG9G/rRXyeMS9FWGWvmTNjJY02db5I9jHrl97nOfO3c0tWdA5gDhiRo33ouuIVNk0FIukTZ4xx6oTdciz3gAnDM7SW5sq9BKLTyU91bLXdd7B1ssG3nrf6KpyA49UkadsCHH5EUiiXB2zPLgvDApE5JABHItckyaLM8UgUbmEVAYmZDwW1+XNlHFzgHDkH4ckN3EZgA9uh2IgTiWpc07nCy+WvUeEAGYjUQZZ/wEmNFVoVpdJn1L2kTPAWzKemW5lvwABWNapfda2r61zwJMtDtgocXXyAsj6BZAbklAUZNIyFjLBiCrxo3IQdKvOz
6uGnflr15sDX3AxJkykIUAwcargBQpT3mMLaSvTRFdjiKn2yF/iVBUdu3DvuCrmWxRfpM0gC+0rvwZf/Td2KPqVdoaLfIXXpT3D4Xb+JqyVWrlr1WfhG8mOsk24gfyu7WnPodq0rXIszyByuxgbXb8xkS5Ywu18nA75K7rvf0t1mLvlW28Sj+1jKXkT9/0YS+yaWFHkWV6bt/GBGmCFVrs5LLMm/m9bRFnqZROZMlPIjEodZvFG/AsFTHQHGnKQBFFWpYnzhlnAG11vaYGMjxjpJk5nXKElINStKa3XK5pRgYx3KIohxP9q3OggQMt/UG2HH4Ar6gjzgEivwzHkP5vrxPybuBLf2JYMjoAQkLYUUtafcWyCU4U5Qx8Vh7KV191Xb8OeJXyOALrXedwm7X3Ef326Ec/ergnA7o0oqVKYmzRYdKnLq7HkC/T5lz4ytmSN2dtvNS6vG/824x39MV4XxlpMyM+NtZdE4ET59F/zt8jH/lIPwenb8/GTK+6+GT5S3TfkOjAV8CvnAOMMWpFFsjDEqgsCzDA0UnyZLDFsYwOTR7jI2d5EZWOWPi51fLn2cYpziAHNGCoKGVGKdBTO4iiAyCtK3+i/LQjXQ1sJr+W2jz4wQ8ejN1a+VPe8KJmDGuRP2BO8tdPSooskhOf9F8yMXYkRJOG8jKMVemyJLlGns2qi3RFANs4eXlmzbGFh/Lbarnreu9gK2lPRI7Gujf6I2BF7qKn7T1VkjfnifyhgwEDpU4zZmWpvckAUWXGL+nlQ3+FRFKLng2RYfYXAI7TFd3rurKPy5xxiA4k91PUqvfkIarT1iQBPeQxtqlbdFlLWoAmntHJd7rTnYb+QGf5n/pO1WmZ3tNvW9q+pc+yMYxRnmESoJbIW3RbuY1M7rfXF2Krj5etGi9L2SDP64yPNeMuewYpr2eaHDPuRoe7RqbprSky4bWMMraE59Jutfzph5bu6W9sFnTyxiShLX/ufe97D/0YiCmqaV35kxdAkV1iL07jrjHdZIvxvEX+puR7KPTG19hWaZG/UmZW6RM6Tdtol4D6KYOtEbI9AlmoSWdCGdXIs/zykg1L19l/rdTCw+S91XLX9d5+zkZXkKVV9l5pY63STy1jaZnW5OsjHvGIORgLTLcs0zgjaMhzW+zkyM9mj9sGnBkgdGhKL6CZwnIyOR8MjjjZm63EZu/n+DF+gjqX+WUJVTr3VtaL4JUz44SVM71oYCvLxSBkuGW5JiePAHHyppbrlPf2350DyzjQ0h+Sj727ymVnIspKogDLWfDymkGLUWEQFrVk8G1JKxpM3+HM0Dn6LCMQKMUwAupM9aksZWGEibIym8kZAwiUuoAB6jNFIhg4CZ7POfNfXUuKU4OvQo2zj4eZ6+xDmBl7Rq9z+jADr6QsOzOTmaiw8npmdab6/1RUG9Adz1K+Mq9FvxOZUV4PcBZ9nn2IYoCWaWt+Z7yYWuaTCC089wlttfyNHfA8x4bbxgC6lnHL0F5X/tImomhE+tlrRx8wy2bcDK2SP+lq+2yr/KUNyXa5dCZlyxHYmtnsKflLOsfadC3yHOOd7FgWkKUBkSGAhX6lvcpIgLJctTws79lquet6b7/eS7vRw4vkLgZz2oMOHhMbKyAOmdD25E8fDmiWexjl9KF86WHLjUPjyP/YX66zZTPZ4H+WS/o9pjLd+No6ek99jCN5WYw6qVtJLbqsJa2IH5G4ot1MVGkrbQD0MA4Dm/SpMS3Te6JYW9q+ts/KM8vHRCIGHCjLRu/SXcpQji30Bv3nXDkZmHszITTmu+tTIOk642PNuAuoEdFkXDKOtFIiuNR1irQpKm2prZY/Ns/Y7vFME5/xG6Pb15U/MuPDzrTE+H73u9/Q501eGR9a5K+2z7KxWuSv1BOr9Enstyn5w7tQ9ttbla5Fnm3Pwl7W7/Ei9iwZR2xx59irWX2S8uRYy8OyT2613HW9t1/vtdh7ZXuu0k+JgtTmq8bScsUcH65sd88kv8ZocsoeD9XYyUm72eO2AWfQaoNpAKeyoAZWxkYUcXntSPw2WHL64pSVZQhwlsFiK+sFRMysSvnMmt/KDCTjqAPQzJ6gsTFYk1dP0zlQcqClP+Q+hmjZf0SLlPsTMuT0JQ5JBsrcWw7kjLaWtMmD0+PjGe43sDoGbKJwPZvRRS+VClcewI+rXOUqwz3qUm4kK6JoSo+5L06A68pu4BmDWjHEpI1R4l6RDWNSRpG4DMXSgHR+z8ZsORpvSj2c3Piio4ANY4fS9XIWx39OjY2kkbKLEBTZgy8MHQDLFC3L27Ih1+3pQ7cb8DgOIrWEcjPsx5EYU88IT8O3Mk0G9rGTvJXy53kBT6ccH864epYORo38ydcyE7q6NA6cJ6/qrX4iA8vrNfJX22db5S9RCPossGARARkS1TklI+V9tela5DnjNOBb/xkTpyGOQ7m3apmuloflPVspd13vHdR7+jcdAuQYR1GE/2OdjH9jci7nOdpZ2jjWH+5jOwHOEF1VRg+lHwwXN77ISgx4uiIAkXzt97eIpqInk3Ydvcf+EykdMgHF2UiUrPMtuqwlrbyNq3/+538+jC/6XvSlyFkUh6pW7wGtWtq+ts/ikzEUJbp8+FN85QUtosrKaAr2NTp5YzJjLAfOp93KCCPn0VgXrjs+jvORd8b0yLT9RjN5RV9rG2W2HYbtYFZR5N24hlfjSPrwL20qv62WP3UyBpKj8vmeFTsx5XCuVv7ItWgaYBmZKclYC5zP5GiL/KXtV9kqrfLXok/i+y3yp+k/thjfFa1KlzrVyHMmwcjM1LhrXHbe0mFbG0xRnreKh+W9Wy13Xe/Nhj1Do99q7L2yPVbpp+gW9+QZuX88lpJVfZBMjdO6R8AD/Ii8lEEKNXZynrnZ47YBZxQfJV4i5ylszmWQzfkjddQQpWFdlkMDoRg8O6leZrtttG2pD0VC4BYh52Wd+u/OgWUcaOkP8gFEBRDRTzjnZp8BM5ld8AZHmwIzSC1FI6uhLHXw37Nb0rpH6K4ymP0ujSJ7LzEaKGH9VqSXvak8WxlK47g0xgwajAyGm/N+jwFpG+eLEgvAwVEx82fWOvu2KZtyJXrNbCbngrE3JvcCvZQN8FZGO0jLQIkDuMjxZySbbY4hVT5jfE4kHjLw2NQfj0LZOyjPy3lHyygYnyGGrjoi5dbGAc0seS0duCnDKvmUx+hafGPQxOCTBn9RjC2/t1r+gIbZu+i44447pL6MSWVCecFMrfy5h0OJr2SrBEZdi0GrTVrlr7bPKnOL/Ok3ZEo7Z08zZUWcYf1ef/FGQmktzU2k2P5U+78thQHWMqDJSU26Fnk2AZV+Vj7XzDcDDE+VE0C8iGp5mPu3Wu663juo9+jxGONjuQMsc3THcqyd2UGlzkl0L70KiCX/gBBGt/Sl3ssyFcY6Jx1A4D79lbyW/dW9AS7o9ey7ZLwwJpQ6y/32HzPejCdVIkuOrXrPPfb19Mw4EPqAc3QvatFlLWnlLdpkz8Zkjigstqj+j5QhExwZC2v1nvtb2r62z+LPWF48C6XdtTcwdgzYkClUv
gBhOHHgSyCAaHv6wPhQArp5Y2DSrzs+rhp32Q8BzcolyJ6b+vk9BYY4j8h9HFb9y2RQCCCLXC/H3q2UP/1E9JejvQSzl2rKkCimRMa1yB9Qke1FHrPnW/JNPw4w1yJ/tX22Vf5iW9Tok8h1bJNS95gEZXcB4ixvRKvSsaNr5dnERinv4an+z06i84y5iyY/pK/lYfJ23Eq563rvoN5rsffK9liln1rGUvmSV+OIfMfL4zPWmuRvtZPLMm/m9/4pmM3ksOBe6B+jQxSKtw+FKC+DByo7EwPH7O2U0Z17t+LIAfUczkDI8ghlZUzFMXPNYOGcgTR7EbXWK8/YjmNenW7ApPw525mp2I7n9Tx/MDjQ0h9w5JhjjhmcEIaHWaUsNxJiniiPAEzAgb17984ZyeBL5JMBVl9rSSsj8k+ZlsAMpZs9j9JPONdxhDipJVluwmBzPXopTjYgKeCQezhRNqS0x1WM7CzhUN8su5Q2dePMMfI5X97ENv5kHwqGqWveAFcSBwUxbvPM4UTxlU2L6dgyvH8qhDn1Ud/SeaTzYoBzDsek7QIcueatTfimXICOOBmuJbLIbzyJ8S39MgLMxfkN/5JHJjIiY85vtfxpf3xBNgouKS8FYBDGka6VP/kAbxCZYhSH7NeQiE0gFGqRv9o+2yp/cRa1eTbITpktjyTvgG99P1Ga0uYFIdL6HweSQ1CbrkWeX7Cx/9S4T/kffquH/zHSU4fyWMvD3LPVctf13kG9F16wC0tbjSzR3UDYKVsx++JpIw5iJnQCWHGc6SU6SPuF6LqkTUSNPi5SFtFDAfL8V4boPdGi5JoD6Vxe/CIdki/nTKSTtIuoVe+JjqPr6Sp7dnkJgd/OJXKuRZe1pFUHzrGPLQ5C6m+5IFKf6LD0wxq919L2tX0WcDGlH5yLrrdtiv/l+GqcjJ7+9Kc/nWoecgTkkBV1t89riKxm4+qcW3d8XDXusndCAYD8Z2/lpRH5n3RTx/BeNHwmckqbLWOee7da/rRD+gdwO8/3LJuWh3fxxVrkz4QN0gf1j5Agjuz9mjThQY3uqe2zrfLXok+AHeyvsfypI7lB+l9tuhZ5ts3EVL/K2O2ZrscGHwoz+qrlYW7barnreu+g3mux99Iejqv0U8tYKr/ypT7lKglbEqT/ZlImY0yNnybvraBtizjTUaHRnEtKGCDlHCYYiMxaRAGqiFfbA39EdiRKZSsqOM6Dc82RVJbMylE6BgRK1SxG3lSZGQ7KgeJDrfUaP38r/5tBVZ4I0hiZ3cpn9bx+cDjQ0h/07QAtefsZYxY4zbDhyDzsYQ8b9o4wiAJQ3GMmTPiu2VAGEvDGXmPIDH5tWulFYDFQ9V/7RDG+EmniGfovYphRyJS8tPe///2HCADLNxm5yGbtlDxiSNt4mcMkX84XAzUzHoybKG9lNlMpNP3YY48dHCmGnr7puXmz2pDxGl+JGAugNJUFA0R0BtCCo8fYVJfcW96jvHikvPe4xz0GUI8RmbpJG56U95W8kDZgT174wOi05IURh2cB0/DYOcRJ1ebhc5l/fmszzimQUhuoN71t7AB+xLjdDvkji5buMt7Jp5fZmNUt5UR9U/5a+VM3gKjZUnx74AMfOMiUyZmAABx14yZqkb+WPjtkXvkF7BVRYRzXrqKbtQXZMV6T7Wy0rc+SKyCB/Vz0cbwkV9oeYHHiiScOfKtNVyvPldVZmqyFh9shd3jS9d7+JjIRIWIYcKG/0G1kjSzRAcCB8eSCO8mlZWnAL3ovMmpsCtlTEyBOn/zJn/zJAO6YUKXb9GlvpguZ0BAt5ZrNyY0B0evSxFYFttj0HkgmLy+p0XeMjQE1vBzHGLGMavUevRQ7VWQQ/iBvHhMt5JooMHqrRZe1pBURBHxUP5FCHBg60mQYPpZLs1r0Xkvbt/TZZXxfdC0TNfQYe3sRkS+2Dt1n3DOJTb5K8Me9646Pq8ZdYAfgFlhsMku76y/6gHMh4wy5XESWGQts0KbkXdr0jbLPbZf8eYPmvn37hn5r3NXfEo2vzHRkIqxa5E/fBJDRB3e7292GN3KqD/5kbMq+vC3yp0y1fVbaWmrVJyeccMJg85lQuNe97jX41+rG1iIHmZitTVcrz7X1WZWulofbJXdd7+1voRZ7r2zTVfpJ2tqxVFrtwU/L+G881yfoIgTrgH2gFjt5uGELvk63IYh/Kh+DAqNjK4lTySkg7I4BeBiHXldvMAoxynVyBgBDfYoY78rIcF8GEiUU3wzCeIYZgMfwEZlVvgnCgMbIMfBzJn0QRU3pltRSr/K+/N6zET1ipl79S/Aw16eOypOIDcBEiHMrP8rRrHtmywzalCgDZtneNMmn5chA2mpZaXl+T7s5DtS0X01/0I8saWN46CeZJVA6zj/jneFI2THiRdL4DVBjwNAHnCBAlyVx5fLElrT6ORCHU8XIAkrIlzHpdeZlKDlgwjUAvrK5j2Gpn7z2ta8dQLhwl2Fl5ozeob/U11F98RBgUObNaZGW0aluyqI/MgrwZxlJTzdJX/Ix9xhE5Kv8mc3LtfJo9swLQxjI+OAjT+1Bv9kPxeQEQIthJZ1yuqZudDOjSVm0EVBIHaOfARn0ufIaLOkwBmf0sQkGz+Nw4K/8lRto4i3BHAt6XrpEgZTlz28DpTFJNFPGDnw34cIh017bKX+iE9VfXUs54ajhT6LClLdF/oxdZMdsNrlTBzqc/jbbVzrtrfJX02fD3/FxmfwZ78hIOY5rQ20AhChlW1rtrZ87Gre0m76tL8b5rE1XK8/j+uQ/8A5/yXuih3Jt6ljDw+2Uu673DrYKXUNP09GxycgSGWLrZG8n/cgbwPRNTjVd5h4yaimHJUrGghDnwIQKXU2HSauP00neSFmmpbfIqomW6CF9wbOA5+USfrare+ms5EuH6Nt09tiOTHnKY43ek97kjLxT5uRBz5N51y50oQsN+2G26LKWtHSZ57MztQEdgo9sa7reWBJq0XvuqW17aWv6rHSLyNhKrvgD2rskkdtkkMwti5rJag9pOXtkSp50unE2tj5+tYyPteOuMpvII6dpC+2BALYZa8hwIn6Hi6MvYz2dK7IjMqxNjUUm/+Ksbpf86dP6EJnyXPYL/Y1n9nYNAKTYLfKH/4Ij5Kt9fOStjfDN8kR1DLXIX22fTd7j4yL5a9EnbAq8Y1fQT8aoTDBYmpqxrzZdrTyP65L/JjzJvYmCRDHl2tSxlofbJXdd7x3Uey32Xot+ahlLyQjwzEoFshxbUj82lr7iFa+Yi1GrnTy/ccUPfcVzp+g0G87tsCaF4JbLe6YSb+YcQ5oC5GRQgjuVKBtLAww0FOqqpY+7pV5byW/G4nbKylaWted1OAda2q+1Pxz+tMPPMIgAu/qYAb0En8apW9LKj+HqCJRZNrvPYNJ3ATve+BInbPz8/JfnngMAtbyX6QUGJ+BIGmkBDEeC8MKzl80wM6QBIpw74EJNWcmEAY0TkEiHcf20W8AT+Zb7bozTrvoPpOeIqEcM91X3LLveIlPkJJF45ESdF1GL/MkDgIz3jFR10waLqEX+tqPP
plxACfzgFPssK3PazeRVALPkUx5r09XIc5nvZn5vBw9b5K4lbYvc7Ua9py04YcAz+pTDvIw4jiYmgWhZ+rUoPaOcnmKXlo7zVHr5GgMY1Kv0EIObHen5dGSNXh0/M/1iq/Reiy5rSat98BAvAZIlYDauk/8teq+l7bejz06Vf9U5Y5XJq2W+zjrjo/qtGne1m8l/epr8r7JrltUFsMROM1m1aJxfdv/4WotMuRcf9QFA2jKZapU/fij7TH9fZfO0yJ8yb3WflSdq0ScAU3YFPVVOAuzP6eB3bboaeT6Y6+Z/bTUPW+SuJW2r3O1Gvddi79Xop0hHy1hqAsxYmomx5DF1bLGTp+4vz/GT6dIpOtWAs6mH93O7kwMtwMvurOHRXerefkd3+/badQ50DnQOdA50DnQOdA50DnQOdA50DnQOtHFgGXB22raseurOgc6BzoHOgc6BzoHOgc6BzoHOgc6BzoHOgc6BzoHOgc6BHwwOdODsB6Odey07BzoHOgc6BzoHOgc6BzoHOgc6BzoHOgc6BzoHOgc6Bxo50IGzRob15J0DnQOdA50DnQOdA50DnQOdA50DnQOdA50DnQOdA50DPxgc6MDZD0Y791p2DnQOdA50DnQOdA50DnQOdA50DnQOdA50DnQOdA50DjRyoANnjQzryTsHOgc6BzoHOgc6BzoHOgc6BzoHOgc6BzoHOgc6BzoHfjA40IGzH4x27rXsHOgc6BzoHOgc6BzoHOgc6BzoHOgc6BzoHOgc6BzoHGjkQAfOGhnWk3cOdA50DnQOdA50DnQOdA50DnQOdA50DnQOdA50DnQO/GBwoANnPxjt3GvZOdA50DnQOdA50DnQOdA50DnQOdA50DnQOdA50DnQOdDIgdM3pu/JOwc6B35AOHD6059+dqUrXWn2Mz/zM7Pvfe97s09+8pOzv/u7v5v913/919oc+J3f+Z3ZGc5whsn7P/OZz8w+9alPza+1pHXTRS960dkv//Ivz370R3909k//9E9DWb/73e/O8yt//MZv/MaQ9kxnOtPs5JNPnn34wx+efelLXyqTrPX7J37iJ2ZXvOIVZ+c4xzlm//7v/z6UQ96r6Ed+5EdmN73pTWf/+Z//OXvJS14yT65O5zznOef/p36456//+q+nLs3P/fqv//pMnfHl3e9+9/x87Y8/+IM/mKnbq1/96tlXv/rV2ts2ne6Sl7zk7IIXvOAMfz772c8ObfqNb3xj7XxbZOoiF7nI7Kd+6qcmn/Vv//Zvs7/927895FqL/O3Zs2embmc961lnX/7yl2d///d/P/SvQzJc48+6fXaR/K1RhFPlllp5vvSlLz270IUuNHvHO95Rzd91ebis4i1y15LWM1vkbrfovWW8HF/7oR/6odktb3nL2f/8z//MnvOc54wvL/zfolvw+AIXuMDQX7/5zW/OjFXvfe97D8v7F3/xF2eXv/zlZz/5kz85jJlvetObZh/5yEcOS5cT644XuX98NFaT9yn6v//7v9lf/uVfzi+d8YxnnF32sped/x//MNZ//etfn5/+8R//8dlv//Zvz85znvPM6GD675//+Z/n18sf+hA+/OzP/uxwmu3woQ99aPYf//EfZbK1fp/vfOcbdOdZznKW2b/8y7/MTjrppIXlKB+wSBco5w//8A+XSQ/7/YUvfGH2D//wD4edL0+QQbJ4/PHHD21fXlv1+xd+4RdmV7jCFWZf/OIXZ2984xtXJd+y61stfy0y1ZJWhU93utPN6Maf+7mfm5Fl8mTc/P73v38YP3ai/Cn3la985cHGPfHEEw8r8047cdWrXnXo629961sHG3mqfOvaDV3upri5/Ny6em8r5G4d/dRiR9Hll7vc5YZxUz//yle+MnvPe94z+9d//dfDmHKZy1xmsHmMR/y7pz3tabNvfetbh6XbzhMdONtO7va8Owd2KQcMiPe6170GQzBVoLgptyc+8YlrKSqKlOGwiH76p396Dpy1pGWs3v72tz8EYDrvec87+73f+73ZW97yltm73vWu+SMp5bvc5S6DA5STnA0OwZvf/OaVAFTumToaXPbt2zc7zWlOM1yW76/8yq/MLnWpSw3KnbG3iNwXgLJMw6gHriwjoOYq4IxDZwBF6wBnAEnOxdnOdrZTDTg79thjDwGuOGEcvec+97lzOVnGl/G1Fply7+///u/PAKtT9J3vfGcOnLXIn7xucIMbzH71V391ni1H9GIXu9hQpxbHf57BgR+b6bOL5G/8jJ3yv0ae8fUqV7nK0B/pA877KtoMDxfl3SJ3LWlb5G636b1FvJw6r83iTE9dnzpXq1u0xx/90R/NGOmhc5/73AOYv3fv3tmTnvSk+VjIGbz5zW8+O+1pDy7kWKbzNzNepCzjo/Hi/Oc///j0/P/b3/722f/+7/8O/wFsy4CzcnIA4HqNa1xjXje62MTCt7/97dmjHvWoAbTMQ4CGd7zjHWd4F/r5n//5YULpWc961uxzn/tcTjcf9WeOU0gfV7Z3vvOdh4CCuZ7jMl2AZxmzk358BGitAs7YR/IxThqTW8jYT4bJ2akFnG2H/LXIVEtatuExxxwzTKCFr8aA613veoNtpX1CO1H+9IVb3OIWg2zQ27sBOGO7kkeyefLG5PIUrWM3dLmb4uTyc+vqva2Su1b91GJHmdCInRYusNfYxHwVflnIBJa0IUEYWzEZk/xqjwdH+No7errOgc6Bo54Dop8M8ACCl73sZbNXvepVQ6SZWcLb3e52a9WfYYnMEH7+858/7POJT3xinm9L2hve8IZz0OyDH/zg7NnPfvYQDcBpAdQlL5nf+MY3HoAo10SCqVtmzQFtDPx1iLMkb4bzKaecMnv+858/+6u/+qthZtSgw8BbRIA1aaboH//xHwdHg7NRfsyAh8zOHG1kMBXtRVaAny984QuHWSj8jQHaWufIQY38yZusoylZFbkXapE/Tl5AMxF0L33pS2cf/ehHBznhCHBO16V1++wy+Vu3LEf6Pv1RBMgqh3hcznV5OM6n/N8idy1pW+Rut+m9kn9b/btFt9A1nEfjxac//elhvND36RCg+q1udat58UwukDtpjSvHHXfcEOE7T1D82Mx4UWRz2M9znetcwzmg11hvGT+UO5SxTgT5OK3/icBW/2te85pD3QBlL3/5y2cvfvGLh4hqPBjbA7e5zW0G0Oy///u/B0DrhBNOGNICb4EfrX0y5RV5re2QaCOTDAGzRCLh/xSt0gXatRxb87uMtpPmaKLtkr9amcLLlrR0OWcc6CtqUttrE/UgU+UE106TP3UFqK+KapRuN9E6dkOXu/YWXlfvedKRkrtaO0rdApoBwPhMgDIRzcYJAQ2xidQHcIaMQyZh/uIv/mIYb4eTp+LXwSmhU/Gh/VGdA50DO5cDHPgYNU94whPmiL6IjXve856DI2EGt5zlq6lNFCCgR3jtMqpNa0bFEhpUzjpbSgP0s/yB0/iQhzxkUMS/9Eu/NKQFsFl2iCyluc997jMYX5e4xCWGJYHDhYYvgIfZD6HDT37yk4c7P/7xjw+8c+3CF77w7BW
veMVhSl6UwtWudrWFT7LUZ4pELQHbOD3PeMYzppLs2nPaFIiJ3vCGN8ze9773Db8/9rGPDe0E0NKui3gzJJ74qpUpt5JvAzdDfZmstsiffBMtwSkz8CPAGaOaHHP+Xve61w3nW77W7bOr5K+lDDsp7Y1udKM58FlbrnV5uCr/FrmrTdsid+R4t+m9VTxd93qLbsG3PXv2DI96//vfP++XxgtRSnSQ6BZ5ijBKZDCjf9nyTBmuO14MhVny9WM/9mPD1Re84AVz4GtRcpFzSN3KWf1xemXFC3V85CMfOY+mAiDe9773nQHr6DXRAaKmAmKY7AjgRMfd+973HoAOkUYBvMbPWvafM6YcxvZXvvKVQ1JbOwBV9F1bJJQTGslrlS5YFOXL1kEmqcolrsl3Nx+3S/5qZQrvatNylm2/gdg6mTTU9kBbdhAb75nPfOaOlD/lF2V1NNG6dkOXu+XLvadkZF29d6TkrsWOAozR6SZ0HvGIR8z9IytoHvCABwzBGyZLMo5ED9DzJp6PFHXg7Ehxvj+3c2CHcuDiF7/4ULKvfe1rc9DMCevInbN/lyWbDGMz8mc+85mHKCtRaSFr1m92s5sNSlFYOkM5UVUxfJJ26libNssPzfKPjVuzFxwcM5WcGrMU9sTg7IxBF7PrjBuz62P6rd/6rRmeMBY4D9bdAzfKSK+AdxyEkjglgDGz7SKNxg5VZuCFwu854KSV90/9Bmomaul5z3vezMx+SZxvs4F4o7wAT4PTFAFqDF74A/gDFKmXtizrl3sNXLe+9a3n0WD4xomx709JeGyPDLKSwc6eb/bLEEW3jPDSzKTBFP9C2hh/gZu/9mu/NrThdsmfwR8p8zJqkT8OtbqTs/FSDW2k3mR1TDXy19Jny/xr5U8fv851rjO0uzJqb33pbW97W5nd8NssIkdaPTnR+p39kuwzVlJtuhZ5lj/HnFyTffv/TfXpshz53cLD7ZK7rvfSGvuPdN3v/u7vDkvE6QR9iBwtAl7IHL1jSTm5Y1ybIEmkVYtuocPoQ7p73F9N0gDOkMhYz5QeGfssBQXwLFp2VztekGFjGKJn7ekVMrlAT5mpB/54rrLSk4kWS9qpY/rFqiXM+ikydilPCE9FiZsUMh4Bzlw3wWG8ibMjvbYwyWOMMSaUVKNbtH2ASeN6Sf7jAwBP/bVZaF1dYMzGT/XJBEfypP8scZW3NGyiZcDj3r17h7T4LYrfeH3yxnivPcdjt2fQncY4IKjxxyTf1FYMrWN3yu9YK3/aMJG7dHgmseShzUX6kTfbJ5DDWplyf23a2EX6/th2xBfAaPTmOvJHtkRU0jV4bsJVxKX2sW8iWlf+5Heta11ryEPZU87hxOirZpzPLdpP9Lol04B7csL+1fdKqk33m7/5m0M0j77Ozh/v31rm6Xet3TC+r8vdQY7sRrlT+lX6qcWOovuNI8ZKeqQkusf+xvQ+ubz+9a8/O/vZzz4koftMbtuGZ+xTlXls1+8Ox+2C4wAAQABJREFUnG0XZ3u+nQO7lAPZEH1spKgOg4LhG2Pafw6Eeyi6KDHh8hQeY1nkFYrSoyhve9vbDoaTQdr1sTFcm5aDhEqDfjix8cWgCGDEKGKAWkIzJvtnBAABBpSUGU3nKHZGs7zufOc7D+CSJSMo4FC53NR596gj0I0BGP64dvWrX304z+gBQDBGVhHHwAwUwrcsM819DBOAZUn2ChgPSq7b8B8AVRLDnjGmfo973OMO2RxaOuAJnsrPEahxt7vdbVgem7LYuP26173unPdJy7E0MyyKbGpj7ZQjUTf4Foc31xiGnIrwe7vkL/LAGRDuTt4Z0Z73+te/fu7wtMgfw38M7qoXmQJEo3EUZ638tfTZ4UEbX7XyR6a0G9lD2pMO4BBymkWlRr60sf2NyrRk30tG6INEedama5FnZRONyMBCHB9RKLXUwsPtkruu9w62FmAIqB8iY3Q1R5ne1b4l0Uf79u2b6ydyZ9bdnl+Pfexjh7GoRbeYIHnQgx5UPmL+O8C6E0AqOjNjjXGHLOX//KbiR/TXqvGCrrTMX0QqPWRWHgU08dtkBMreZhz/a1/72sOYph96+Qi9U260TM9n6Zj9ZAB/+o5IWE5zOQ4CuxD9Nab084Ba9JdotzHR2cnnAx/4wPxyrW4Jf8kA0KkkSyudx2/ARMahdXUBAM7EE7LMNOBJnmk/1YB/nityKnKXNDne/e53H8b4/HfEe/tHAXkf+tCHlpeGtNnDR970pH4gYtQWFKF1xu7c61grf+wVQCS+2vNTpIdJE4AQudT++G+cbJGplrTASWQSZEyRv9h6rfLHcWfrkBWE5/oFUE9/snrAy5DWkT/5mWRURhN+ViMsAs5qx3l5Aib075LYJ+xCAHpesFWbThSY/onUn2wvWwVRazeU5cvvLnf7ObEb5U7Jjamr9FOLHbUo2tezjPVI/6MDkq9zJmR96N7Sp3Lt1KAOnJ0aXO7P6BzYRRyIgQu4GFOMl6QBeDECGZtAFcCGCAGGtEH4+OMPvmEqxonZrRCFaObAIP/4xz9+bqTWpg24xxAzKKd88jcjF2LojYmzAFRKXeRVzrSZTWXoqAcH5m/+5m/mDsyeDRDMTCIHg2Ft9gjZW2ZMebOnwTLEEGOsyNuMdnktaaaOQEpAi/vGIKA8ABzIDI5oNM4EhwvYVxL+2twZqQMwCMiJHwxk99kfrnzDp7TOi/jgrBlE73CHOwwDGLDCJtGIoS8dJ4wTJXLNzBFjz2Bntn4ZcCYNCt+GPwe+Etkmf7Rd8hdgmFNZEkcG3ywLIS+bkT/54FUcAw5vGbXZIn+R4Zo+qz618keuLQtm/HOcn/70pw9AMMfdHluMbP1I/+C8cCqlFYVhWY02BF4BQMgWgFg9a9IpZ608S4v2bQAnyiGKhrPSApy18HC75K7rvf3tqP+Z2Uaiy+g6IDoAVr8AiAFgApLsv2v//pkvetGLhkmFgDLGBc6hvbladEvyHB/JV/aspONM2lhWwtkUNQJIs/fKMqodL+Rtv0wOOP2ur4l+0ycR/ZNIzj0bYxJSx8z6+28sxgvRb9G7ARBdL/sIwMBYQTeJKEd0LrnMZMJw8sBXxhU8mSLlpmty3fOBLKhFt0RHjidS8kzn6R11jUysqwu0rfGFLi1BPs8yIUTnAZOMbYkULicWUiY8NEYiEeom7rQN+fVGVzxlO42jJ9kwT33qUwd9S95NqgDZ6FzA1bpjd8rlWCt/0rLhLLPVhiJtvRAjgJCIOddRi0y1pAX4AqozqTA87MBXKef6eWQraZbJnzRZxsuG48STHZNhJnbld5Ob3GSYGFpH/vQr9oJoS/qLDpqilnGePZIINoCByQMyRf70M7L78Ic/fFabjvwFNPPiEOOaPqT+UzZzrd0wVU/nutzt58xuk7uyPVfppxY7qsy3/G0cTUQqnwwYfP/733/2h3/4h0OfEmlm7+MjRfu9vSP19P7czoHOgR3HAQYoMuCPKYZJ0rhuJpQhybDiEGcgZtAHWABABewALr3mNa8ZHOssbaQk4yS3pOWkZGmGKLfMogOXGC
XIzw24xfPC67u3oP/xPMNK5Gy2VM7NIdq/xbm9Zg0t3VDO6LRnfpyxE/TUWPqvos0o4bdTxZxpkXogAx8CHZYvuUZgZtUt0N7D5a4z7GmIimQXr5zLCj8IhKPfzwwy4fBSgdto6W1WBNM22jNFEuPM+0mUQhOZZERoBF+ONJXWtIa+iZirU2HUt2x7333rsYwZgrA4GjJmswaAMg+OEAqDfCVWSWwO+BYLb+RqWsOjjqOHjcb1fEQwG+KNocUxEybZ0WY49yIYPkkUceaS8v7XE/J0EPjLc4L61fFcdiW07draLfXmt/EzgPPfTQ+Y6vrnF6wb/7RXX6tilDYXE9ir5zQD8SXBiGfgWH1Od+9aHTVjlzXoYIISfiaYcni1PHKHL9EID3UbrkBI3yl53XCEWOG+Oco+HOO+9cDA2KKsXHPXbe9N/4prRTKO04BRglo2WVl4GFZ8A/PsPpog+NF2PNOAL6Cw9hGClrmjPDw5pL8AzsSsQpAir8S5s54/ETi7vjbeikNaiXSvf8ijM/DqW1atCdNWzgFg9AF94l97b3aCuD2hg19cf4B/oH/zN1teVzMiz0JYcGHHOmpe/aevFzjnN1KMeIY3DCsXNAv+jz4Lm9P7/1GZ5rAW4yxHt7HmeTdkXJPAX9oUXvD394LTohHxihoRN8I+0fpT/vxiHKuQ4nskHUS2mIA8KU1jhuK/RXGbPB8ciRs9dUPf2gXxneGV/4rDG1tgyB8We9UDwU/aXsgw8+uDyWrDFFk5NMhodggGehv2QGvfWtb13GVJWmKnjbhINR+kMfkdv0hWQ5CfKRba7JxsNnKjRVKUvGMmjxnvvuu29pA1olI9CoYEagQn/qZdwz/NGsd9NveIZxiObi4L/Z6K/VOcmhOKWNW+8A0J+gLLzdc889y1hEq+5t382YTyACD3vBC14QdF5ztKxHgg3XXNjyp4I3jurIyow5TlpZjc4LrNELbfpAvjlnswjT+gQxvJv24/02nwAVWVuRGVVZO8oL0uYbLWvxB7KQ7vOSl7xkGd/oJHx8Qe7jX3CMH6LBK2cZZ/gceRg7SLHIlfa+XTy0yhfbuvO7Qn+nkLUV+tPmUVlbpb+KzDiVrlfht/Rj7UCDkj/IGXpL9Czjna5rquRouVHdMbRzyHHS3c1jY1Toru3zET/NqB5V9Vm07bgRv9/9jDlf8WBTHBh520BmAkGh7KbpQu39otA82TLLGAccKo6EN0+rnU4yB9r0LhHMeGCV9VGWwsbIiKHSPmPbb88lfCj6iVymvHOMW959ipJnEWyeJ1qRbZeVp4Bm+2kKcTsFJfXl+Hmf93lLHf6ra+0j6pldnxiAcErRpcD5AMyuXfvFVKlEFNfqzDlOi0pZz5LF4UPYuH8XcNBsoxX0wWiguKGT9LF6vWdw6T/c9n2TaWEM5ERMlNVXbWYeI6uPFirHSMsOcN6HgsEY9ZG+rP/gXPs4nkxFo4gBhkL6QOTmj/7oj65z4HGIZDpHiy/P9GxGEho6tD7toxSnPTG+GJQMbP3lPeAQvteciEsjuq9d/af4CF0aM6KiaA+9vu1tbzt/EmNbFo13YBgzHu0+5rfsCHhCQwwwkWxT4jI1UCWVstYoURelwRjRP+plMHAottM14M01/EbbKK/agrdY5JnyEKjwL/fgDXgOwx9u9AsjjfCAn22gPNpWfi1AwLGlXu2P02itPjTOyUqBhwcfdeoPtCIgkGnzpm/JFsL31B2HhTpsENCCvqGMXT5zXjGk4Vu/MxYZZJQ29MfR77dMQ/jVFvTrXs43jmSOM+U4jTYBOaCM8RnZ4XmmTuHNeMop6U9GAbrgGPEe3td/tOQ90HOgQn/kjnfDywRq9A+jyVjGa+JYUneV/kbGbNrcH7fRHzoxruAifaEPjRkGeGgbfgRFKMX6Fr3Bm7JkgkydrC3l+Ry9HNuRwcYiWQwX3iXyr0pTVbz1uPB/hP6U4yT33uRH66Ay3hgzrqF56xJWaKpSljwTWDSW0ZS+dDQdj0HIyApU6M89nIF4pX40jqMjeZ4dwttgzSH051kCGsY43Qi/CIzSn/KUeeMVH0ev2o128aQ3v/nN1+gaeAhex3mmDPpzL/rlmMq7MU7b9eu0ce0TYxaO6BGbeHneK8cRvNE3klli+YPwTu+gLzi28RKyVDaaMYkejFf8xTUgkEGX1rZARdZWZEZF1o7ygrT5EFmL35BZcIXn9jAqa+mm6jIujDl4xqvQDVlqDLMx4NzzyEJ0hi/qF/SC33PK4nuxqUZ5aJUv9u+Z/yP0d0pZW6G/iqyt0F9VZhxCf9tkbYXfsmvQGZ3aER3iS3Q1ei++AEbLVXTH0E573MS/2zLt70l3dR3vRtNd1U8zqkdVfRZk2+UzW4TexQY5BeDjyRbu63+3M4N7mZNCgc00mL7QMf5TIKVAU4oJ/ayxsKluCo/IJkZJkTklYDo6gXLPOUHJ6oHiIl1f9oxyxwbGPPxQeAnbXfg59vNN65BFJUK5CwikXbRiB1EGkwhsMvfUKxWaQYEo9TFcy5JrIfe+8Y1vvGa7b+sbUIZlQ1FmCQkRVn3WAgVGtgO4++67r9uAwnnCK1uEa18ixd/xHd+xOFuVAZQb2YEtaIO2oGXHgGd6tmyBGH2H1GdKKUcMENmnYPSgz2QbwMXP/dzPXeP46cvm/0j/pewp6BKNG2+O2tI6tvLcHCtl8ZjUK6obZ2jqao/wxalOiWVocChtgwr/ohwby4w/vK7q8N/Wjso1Qs6z4WIbEA5o2ljEeyiR24CByQjAm1sDrL2HARqFjuFwCM/UT3gF4yrGbPus6u8KTaETcsjz0QljZRNU6E8dHMjwzpnk3fCTTVChv1OM2bTLeOGgxhd9trXZOBB8wJ8ZqduADMbLlBMoa4MtuW8fmqrgLc/pj8emvwpNVcrqd2OOk4Q+p3+2QYX+1M34Z5zhJ9uChzcL/WmHYCw5kEzATfiAM2ORY35X2U11HHr+FHijj3gvPB3/2jZeK3yxIjOqsnaEFwTXN4Os1W8MOdm128YFZ5m+0Af0nhG9YISH7sMXg7/2eAr6q9BUpWxF1lbpryIzTkV/+mKU3+rDyCjJBnGYtX2b36PlRnXH1HvIcdLdVexdRLqr+GlCe8fS4w+huZF78Wj67ho8aY6ztYdflHOUjzvuuGMx9EWYn2qQ9bMYOLICdsGI4wWeGFiiGKYTBKzRxfDm5LE9N+da6/ySkcZ5R7lQRwwoTMU0FEJQZN96VohaBlkb6fecEceZclfO0uYpzBRlGU8gji7CJ97mfvcsyk/VcVatjzKkTd5Xdott3DeBdds4MkU4ZXHugpH+21XHvD4xMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTAx8FTBwDbH2f97qrzkqd9D1tuIU+LU7ThF/aIm0ntN6zgWmKoCRMADnGQ81ECauwgdMK0hkMVrRYnjNHPNFAxOJNlJnGXJYuMw4lSrgsi5D1jLYvEMaabA9FvTLw6Ban2m7XlfYKrfNjB1TCbkU5U+t737vDYxMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTA
xcEoMTMfZAHZNM7LWVLtY9sBtF6aI9TAszrwtxbf6Mlm3TNaWVFxgjTzOIE4xjsgsoC/NPZDfjz76aE4tR2t4AeuRAWuYmCamvixsvlzoviySmx1RHTng7AhmmqZ2yWxrd+hrb3/ta1+7TF9TzsK8h0KlPlltwPS5ben/ypjWe8hUOHVMmBiYGJgYmBiYGJgYmBiYGJgYmBiYGJgYmBiYGLgeAyfdVfP6x80z7yoYsGaU7DDZYBb4loH2MR/zMcvrxylmYdbP/uzPXrK5lFPegv8gjje/rTGU7DAL4wNOM6mU1tCRsbYpK+vzP//zl/JrX5x3FoTftKaJ9YxM03z605++TAu1aKtFnveFSn3WYANrDjHru1l4vwfvY8HmCRMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwPHwcDMODsOHmctKxiwCCCwqxi4fLYgPMhORhxJ2QTBrlAWK7a2lyyrNruPcw0oawFRa6f52LEVWHQ1mWrLiebLNEyLbzuqNyDL7sUvfvE1u+PlWnu00H8ca6ZsZneqtkzl92h9WTDfNNoeTE+1Ll3/gcMJEwMTAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTA8fDwMw4Ox4uZ00dBji27JRmh0EbAthyu3eK2emJw8fukdnBot0NzFTMOMU4yDZtzsC5ZnH8HmwckB0TrbFm8X8bEGSLeVs17wJTLO+6665lI4PnPve5l2wHfwiM1Jc2y8Sz2UGbeabNmf6qHXBrW/QJEwMTAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTA8fFwMw4Oy4+Z20NBt7xjncs/0w7/PiP//jldzYESDGbBABresVBJhss4D4OL2uRcSb1nziUbELAwbQNOO3sVJl7nvnMZy5OtG33uGbtt4ceemgpxrn3SZ/0Sbtu2Xp9pL7gRUUf8REfcU19jz322KWf//mfP/888sgj11yffyYGJgYmBiYGJgYmBiYGJgYmBiYGJgYmBiYGJgaOg4GZcXYcPM5aVjDAQeXDoWVjAJANAVLcf04xzrX3eq/3Wk5nKqc/t9xyy3JOFtp99923/G6/1P093/M9yyYBss5+9md/tr183W9ro9l98tu//duXe2SQvfzlL7+uXH/CZgQ2KPiAD/iAZbOB/nr1/676/vEf//Ecd1/8xV+87PApO28NZPJNmBiYGJgYmBiYGJgYmBiYGJgYmBiYGJgYmBiYGDg+BmbG2fFxOmtsMCA7CphOCFqnmP8cWbLIZJVxAP3P//zPpf/8z/90adkQ4P3e7/2W37//+7+/HPsvjrlksWXzgb5M/59TKu2w6cC2DQTaeznc2nXS2mv7/N5V34MPPrhUa1rmt3zLtyyZbqarBt77vd/70ld91Vdd+tqv/dqcmseJgYmBiYGJgYmBiYGJgYmBiYGJgYmBiYGJgYmBI2LgSck4++iP/uhlHSvOE04SU8v+/u//vvwa7/me73npG77hG5a1rH7t135t4/12cbSQu50W//qv/3pjuf6ChedlLZk2+H//93+X/uZv/mZxsHDmbAMLuD/72c9epvT95E/+5LaiyzVTEp/2tKddV44TyaLw//Zv/3bpN3/zNy/913/913kZTpKv+ZqvOf//Iz/yI5fsorgGnEG33377csnC+G9605vWij0p5/7sz/5s6XsPs7i/DQF60EfZJTKONmU+67M+aykKL3F09ff6/1u/9VuXvvqrv3rZwVNm29/+7d+uFbvmnMw0UyA5oj7lUz7lkgwwGwhsA1Msf+EXfuHSl37pl24rNnxtV31/93d/t0wR/cIv/MJlTTOZZz763SYKnI0BdPOLv/iL+XuU477jYdvDP/3TP/3Se7zHe6wW8b7tOnWVsir82I/92GXtOnzCdN8//MM/XByxaw+zE6t17vT/O9/5zkumFf/zP//zWtHSufd5n/dZ6JbD9z/+4z+WdmTK8raKNvEQ75SdZjfdb0z9zu/8zqbLy3k82DvDy66yaxUZX97tjW9846V//dd/XStyknOmRVv/kFPdLrr4ABmyL1Ro6iM/8iOXHX3XnoVX/MEf/ME1lyr0Z4OPT/zET1xwyvH/F3/xF4u8uabCPf7sO2Y30d8eTXhSbhmlZxnLNqd529veNozffXG47cUrdFcp65kVursofG8bLvtr1gH9xm/8xkV3EowaBeuvoiO7dgvA/fmf//kiB9bup7taNsI9Waf1N37jN5bA31r5Q88dm+9VeFmlrPeEE9n4lrGQFS/IuYlH32x8T/sFH21MZUfybFblncidXWCX9//93//dWAxOvvzLv3zR6fdZG/dGyV271aNBdodNvugw7YZdG194w4UKTbG/sqFYX53ZKey6Fir0Z3MvdgWdin31l3/5l8sGYey9Q2HfMbtGf4e25VT3j9KzPvncz/3cRZ9/+OGHh5uzLw43PaBCd5Wynlehu4vC9zbhcdP5ffhTRZbS+z/hEz5h2QiP3chG4/thO99IOKnjjDL+ghe84Jp1pD7wAz9wmbbHeVYx9C0Sb2F3jA/z3OQ40ylf93VftzgVTKsbdZxp6yte8YrF+ZIOITw+4zM+49K99957jRMr13M03S/OtpzbdlTWoNsGHEnWufqZn/mZpZj3bu9hdHEYrQHFO2UJiBvpOMtUTP23SfAyhuM4a50MH/7hH768HseGPt8E7vmKr/iKxZnk3R999NHzopvuc/4nfuInLn3TN33TMmWT49P6Z5vKp0LGMgUabW0Cjr7AofX97u/+7vI+lK/LZ7uSwmM7NZNT1yYMb3nLW46aDXfIeMi790cGKWG6CeA0jrNKWYaTqbetg+mDPuiDlkzCX/mVX7lmnHA23nHHHecZkNpiPH7qp37qpV/+5V++9Nu//dubmrfzvF1On/e85y19lHrRMGXg+7//+7fS1iYecuutt17T1rVGUPp2OcM468MTdpVdewZFFp7f933f90lznH3nd37nNY4r9P9pn/ZplwQNQidrbd10rkJT6uCk3rRuIgMrjrMK/alXACRT1/0nEz/u4z5ueaeK4e/eFg4Zs5vor63/Zvo9Qs/WvfyCL/iCZTziBwJhu+AQHG6qu0J3lbIVurtofG8TLtfOk4d42y5Z2977lV/5lcuYa8/ZYIej9f77729PL+XoF2RvAK9Xls7I6XZMODbf07ZRXlYty+jnuA2QpfAiCGZjphZuRr4ngJ32C6LFcYYf++wCMmCTk9C97BG0yTm7j+PsRshdPFMwOUA+cbgL5vdOq5TZdazQ32d+5mdekvywCTgrM/OjQn/09i/7si+7ZhwLnn7O53zOsgzMIeN43zG7if42vfuNPj9Cz2TYc57znMVOIaNGHWf74nAbTip0VylbobuLxPe24XLtWpU/4amjshTPfP7zn3+NvcsGMI5f97rXDft21tp96LmTOs5kh9nBkCPhkTNH2b/8y78sho9sDMJVNpJpc7tABhUHB+fRNiCgDNg2E2db+fYax4lBTnBaeF1Gz9Of/vQlI+Xbvu3bll0V2/L5zeFDWdgH4KWdgkgxI7xDjKYeiuDJnOmBsN/kOGuNsv
6+J/s/ox7+toGMi2/91m+9rggHxyhwqrawVl973W/KXV/uVa96VV/suv+9Yp0Cd955Z36eHw+pL5VY3+2BBx5Yss6MHc4LQv6xs2mwrZMu5Y9x3Hc8bHs2YwNo8z/8wz9cV7R1clfKEkxxmv3Jn/zJEh2lEHFYcdThMclC9F6EPyNLhsFf/dVfLZk/eIcpu8qiiyqYTmvKrDEsQsuxT+Hk+MIfbrvttktveMMbVqvdxkM4ngmLHjwvfKfdhbYvd1H/kw8EJ1r51V/91UsyZ/WliCce/+IXv3hjxu2md67QlDrwYmCc9UCWBSr0R+iHP6MzvB2tOscZJJt11zqNeW5/3HfMbqO//hkX5b/xwRnYOjtG2r4vDrfVXaG7StkK3V00vrcNn4deEySJU8QO1QJPiWoL3jCk8RwgE4mTDfzTP/3TonMZpx/1UR+1KPRobJM+sNxU/DoF39OEUV5WKQuHcTqZQSIDj2zleEHHjr/0S7+0YOBm5Ht0hsxqWBrZfOHNWSakOb38JI/xFVn+7ayQvtxF/A8naBD86Z/+6aIj6TsBQEFp/dxu3rUUHPiq0F90OVndPX7pbdF5K/RnDeU4zdTLdpJBT0dkX0rw+N7v/d6BN7m+yL5jdhv9Xf+Ui3Pm67/+669xdoy0fF8c7qq7QnejZSt0d9H43i58HnK9KkvjNGPrcr4KaqITmWdsrd7mP6Rt1XtP5jjj5BLlBSItySRi2MrggoRP/uRP3mhM5kUoOQwmivAm4CizQ2IylDaV23SeIhSF9fu+7/vOpxOKUN91113LwvWi172TD+P9oi/6ok3V7jwvxTu7NbaFRVBFMh1FetYcZ4S66/10TYpLFtlv65y/Lz4GKAycJKd2lOw7HnZhWL2A00EG1jYYLSu69aEf+qFLVW1ElOKL4VKMGY3f9V3ftSi7qRcfSiYmB5oNJmQXGW/7OM6+5Eu+ZGHo//3f/31uSHEEmkbJGcIxInu0z4rYxUNiePS4wu84zoz/H/zBH+wvX+j/+jTrDuKPCS6YVqGfKDii0ptws+nl0/cj9IffM45EtrfRaoX+tCuRfFkKychgvOPl6Nj0oH0cZ97tRsiwTbi+0eef9axnnTsLRtuyLw531V+hu9GyFbpDx6n3ovC9XTg95DoHAOAwy9IapkoLzhpDpm/GccbYBtZd5SBLwIUTje7HuU83jTG/FN7z61R8b5SXaXalbHg0XLzmNa85f2szC2SP4HW/93u/d+nf//3fbzq+Z0y02eHnjX/8B3slNkt7jRxn16AD/HvbNM32vovy22wduBFoTIac7G4O4g/5kA9ZlrKpOs4qNAVPHFnA1Nlty2dU6M+Y917GKRsv/UYXkD0kGC1Zojplc98xu4v+FgRcwC+OdDO1KrAvDnc9o0J3lbIVurvZ9L0bSXcVWWpGHZ3YeCV3yRBAf7ly5coyVmWIt8kWu+jhmNdP5jizxphOyvz4ttE//uM/vmSIWLMmgGGLCjCS/viP/zinl7XK1MMIxkRjHJwXOPtBKYzTTESE087c/DVgcInwK2d6G0j00do97Rpcoh3OcVJJqdXuFr75m795eUfvYQ7zsYAxrE6GFIbeAoENF9ZxEi3hLGjB1FKg7RFA7fX5e2JgFwYq40H2Dzozzq1/FTAGo4SJFjBSZGABY28XjJaNc9646KcRiIAb7xgwh7Jxw0kha6mfJk75J/DxoB44+K3hwslF4eK45Nxos47ivFN/C5zeMlc5962hIMu2hX14CB6oLrC21qH30Ga40V6KLx66Bnih4AT8CGZwFHkvfdm+X+7Fd7T5/d///RehhidTsK3n1gIcW5sP7+TI1z/KMEj7nXXb+/wmEGOMZjqk8+rg5CRUGbccZ6eivzga+vfSjhYq9MdZ5t31H9psQR+hobX1/0borzJm2+eO0p8xLvNGv6MTjgTOhrUlE0TSKYzeE71Yj8J0fOuMtTBarkLP6pexja4ZQZ69K1M9barg8FR0N/leeuPqEQ0JnIpW4wnGEDoiT9ZAeQ6t8Ht6lABJHFvGGMc72njwwQevqQLPo0dmsyE8U33gh3/4hxf+kxs4hPB0elbqdk0bBUrcZ8yQOY+dZayqW4bSNqjwPfo1HqXOH/qhHzqfuqZ+673huZassPzAKC9zb6VsMjT6AAZZij/QwY2pX//1X9+L7430fWXMer8AZ4n+kclMx94kH1PeEd+TOQ7MoOn1GDRK/4ZDzgA6US/vl5vPvqrycVTu0nVMORNUg3/0iAdyAq7x6rTHUVnjBvTT6/Qhx5kZROjeWtCnoD9tVj9Zv81ppo0V+jNzhy6iz+I0Uwc+ghb0Pxy38n6E/ipj1vMCo/RX4Sfok51qmqt+tIyLIDBdtV2ne7RchZ69F55jSRkAz5kRsZzY8lXB4eR7VxF5ar5X0fe0aBd/qsrSBLf4N+I08xzj08wd/LWftTSiJ6vjGHAyxxnBADAmiqtph5fPUpwJVY6xPquDcmyqpClTreMM02c8MTSSMt+/OOUBQmUneN7LX/7yvsj5f8xelLA1UPwHfUc4R+nBcJM+7BwgnBjSnkuoMECOCWlTn1HmGd7R1ALe/d5xxmhIGY61CRMDVQyE9kbGg/Ehq8s9opRRFO0CSnij30QF4gRmaJgiy0lFwTZdsnckjJZNuVYZyvuKIEYh5tgWrfjpn/7pXD4/avsHf/AHL/85A1owzTgKAGWOYqouUwUZQ6YzgGR5epcW3OMd8QrPCH6U2YeHUCpl0AF47dcNpISYIt8CIasdPVjYkwOqB3za+8nuslFJC3gwnKrPEd9+2ctetkTf0xZ1MjCC+5QlE6RY49PW7tsE6gS9UepcptYS1OBU9EcOAW3gJNF2NOZ5NggJX67QH6O/d+56BkU2ikI/5kbprzJmPROM0h+aQnNoD+jPBJNssCKCH/qCJ9kmbVm0b/obfpAsz9FyFXrWNsaUbEwgw5OBNwoVHJ6K7kJPu3hkyj2V+Z7IPqd+AI2ZSimbEN9NRkyu4zfJIlIWr6YjoaF77rlnMR7Dyzk16JsyiFw3zvHmdv1H9KpO+iUHC6cbxxraxof69TAZG/hmDHptION83GsN1W0bq1T4HjkFN9qH3q3XChjkdFzPzlgb5WXuHy3byla46SG8P47gKt8b7fvKmE0b4Yc9YoxxOppVMgLkqvfmDO2dheSRvncdwL/+jOO1rX8f+Tgid9HZS1/60mt4r37AfzlULp/JdQ7gTeC68treJjUoz/HkvOvG0Knoz1gElsyxFhJ6NN4EK61Xm9kWVfpbyyD0Ls94xjOW55HnrdNslP4qY3Z50NnXKP1V+Al8vOQlLzlPGNFX+Jsp5ZfP+vXuu+9eHj9arkLPeS82sL4S3OSoC6/N9U3HCg5PRXeT7z3RO8bgqL6Xu3bxp6osjX4v8UA2IDpWh/XO+YN6vWdUT057Dz2ezHEWQ5LX+ru/+7sXhquxFAhRGcZHpuA4z+jC+PsoDqNsF3DCXblyZVex5TqBwGlG8Q0wXAADtweOO9A62jAiWQ+YE8GLwR0TzItOlJyC1wNnGccZ5ZlyGCNON
IjSJmtu1w6RfZ3z/8RAMFAZDxxejGe0h3maTidDAJMzPihqYXIxKEyHDIgwUq7RvDUmEpUfLRt+QUnAbDNe1d8+J/woz3WUEcWplLGtLrurBjgzCH/vwdkjy8B4s4YD55noGkGuzSKDoI2OpJ5E+1o+sS8P4aTUP9r0Uz/1U3nEclR/nGp44o/+6I8ufFd7ewUefgkjIEsuziD4kCFHqeTsyFSmpeDZl/MyPmSOUchf+MIXLlNcKaAxQCidynG6mW4hc80URIqw6bAMvm2OswhN0157iHKrfnAq+kPPIBlly5+zLw4feDM9Fr0cQn+yBq0DZAwAxgJnbKBCf5Uxq/7LgzIMXVtLi0LM8WetRU4GGV2333774kCT4YN+KOXoQVkOgh/4gR9YFGg0q8/RlqwH7zlSTjtH6VlZwHGiHZZZQNcVx1kFh6eiu8n3rvZjshH9EyjE6zg68CTjgkNMgDXO+qt3XV0/08wAQYUYAPiJqfQi1fRRIOOMMUluBOh0aCY8LzwA/2aUchYH8ADZzK9+9avPszkyPZg8sMmHtnmeIJE2yMDmZN4EFb5n2RA89JZbblmceQw/7wIvQDA3S4vkPXbxMveNloU/2cnGOjy3y4mQtZF1ka3qDuzie5W+r4xZz4cjwRtgLWO8bARk2sa4zkyV9j6bE+E7dG80QU7hexxWPewjH0fkLj6sP/SNTHS2Dkcz2qfH0AHUQ3dYg8gh42wNnFc/uc8RdQr6y8wdeoIs/wB9Ep2ZKeC5h9CfugTD2E/w4b1aJ3yF/ipj1rtU6K/CT6wHpf/wnte+9rWLo5NeTnZ7X85afTZarkLP3oucpRtxKgtOy7odhQoOJ9+7KvNOxfcq+l7bv7v4U+TKqCzFZ/ApPJTeGKB3OseujO5d0ZNTz6HHq9beobWs3J+O9dIYk2wP0TwDC5IZZ20WF0WDUtFOzVmp9uBTIl+e0xooBB7Qth4ydVNHAkep8N6B8dhnZPT3b/vPCJfVkY92WUw+6a4Uk+yq2dbDOCfwtaHNKkNAoJ8u1t47f08M7MJAZTyoy1ofaNV9BC4DBHCwhLlR3NAr4NR985vfvBjWolOAo5iiACplZbB6Nshikn7jP3ZgDIQf5b+jqHOr2FN200bXM+1ZFhCnGeCkztQYvECZ8AbX49jxO8BRAGKkHcJDOBiByEsccsuJsy/KkbopT1l/Jb/jXE/Zy2eOEzxEXzBK8TllBTOS9RTnfe5xpIxn/R/3Zn01QQ9KOjxzlGqbnW9EiAlA66LEWbbWF+0z8EXQv59zbXBDujY4Nv2pM45WOOHssc4ZYzR0LvsArRxKfzFWPLN1+vo/Sn/KVsZshf6kv6c/yOgYmvozTuY4qEWO0Th5T46hKbRgupisFLijD4yWq9AzHJgeigbRer+sguu7oIJDdR2b7ibfe6KHspC38SebCu3gI/S3ZJ3QIXuQlZIMZ8dMD5ZZBvApwEBHqxysymR6pnKcUSCOMk4gv00dUza7szrHGQaM4zimBDbj0KMfcmIAhmWr8y4nm6+Ms1G+x4nAmQ0ER+IQEmxtM7hHeZl6KmWTaSYA1b6XAEkgMi//HcndbXyv0vfVMYtva5MMqoqtIRgI8BaO3BYY/pxJgG5AP0er5EboIOX3lY+75K760Y9gkzEgcKYNHA0JsJFXm3aJdn+c9pxSa5Dz0ZlOQX/4N9B2/QOfbDU6lPajNY4gsC/9cSAau9H1vFerH1XorzpmR+mvyk9k5YAHz6afo20gqCu4gHeSpWCkXIWe1cmpTFfRZ5x2jhWo4vAUdDf53tUeq+h7bR/v4k8VWYr+jE0f+iK/DN8RHxJaRi/WXEyyQkVPbtt8yO+rHqNDathwb4xJg4hjKBlQmLqUdddFQjDGGw06A4SRtu2JYGYMAIoS4UfR6qdJtveN/l4TZHBGSXv961+/6sxTN+cYIm+na8rqANqVbJLlxPyaGChgoDIeVMvgF7GTdcQ4ABwK7Zoaxj/HA+WM4Mt4olgyFBjfUthBpayxYtoEIwpz5nzWHswXY3XduF7LXuII0A7ZMKJzpuFQLGzM4Z4IdHW0xoA2UrbwMOXzLs6H7/kdCA8JXvflISLe4Rf9VBHPSkScU60HRmSMR9f8lwkMKGmmkcr888n0l7R7KfT4V5xfOceIpdTqV3Uw2jItAP5EidUJTzEs13CU+hyTodiey+84y/yPIn9s+lM3OcUYZIAmi/CxsyxlH4ET/YDfciDtS3/ZvZlSK4MY/Zpq86IXvWh5t1H6097Q1rFl2OUzB2vqbx3RzoUW0Yl+sQYOQBO98d/uWCYzYqRcnBIj9CyzRyYH4LANbSwnBr8qOFTlselu8r0nOiqZYQzanvdGYY6zInfh04888kj+Lke6kMwwPIfe1vIPMitT7QUDZJWRXyLa6sn4U5GpmegqQN7YjZYDjtEV3uu6tvdtjhxSLo6/1JVjle+5T+DC0iRxZnA09jp1hZdVytJP7XzOEWXzHeMBzvzP+2pPD9v4HhxU+r4yZukY+gtviDOzb9vaf/QW3matxh6itzDw+nVBycuWNvDFfeTjiNzltADa65kc8eSuY0D/6Kc1iPNoTYYon/Mtbz02/RmHcG3mQhzUjHLOSnakcSy7SVbTvvRnnVF6CtwItpC7ZLpgC3leob/KmK3QX0szu/gJnqZvjIUEDdK/AtQ+wPuOlIuTbYSe1WdTFUdT19dmSKUtm44VHKaOY9Pd5HtXMVvR99p+28WfKrK0LSv4Sr6RJ0Bms2mZZDn567kpr0wvd/EqPAPtHxNO5jgjIBivmF6cZhrOyGQsx+t/zJfZty6GH+Un6eVtPTmHSKQJWngR8OYn2yvOAgLLOVkRuxbAVodObSPj/nPIbRJs7gnYTpnjLNM1GXkIKNloKTePEwNVDIyOh7Ze68MwXBNJ7qczYIAcZ2sgi4BSQfhy2nC6VcrKBqM0MMgZ8cYsJZDRRDHi1FkbU5nKQgnzWzSTMcYhkAiU9hImPmsggwHD9nwMWpaWd20hRg28HsJDGHQAf0nb2+ckqpOsoPZaMhPac5zucJb2tdc2/cafeojjLM/Xh7IHo4D25Xf9j6MqEfC2fLLg4DzC1PVj0x/DeQ0s4o+28FrKLUV7X/pLtjL6l7115cqVZQzol9ZZtIv+tHN0zFbpL32IttupMz1uRJ2jnLRZgX05/0fLhZ5G6DlrW9E1TAvI1IDQEKct2ay/1rJCtWsUh8oGjk13k+9d1Z3Sb/jwJrqjPLcQJ0p7jk6FT5AtaFTfy2qhn8ZplvKUcvzQ2KbLJaPF9T7zn2yhuAPOmAQb/M90Sb97CF325/3fh+95H7poNot5+9vffp3TusLLKmVNx5aJK9uNrq+v9AGnBznMobQWsNrG92QZVfp+dMyqU6AecILKeutBdgPepQ2tbME38D/n3vrWt/a3nQeE+mCBgmtO0n3k44jc5bCU0SSAhd6rEKefd12DBNJaXerY9MduWrOd4FYmlbFmHIN96Q/N+HDycITed999y5g3U4J8qNDf6JilY1Xor8JPwgfX6K/txyzXsatcApxr5Xp6
tjwLfdm4h4vYxGk/nuCcWQyZfdK2ye9RHLZj8th0N/neVb5X0ffa/tzFn5IFqb93yVLyLEDOtv3umegSzaNT+nhgRE9O2UOPJ3Oc8VYTphSXHggug7iN/vVlnsz/FHPOrxj97bNzjrAIQ3E9RmxblrJlwWXKwhrzb8v6jdkkqtJf2/VfmzEPhiQFTlYHoDhNmBg4BAOj46F9hihVxorz1iZqN+mgyLlujLSOdGVbAU05q5R1P6Ds+nCaeYaxoZ6scYThOs9wIcxbhut+zg/rTbmHUt2OIxlFa5Fz98Uh5TrliODpnVpxTCm7Lw/Rdkoj6BelXk6efeFRnA1RpHLekVLdAqPGQtJA22W4CmjAC0WHg2UNEt1pr6Vu/erZIkL6kcCjaBGaAijK9RGhtp78jmEVvOW8Y6Zp9EbyMenPc+I87RVF1wRR4KE1MEboz73az3HUKgfOo1f04brMwPb6CP2Njtkq/SULQd8K1mwCRkzG8Rr9tfeNlqvQc3gPxzcZ3AN9I4ZDu7ZqW24Uh+09x6S7yfee0J2MbzyErthnUQT/bcTbOfjrwbmc5wiL7On5h/tMBUk2JF4VR4JrGQd+A7QSBzo9l74L1Gu9v03QT91ry+3D9wRuZEsHTOVnbMQYdb7Cyypl1U2uypLFD7Ul/NI5QK6AUb6Hj1b6fnTM0pPJUCCz3KeHTPlBBy1txUFKhvV0oI7025o90/PCfeXjLrmrHdaNTBAfnesbARizU2Rr7YI4isk1uOoz6YO/1kg+Nv15T7SCjtrna3vkRtrh3Cj9oWuZd5xlaKYF+OGcv/x4cLRCf+n7XbpKlf4q/CS23xr9eU/8jy6WOneVyzutlevpOUEwNLMmd8ll520ctmljijxvFw7bPjs23U2+d1X+hb+N6Httf+ziTxVZilaNQTSV9rTPMobZQuilTVIY0ZPbeg75fTLHGcaHiSc60DYy5yJk22s34jdDt1Ws2zbEWCWAHjubpoNR96ADGaw6nGedEfFkgGi36TOUd4zE8x/ZkCnxZLRnPuOpgYHR8ZC3ZTiIPAGGAeOcA4tjJouu2sHR9GGOdLvuodVApjr479mVsu7hqNEG0e9WKUoWGyaM15g2KCLr2drQOvVbZYzQoGRQ3Jz3u89OsOaJLM84ODg+8AFR63aNQe1K1qpopnG6Dw+hoMQA3GT441GyoKLMwE0gylX+3/L4Oj4Ez5WzTCc4CkQBanGSa6ZRyAgMEJgckQDv08dxmpny2hpwqTf3bjrGuIQ3Ck0UPuWTfRJly7lj0x+nYRalt6xA+74Mam0C6cdR+nMPg1I/mmbSB1fgDeiTKv2NjtmqDGMkoSn9zKhtgYHDEWq8kEXKGmPJFGvLWpMC7Vh7B52MlKvQM4cvOuhBdhEFDE61s+3LvuwoDnPfselu8r0ndCeZP6Gjnu44ijiI0HIL+hl/bXlOZgjg+RyxnPgcIbJKlG/5Xgxnyjojncxwn/FqU6t2vLo3AQPnjQWAZ5IJLc9yv/XHyBu0vwmqfE89dq/2THLPc/Bi5/BeUOFllbLqJvNlN1mSwVql0ee1IZkLWcN0lO+pt9L3o2PW+A+/9owWMpUR/xJEamlCOTQF1qZpOi/DTtDNe6OJNsiWHQOVA/vKx11yl0MjTrN2CrJn5v38XnOGOA/QvXdH28ZXO+6ytqrrrew9Jv2h33vvvXehY2sJZi3Vq627dJ7VGQdfhf44FQVV6GbZ/CP1xuiPY65Cf6Njtkp/odURfpKy+rbXl2TRyXSTYWezHrCr3Gte85phehbYaOk9ODX+tQXPI3O3JYiM4jB1Ox6T7ibfe4LvVfS9tj928aeKLFUv+Un/V691+lrIEg2PPvpoWU9u6znk99UQzCE1bLiX94/SweN86623npfy0ll/q40kUqQ5gLLN+vkNR/7BqPScdg65dWy0lTLF8A0QFs4xoEXxKEiYSv/Jtt+Eimt2b3oyIFunE5gENocdBjlhYuAQDIyOhzyDEKNsUbwsmJ3pRqasJMsjDibOgZYfUPhuu+22pSrRaWOtUtaNHFgMl6TCO0eJzZpHGSeEdwyhbEOuLOBQo7i5Hr6UaDmHT2uU41U24rDGVZTsrOdGUdaeQN6NY42Svy8PYaAAOO4jsXlWFi3GY9vpQpfPIqk+LeR91JV3cJ2zMQq4Pu1B38Vx5Jpdm+CN8sTRkamUrrXTdOBE3UD5bSCiBF8gWXF+qyOBjNCY88emP/2PDoCFglswLUn7KYTB9yj9qYfzBqCp1jHJmE/WVLIdK/Q3Omar9BelRZ/b0bAF0yONb45vRkfaraw12wL+x4Dk7BgtF/yO0LP1p3q57H/wzej1P0p62tYeR3GYe45Nd5PvPaE7BRd4bauroSW8m0Idx1r6wzEOb7/pRMlwjcOK4YwvGcP6L4DXpSzjAcg6kmUE8KE224KTM3yPoY2u8UDnsnnUcuPZl3oZZ7KclN0EVb5nnTVjA6+yZpddlP12LhsnVHhZpax3IHMZyJY4CHh/OxYCBnN4WMbhCN+r9P3omOW4WOMPzoXXWzbF/1a+kpPh07Jm1oDeTR5491avQKvZsCv37Ssfd8ld+k4gDiD/OUmyaUT+p9zaMbj3vDjZHKOzhSe799j0px8yPixDk0CSZwn0xYHJFgMV+hOwAcag8RGQxJGMzfRvcDDCe0bHbJX+KvyEU4JeiP6e+cxn5tWWY/rN+BstV6Fny0ysjavQiWe6Hh38msY9/mcUh7n32HQ3+d4TfK+i76U/HHfxp4osVV/GuLEpESNgNkbGbxIUImNG7LTUc+jxZBlnBipvNOPStEYOKecoQQSRqEWQ4yVuv/32RdGR2ZEslUNfbu1+yL18ZkRqS6JymA6DAlMVxcg87ezwgjlgfDcbiKB6jxBSiP5ma+dsz8XCQGU8WHg5jpbsfkaZ5bih2HA03HnnncvaEYQoJ4N7RML8NxYpSJw32f1JZtZoWZiVgUVBNX7vuuuuxemSTBMpwsYvoJg9/PDDC5NX9pWvfOUShaY8UXKBHQApwYAibbqpiKR6GV8UVO8FKDdh3tos6i9KYnF31yjJxqbnVhYjXirvvpIxFodSd3n5SwGxVhbcMvQon94l97b3aC8ccdZcOcs449SDh7ybssFJe1+LC2Xj7HnooYcWgwM+THmhxMEZZ5oy6nYOMFL1efDc1p/f+oxxKtNDH3hvtEJ2cH5EuT0F/aFFU3fhT7Qeneh703yDE++b9o/Sn3cTVMmW8Pfff/9Sr+BMnAAMdYocqNBfZcwulQ9+oQtyXD/oV0EvfYF2OCZa2jZm0RUnge3oyVG4TN9zWBh/lKjRcqP0PPg6W4tVcHgKuoOTyfeudpFABEc7x4XxwgnDoWHtHDxAX60FKNGlaWmcX/heaPTBs93mAngL+sRPbFqhbmXxNmO6zUgRFJUtxWl2zz33LOM1fF19Wa9F20zR5CQTnLVJjbFD3sWpYX2sNiM67WmPo3wPX4qeKjPIOwDTDGXluiYLVDZKhZdVyuojzkcyzzpR2oBHyryGR0G0QIXvVfq+MmbTlsoxgRp8jL69CWQQ03X
wPnKP8wF9tc4f9+4rH3fJXc/juEXvgln6HU1mDKTd5Ay63ARo2Hp5+jT0nrHRjrlT0R/bT0DGe8j2Jnczo0eb8chkWFXoz3sxwuHxZS972bJ+IQejd6OXkE1ZRL9Cf9o0OmaVHYUqP7HhBp1PQIFda9kH70bXIqPzbqPlRul59H12lRvF4anobvK9qz1U0ffaPt3Fn5QdlaXK6o+nPe1p5/Kfc8yYyGwavg6+D1DRk5cbjvD17meKxhX1SI2LA+YI9S5VSNPG9BA74yD1Uw5tV08YBRi+hAzjT+rzGnDCUYwo7tucREnFF0HoI8wceCIXMrN0ToAhxgFA8BMaPgCj3rZmhTLqU6+OZRjsAtkjIvXev3UebrtPe5KxwTERgFfCHXNEQNoAeGllAjBStq1Nk3oqx9ZZV7lvlr05MDDSfyPjAd2L8FM8jBPp9QHGtnFoTGN2lPh3vOMdy29RaooRfsAIEmWkJFH+ApWyxrm6GOgMHIaNeqX0232nTSXnmHDN+NA2zh9toejb0IChHqBYiZzhO8aZ93X0vhi5XTmNrwCjRVmGkrJ4n/FIKYCfbbCLhxAi6tX+OI3W6uN0kbVLQYYHH23QH/gbhUpwgkOLYqUsnLnm3fBmRibHJ2HIKeQdw5/hBz+Hb/fhYZSy8GMBBufwN/hVP95FMaWMfdiHfdiizCmXLJC194BfZUy1Cu7hXcCFQaZNp6Q/U//QhXf1Ht7Xf7TkPdBzoEJ/ZJd3E80Wydc/nHH4t6wouA9U6W9kzKbu/riN/shxfR05riyF3JixG1VL2wxDdGrMO+p7/WZsG4sxPkfLjdJz/z75z4EAv8aNzy4YweEp6W7yvSd6iEMInzb20BEHPFpCQz/2Yz92rjgbl3ZGZtjjKXiZe9AovfaBBx64ZqF/xgE+h1fjcWSAMY7f4C2ZCqYl+Bb6Z3Tj5/re0bPMqmin8NNdBWqM7dTraGyjq116pOeN8D3lBGe8N0dc66DC59G8a3it9TArvKxS1pRYfUHPxMvgxpFuTReF40CF77lntO+VHRmzym0C2RLoyg5t+rsFchANes9tWTP4m3svnznLOHvRnzptOEHORtevysdRuavNj50F6wQE0xfaADhs0QNZg3aT8btc7L7IVTyX8wzPz9hA74J/MVZPRX/0E2OI3DcmIx/pMGg5s3s0u0J/8E+XQ6v6x0fd+gjeLPPhHQMV+hsds6m7P26ivwo/4bSmG+E94VMJMAgEJONutNwoPffvkv/age7p9pumOKes4ygOT0V3k+89wfcq+l6FP1VkKZpge5ipQK5E/hvH+H0CVspV9WT3jABeh0+swbudRaeWOSmYVaIrawUPPUeRpsAyMuLcObTOU9yP2XBsET4Y6pz6eD2WGSCnpJXrnzjPHBMDlf47xXgwtiiYjtrSOrb696yUpRymXsbRtug+hYkTnpFl3RsK2zZQN77ACGKcbeMLFE5lCQplORhuBFD4PXtbhJlTikPEe+mLkbaiCZFN+E2mQ/9+lN44Tzjp2jV/+rK7/usnRgBFMor7rnu2Xa/QFDqR5eL56IThswkq9KcODmS4Z6h7N32wCSr0d4oxm3YZLxzUjGKfbW1OvwlexWGWetrjaLkRem7rPeT3KXBYobtK2QrdXUS+py8YYZRnvKxdw2ytj/FfCrfMlEz9WivnHKUc/TEsKeDbIHxdNtsuPsRpbHaF5+ORbYBl2zPaaxkXx+J7FV5WKat/8HqOCHZE6zBr3ye/K3yv0venGLNpc+XI2SRgtM3W2Uc+er9dcle/Cf7j03SPXXrNtvfSn+QTh8wmOb/t/v5ahabcC49kL/m4jaaq9McOZb9wlNFNto3NCv1p87HHrDpBhZ/QVfBLfdYGo6/W9MT3aLkRen6i1sN/HRuHFbqrlK3S3UXkexV9b4Q/hToqslQAjCxlJybbNPX0x4qe3N/b/2cPef81eNIcZ2sPn+cuJgYqjpeL+YZP7VbP/ntq9+98u4mBiYGJgYmBiYGJgYmBiYGJgYmBiYGJgRoGtjnOTrY5QK2Js/TEwMTAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTAxMDEwMTAzYWB6Ti7ufpjtmZiYGJgYmBiYGJgYmBiYGJgYmBiYGJgYmBiYGJgYmBi4CbBwP8Ho0PqEzBlrg8AAAAASUVORK5CYII=) Ran out of memory again, even with just df_12GB. It's malloc'ing many chunks of 5 GB each, in the XGBoost training. Save and submit the model_small (trained on small dataset)Do model_small as a first step Save and load an sklearn model using pickle (easiest way): https://stackoverflow.com/questions/56107259/how-to-save-a-trained-model-by-scikit-learn ###Code import pickle # save with open(f'{ROOT_DIR}/models/model_XGBoostHparamSet1_trainDatasetSmall.pkl','wb') as f: pickle.dump(model_small, f) del model_small # load with open(f'{ROOT_DIR}/models/model_XGBoostHparamSet1_trainDatasetSmall.pkl', 'rb') as f: model = pickle.load(f) ds = generate_dataset(dc, df_small) feature_columns = [elem for elem in list(ds.train.columns) if elem != 'target'] X_train = ds.train.loc[:, feature_columns] model.predict(X_train) ###Output _____no_output_____ ###Markdown Seems like loading the model works! Next: submit this to Kaggle ###Code ###Output _____no_output_____
Metodo_de_Bayes_RISCO_DE_CREDITO.ipynb
###Markdown **MÉTODO DE BAYES - RISCO DE CRÉDITO**Primeiramente é carregado a base de dados, depois disso os atributos previsores são separados dos atributos classificatórios em dois "arrays" diferentes.Nesse caso não foi separado um "data set" para teste, somente para treino. Foi utlizado o "Label Encoder" para evitar problemas com atributos "nominais".Depois disso, a tabela foi salva num arquivo do tipo ".pkl". Através do "fit" ajustamos os pontos ao modelo preditivo e no final é feito a previsão de um suposto exemplo.[LINK NO GOOGLE COLAB](https://colab.research.google.com/drive/1YNSIVZWduGCasitBBnvCEiRqq03kSTm5?usp=sharing) ###Code import pandas as pd risco_credito_table = pd.read_csv("/content/risco_credito.csv") risco_credito_table ###Output _____no_output_____ ###Markdown Separando uma tabela para os valores previsores e outro para as classes. ###Code risco_credito_previsores = risco_credito_table.iloc[:,0:-1].values # O primeiro e o ultimo parametro é exclusivo (ou seja, não entra). # Cria uma tabela só com as colunas história, divida, garantias e renda. # O atributa .values retorna um tabela do tipo nparray risco_credito_previsores[0] risco_credito_classes = risco_credito_table.iloc[:,-1].values # Cria uma tabela com a coluna risco, somente. # O atributo .values retorna um tabela do tipo nparray ###Output _____no_output_____ ###Markdown Aplicando os Label Encoder nos valores categóricos nominais. ###Code from sklearn.preprocessing import LabelEncoder ###Output _____no_output_____ ###Markdown Será necessário criar um objeto Ecoder para cada atributo previsor não numérico. ###Code l_encoder_historia = LabelEncoder() l_encoder_divida = LabelEncoder() l_encoder_garantias = LabelEncoder() l_encoder_renda = LabelEncoder() risco_credito_previsores[:,0] = l_encoder_historia.fit_transform(risco_credito_previsores[:,0]) risco_credito_previsores[:,1] = l_encoder_divida.fit_transform(risco_credito_previsores[:,1]) risco_credito_previsores[:,2] = l_encoder_garantias.fit_transform(risco_credito_previsores[:,2]) risco_credito_previsores[:,3] = l_encoder_renda.fit_transform(risco_credito_previsores[:,3]) ###Output _____no_output_____ ###Markdown **Primeira Coluna (historia):**0 - Boa | 1 - Desconehcida | 2 - Ruim **Segunda Coluna (divida):**0 - Alta | 1 - Baixa**Terceira Coluna (garantias):**0 - Adequada | 1 - Nenhuma**Quarta Coluna (renda):**0 - 0_15 | 1 - 15_35 | 2 - acima_35 ###Code risco_credito_previsores ###Output _____no_output_____ ###Markdown Salvando atributos previsores e classes num arquivo .pickle ###Code import pickle with open("risco_credito.pkl","wb") as f: pickle.dump([risco_credito_previsores,risco_credito_classes],f) ###Output _____no_output_____ ###Markdown Começando a usar o método... ###Code from sklearn.naive_bayes import GaussianNB # A realização do Algorítimo de Naive Bayes será feita pelo objeto GaussianNB naive_baye_risco_credito = GaussianNB() naive_baye_risco_credito.fit(risco_credito_previsores,risco_credito_classes) ###Output _____no_output_____ ###Markdown Para descobrir o que cada valor do encoder significa é preciso comparar os valores com a tabela original. 
Testando Previsões ###Code # historia - Boa(0), divida - Alta(0), garatias - Nenhuma(1), renda - maior_que_35(2) previsao = naive_baye_risco_credito.predict([[0,0,1,2]]) previsao ###Output _____no_output_____ ###Markdown Ou seja, o risco é baixo para uma pessoa com:historia - Boa(0), divida - Alta(0), garatias - Nenhuma(1), renda - maior_que_35(2) Mostrar as Classes ###Code naive_baye_risco_credito.classes_ ###Output _____no_output_____ ###Markdown Mostrar número de resgistros por classe ###Code naive_baye_risco_credito.class_count_ ###Output _____no_output_____ ###Markdown Ou seja, 6 registros com risco alto, 5 registors com risco baixo e 3 registros com risco moderado Procentagem de cada classe no banco de dados ###Code naive_baye_risco_credito.class_prior_ ###Output _____no_output_____
Tutorials/PythonNumpyPandas.ipynb
###Markdown Python, Numpy, and Pandas**by [Richard W. Evans](https://sites.google.com/site/rickecon/), July 2019**Python has three main environments in which we will manipulate data. First, Python has its own native data structures that can be effectively used. However, these Python structures are often more general than a researcher might want. `Numpy` provides some extra structure on numerical arrays that is often helpful in scientific computing. For traditional data analysis where the unit of analysis is an obervation, the `pandas` library is a great environment in Python for working with data. 1. PythonPython objects consist of the types of the elements within objects (e.g., int, long, float, complex, string) and the types of objects that contain other objects (e.g., list, tuple, set, dictionary) called sequence data types. 1.1 Python Element Types: Numerical TypesThe `type()` built-in function allows the user to check what is the type of an object. The following two examples show the difference between an integer and a float. ###Code type(3) type(3.) ###Output _____no_output_____ ###Markdown You can perform traditional float division. ###Code 15.0 / 4.0 15 / 4 ###Output _____no_output_____ ###Markdown You can perform integer division, which rounds to the nearest integer. ###Code 15 // 4 ###Output _____no_output_____ ###Markdown You can perform modular division, which gives you the remainder. ###Code 7 % 3 ###Output _____no_output_____ ###Markdown We won't use complex numbers in this class, but you can create and analyze complex numbers with the `complex()`, `real()`, and `imag()` functions. ###Code x = complex(2,3) print(x) print(x.real) y = 4 + 5j print(y) print(y.imag) ###Output _____no_output_____ ###Markdown 1.2 Python Element Types: StringsStrings are an important data type. They can be created by enclosing characters in double quotes "" or single quotes ''. And you can do different operations on those strings. ###Code str1 = "I love" str2 = 'the OSE Lab' str3 = str1 + ' ' + str2 + '!' print(str3) ###Output _____no_output_____ ###Markdown You can pull out particular elements of a string. For example, the 10th element of `str3` is index 9 and should be the "e" in "the". ###Code str3[9] ###Output _____no_output_____ ###Markdown The last element of `str3` should be the exclamation point "!" which is index -1, and the second-to-last element should be the "m" in "program" which is index -2. ###Code print(str3[-1]) print(str3[-2]) ###Output _____no_output_____ ###Markdown We can also pull out slices of strings ###Code print(str3[2:9]) print(str3[:-4]) print(str3[-4:]) ###Output _____no_output_____ ###Markdown And double colons will give us every nth element. ###Code print(str3[::2]) ###Output _____no_output_____ ###Markdown 1.3 Python Sequence Types: ListA Python `list` is created by enclosing comma-separated values with square brackets []. Entries of a list do not have to be of the same type. Accessing entries of a list uses the same indexing and slicing operations as were demonstrated with strings. ###Code my_list = ["Hello", 93.8, "world", 10] my_list print(my_list[0]) print(my_list[-1]) print(my_list[-2]) ###Output _____no_output_____ ###Markdown Common list methods (functions) include `append()`, `insert()`, `remove()`, and `pop()`. ###Code next_list = [1,2] print(next_list) next_list.append(4) print(next_list) ###Output _____no_output_____ ###Markdown You can use the `.insert(x, y)` function to insert element `y` in position `x` of the list. 
###Code next_list.insert(2, 3) print(next_list) ###Output _____no_output_____ ###Markdown You can use the `.remove(y)` function to remove the first instance of the element `y` from a list. ###Code your_list = [1, 'hey', 7, 'hey', 'cool', 36] print(your_list) your_list.remove('hey') print(your_list) ###Output _____no_output_____ ###Markdown The `.pop(x)` function will remove and return the `x`th element of a list. If you leave the argument blank, it gives the last element of the list. ###Code num_list = [10, 20, 30, 40, 50] print(num_list) print(num_list.pop(3)) print(num_list.pop()) ###Output _____no_output_____ ###Markdown A last note about lists is that they are mutable objects. That is, when you replace, change, add to, or take away from the list, it changes the single instance of that object in the computer's memory. Other objects, such as tuples that we will cover soon (and strings covered previously), are immutable. This distinction is important for functional and object oriented programming. You often want immutable objects as the input to and output of a function. For this reason, tuples are the go-to container object for passing arguments to functions. 1.4 Python Sequence Types: SetA Python `set` is an unordered collection of distinct objects. Objects can be addedto or removed from a set after its creation (mutable). Initialize a set with curly braces { },separating the values by commas, or use set() to create an empty set. ###Code gym_members = {'Doe, John', 'Doe, John', 'Smith, Jane', 'Brown, Bob'} gym_members ###Output _____no_output_____ ###Markdown Like mathematical sets, Python `sets` have operations like `union` and `intersection`. ###Code gym_members.intersection({'Brown, Bob', 'Smith, Jane', 'Jones, William'}) gym_members.union({'Brown, Bob', 'Smith, Jane', 'Jones, William'}) ###Output _____no_output_____ ###Markdown 1.5 Python Sequence Types: TupleA Python `tuple` (pronounced "tuh-pul") is created by enclosing comma-separated values with parenthesis (). Entries of a `tuple` do not have to be of the same type. A `tuple` has fewer built-in operations than a `list`. Also, the tuple is an immutable object in that it cannot be changed after assignment. Any operations that behave like they are changing the `tuple` are actually making copies of the `tuple` with the changes. Accessing entries of a `tuple` uses the same indexing and slicing operations as were demonstrated with `lists` and `strings`.The immutability of the `tuple` makes it the ideal object for passing arguments into functions and returning objects from functions. A tuple can be a collection of any object. It can be a collection of `lists`, `dicts`, `Series`, or `DataFrames`. ###Code tup1 = (1, 'three', float(6.2), 'five', int(100)) tup1 tup1[:2] ###Output _____no_output_____ ###Markdown You could dig out the fourth element of the string that is the second element of the tuple with some advanced slicing. ###Code tup1[1][3] ###Output _____no_output_____ ###Markdown You can unpack the contents of a tuple with a comma-separated sequence of values. ###Code mynumber, sixovertwo, wishheight, siblings, oldage = tup1 (timetogo, milestoMich) = tup1[3:] print(timetogo) print(milestoMich) ###Output _____no_output_____ ###Markdown 1.6 Python Sequence Types: DictionaryLike a `list`, a Python `dict` (dictionary) is an unordered data type. A dictionary stores key-value pairs, called items. The values of a dictionary are indexed by its `keys`. Dictionaries are initialized with curly braces, colons, and commas. 
Use dict()or {} to create an empty dictionary. Dictionaries are a good way to organize objects that are associated with keywords or names. ###Code data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'], 'year': [2000, 2001, 2002, 2001, 2002], 'pop': [1.5, 1.7, 3.6, 2.4, 2.9]} data ###Output _____no_output_____ ###Markdown You can list all the `keys` of a dictionary by using the `.keys()` method. The keys that are returned are sorted. ###Code data.keys() ###Output _____no_output_____ ###Markdown You can list all of the values of the `keys` using the `values` method. ###Code data.values() ###Output _____no_output_____ ###Markdown You can select the values associated with a particular `key` in the `dict.` ###Code data['pop'] ###Output _____no_output_____ ###Markdown And you can select particular element values of within the key. ###Code data['pop'][-2:] ###Output _____no_output_____ ###Markdown 2. Numpy`Numpy` is a powerful Python package for manipulating numerical data structures called arrays. The name `numpy` stands for "numerical Python".A vector and a matrix are both examples of arrays. A vector is a one-dimensional array, and a matrix is a two-dimensional array. In numerical methods, one will often want to organize and manipulate data that has many dimensions. Arrays are the ideal Euclidean structure for numerical data.`Numpy` is an essential package for working with numerical data. Even though you can often perform the operations you would like to execute with Python's native data types, `numpy` will often provide the most convenient and efficient functionality.We can create a 1-dimensional `numpy` array by importing numpy and using the `array()` function. ###Code import numpy as np arr1 = np.array([8, 4, 6, 0, 2]) arr1 ###Output _____no_output_____ ###Markdown A 2-dimensional array looks a little more cumbersome to input manually. ###Code arr2 = np.array([[1, 2, 3], [4, 5, 6]]) arr2 arr2.shape ###Output _____no_output_____ ###Markdown You can slice elements of the array with `numpy`'s indexing. The indexing is such that the first dimension is rows, the second dimension is columns, and every other dimension is not geometrically represented. As with all of Python, the first element's index number is 0. If you want a slice from the `m`th element to the `n`th element, you must use `m: n+1`. Further* a scalar `m` means the `m+1`th element* an empty `:` means the entire dimension* two colons followed by an integer `::p` means every `p`th element* a colon followed by an integer `:n` gives a slice from the first element to the `n`th element.* an integer followd by a colon `m:` gives a slice from the `m+1`th element to the last element. ###Code arr2[:1, 1:] ###Output _____no_output_____ ###Markdown We can generate some uniformly distributed random numbers between 0 and 1 to fill a 3-dimensional array. ###Code threeD = np.random.uniform(0, 1, (3, 3, 3)) threeD threeD.shape threeD[:, :, 0] ###Output _____no_output_____ ###Markdown `Numpy` has a lot of great commands for slicing matrices. 
###Code np.diag(threeD[:, :, 0]) ###Output _____no_output_____ ###Markdown You can also take noncontiguous and nonlinear slices using Boolean masks ###Code (threeD[:, :, 0] < 0.3) | (threeD[:, :, 0] > 0.9) threeD[:,:,0][(threeD[:, :, 0] < 0.3) | (threeD[:, :, 0] > 0.9)] ###Output _____no_output_____ ###Markdown Notice here that if you made an identity matrix (=1 on diagonal, =0 otherwise) that had Boolean values (True or False), you could pull out the diagonal elements with that object, exactly as the `np.diag()` function did. ###Code ident_num = np.eye(3) print(ident_num) ident_bool = np.eye(3, dtype=bool) print(ident_bool) threeD[:, :, 0][ident_bool] ###Output _____no_output_____ ###Markdown Of course, `numpy` has matrix algebra operations. But `numpy`'s default is elementwise operations. This is what you want, because most numerical operations on arrays are elementwise. ###Code print(arr2) print(arr2.T) newvec = np.array([0.5, 2]) print(newvec) np.dot(arr2.T, newvec) print(arr2) arr2 + np.ones((2, 3)) arr2 + 1 arr2 + np.ones(3) ###Output _____no_output_____ ###Markdown The following "broadcasting" will not work. (More on broadcasting later) ###Code arr2 + np.ones(2) ###Output _____no_output_____ ###Markdown 3. Pandas`Pandas` is Python library for high-level data structures, created by Wes McKinney. The package name `pandas` is derived from the term "panel data". In his book, *Python for Data Analysis*, McKinney (2013) states:> I started building pandas in early 2008 during my tenure at AQR, a quantitative investment management firm. At the time, I had a distinct set of requirements that were not well-addressed by an single tool at my disposal.* Data structures with labeled axes supporting automatic or explicit data alignment. This prevents common errors resulting from misaligned data and working with differently-indexed data coming from different sources.* Integrated time series functionality* The same data structures handle both time series data and non-time series data.* Arithmetic operations and reductions (like summing across an axis) would pass on the metadata (axis labels).* Flexible handling of missing data.* Merge and other relational operations found in popular database databases (SQL-based, for example)> I wanted to be able to do all of these things in one place, preferably in a language well-suited to general purpose software development. Python was a good candidate language for this, but at that time there was not an integrated set of data structures and tools providing this functionality. (p. 111)Pandas two main data structures are the `Series` object and the `DataFrame` object. 3.1 Pandas: SeriesA `Series` is a one-dimensional array-like oject containing an array of data (of any `numpy` data type) and an associated array of data labels, called its *index*. [Note: To run many of the pandas operations in the following cells, you will need to execute the `import pandas as pd` command and the `from pandas import Series, DataFrame` command.] ###Code import pandas as pd from pandas import Series, DataFrame obj = Series([4, 7, -5, 3]) obj obj.values obj.index ###Output _____no_output_____ ###Markdown You can create a `Series` with a customized index, as opposed to the default of simple index numbers, by supplying a list of index labels. ###Code obj2 = Series([4, 7, -5, 3], index=['d', 'b', 'a', 'c']) obj2 obj2.values obj2.index ###Output _____no_output_____ ###Markdown It is these customized indices that set the pandas `Series` object apart from the `numpy` array. 
With `Series`, you can use values in the index when selecting single vlaues or a set of values. ###Code obj2['a'] obj2[['c', 'a', 'd']] ###Output _____no_output_____ ###Markdown `Numpy` array operations, such as filtering with a boolean array, scalar multiplication or applying math functions, will preserve the index-value link. ###Code obj2 obj2 > 0 obj2[obj2 > 0] obj2 * 2 import numpy as np np.exp(obj2) np.log(obj2) ###Output _____no_output_____ ###Markdown The `Series` object has many of the same properties as a fixed-length, ordered `dict`. The `Series` is a one-to-one mapping of index values to data values. It can be stubstitutted into many functions that expect a `dict`. ###Code 'b' in obj2 'e' in obj2 ###Output _____no_output_____ ###Markdown Data stored as a Python `dict` can be easily transformed into a pandas `Series`. Note in the example below that the object displays with the indices in sorted order. ###Code sdata = {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000} sdata obj3 = Series(sdata) obj3 ###Output _____no_output_____ ###Markdown If we create another object with the `dict` named `sdata`, but we label indices that do not match up exactly with the `keys` of the `dict`, the `Series` object will select the indices that do match up and place the missing value of `NaN` for the indices that do not match up. ###Code states = ['California', 'Ohio', 'Oregon', 'Texas'] obj4 = Series(sdata, index=states) obj4 ###Output _____no_output_____ ###Markdown Two useful pandas methods (functions) for detecting missing values `NaN`s are the `.isnull()` method and the `.notnull()` method. ###Code pd.isnull(obj4) obj4.isnull() obj4.notnull() obj4[obj4.notnull()] ###Output _____no_output_____ ###Markdown As was mentioned earlier, one of the main benefits of `pandas` is that its indices are treated as a key associative feature. In contrast to `numpy` arrays, a pandas `Series` will automatically align index numbers in arithmetic operations. ###Code obj3 obj4 obj3 + obj4 ###Output _____no_output_____ ###Markdown We can assign a `.name` attribute to a `Series` object as a whole as well as to the index of the `Series`. This is valuable for labeling data. It is also valuable for efficient manipulation of index values. ###Code obj4.name = 'population' obj4.index.name = 'state' obj4 ###Output _____no_output_____ ###Markdown You can change the index values in place if wanted. You might use this function if your data comes with index values that are not as descriptive as you would like. ###Code obj obj.index = ['Bob', 'Steve', 'Jeff', 'Ryan'] obj ###Output _____no_output_____ ###Markdown 3.2 Pandas: DataFrameMcKinney (2013) describes the pandas `DataFrame` object.> A `DataFrame` represents a tabular, spreadsheet-like data structure containing an ordered collection of columns, each of which can be a different value type (numeric, string, boolean, etc.). The `DataFrame` has both row and column index; it can be thought of as a `dict` of `Series` (one for all sharing the same index). Compared with other such `DataFrame`-like structures you may have used before (like R's `data.frame`), row-oriented and column-oriented operations in `DataFrame` are treated roughly symmetrically. (p. 115)The `DataFrame` is the standard data structure that you would think of when using programs like Stata, SAS, or R. As with the univariate `Series` object, the `DataFrame` allows for traditional data analysis facility while interacting with all of Python's other functionality. 
You will notice that many of the methods available to pandas `DataFrames` are also available in `numpy`. Their methods are usually equivalent, but the advantage of performing operations with the `DataFrame` is its respect for the index values. ###Code data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'], 'year': [2000, 2001, 2002, 2001, 2002], 'pop': [1.5, 1.7, 3.6, 2.4, 2.9]} frame = DataFrame(data) frame ###Output _____no_output_____ ###Markdown You can reorder the columns by passing a list of exactly which columns you would like and in what order. ###Code DataFrame(data, columns=['year', 'state', 'pop']) ###Output _____no_output_____ ###Markdown If you would like to change the names of the columns, you can just pass a new list into the `column` attribute of the `DataFrame`. ###Code frame.columns = ['Pop', 'State', 'Year'] frame ###Output _____no_output_____ ###Markdown When creating the object, we can pass in a column that is not contained in `data`. This will result in a column with that name filled with missing values `NaN`. Further, we can do the same operations with the `index` labels as we did with the `column` labels. ###Code print(DataFrame(data)) frame2 = DataFrame(data, columns=['year', 'state', 'pop', 'debt'], index=['one', 'two', 'three', 'four', 'five']) frame2 frame2.index frame2.columns ###Output _____no_output_____ ###Markdown You can retrieve a column from a `DataFrame` as a `Series` object either by using a `dict`-like notation or by attribute. ###Code frame2['state'] frame2.state ###Output _____no_output_____ ###Markdown You can also create a `Series` from a row from a `DataFrame` by using the `.ix` method. ###Code frame2.ix['three'] ###Output _____no_output_____ ###Markdown We can fill in the `debt` column values using `numpy` arithmetic operations. ###Code frame2.debt = 16.5 frame2 frame2['debt'] = np.arange(5) frame2 ###Output _____no_output_____ ###Markdown The following example shows how nicely data can be combined based on index values. Suppose we know some `debt` values that are associated with certain `index` values and we want to incorporate that information into the `DataFrame`. ###Code val = Series([-1.2, -1.5, -1.7], index=['two', 'four', 'five']) val frame2.debt = val frame2 ###Output _____no_output_____ ###Markdown We can create new columns in the `DataFrame` based on other columns. ###Code frame2['eastern'] = frame2.state == 'Ohio' frame2 ###Output _____no_output_____ ###Markdown And we can delete columns using the `del` keyword. ###Code del frame2['eastern'] frame2 ###Output _____no_output_____ ###Markdown You can take slices of your data by including a list with the columns you want along with a standard `numpy`-type slicing argument. ###Code frame2[['year', 'state', 'pop']][:-2] ###Output _____no_output_____ ###Markdown You could also explicitly list the particular observations that you want using a `DataFrame` call. ###Code DataFrame(frame2, columns=['year', 'state', 'pop'], index=['two', 'five']) ###Output _____no_output_____
examples/fortran/ipython-integration-demo.ipynb
###Markdown Loopy IPython Integration Demo ###Code %load_ext loopy.ipython_ext ###Output _____no_output_____ ###Markdown Without transform code ###Code %%fortran_kernel subroutine fill(out, a, n) implicit none real*8 a, out(n) integer n, i do i = 1, n out(i) = a end do end print(prog) ###Output _____no_output_____ ###Markdown With transform code ###Code split_amount = 128 %%transformed_fortran_kernel subroutine tr_fill(out, a, n) implicit none real*8 a, out(n) integer n, i do i = 1, n out(i) = a end do end !$loopy begin ! ! tr_fill = lp.parse_fortran(SOURCE) ! tr_fill = lp.split_iname(tr_fill, "i", split_amount, ! outer_tag="g.0", inner_tag="l.0") ! RESULT = tr_fill ! !$loopy end print(prog) ###Output _____no_output_____ ###Markdown Loopy IPython Integration Demo ###Code %load_ext loopy.ipython_ext ###Output _____no_output_____ ###Markdown Without transform code ###Code %%fortran_kernel subroutine fill(out, a, n) implicit none real*8 a, out(n) integer n, i do i = 1, n out(i) = a end do end print(fill) ###Output _____no_output_____ ###Markdown With transform code ###Code split_amount = 128 %%transformed_fortran_kernel subroutine tr_fill(out, a, n) implicit none real*8 a, out(n) integer n, i do i = 1, n out(i) = a end do end !$loopy begin ! ! tr_fill, = lp.parse_fortran(SOURCE) ! tr_fill = lp.split_iname(tr_fill, "i", split_amount, ! outer_tag="g.0", inner_tag="l.0") ! RESULT = [tr_fill] ! !$loopy end print(tr_fill) ###Output _____no_output_____
Lesson01/Exercise01-05 and 07.ipynb
###Markdown Exercise 1: Interacting with the Python Shell Using the IPython Commands ###Code import numpy as np vec = np.random.randint(0, 100, size=5) print(vec) for j in np.arange(1, vec.size): v = vec[j] i = j while i > 0 and vec[i-1] > v: vec[i] = vec[i-1] i = i - 1 vec[i] = v ###Output _____no_output_____ ###Markdown Exercise 2: Getting Started with the Jupyter Notebook ###Code x = 2 print(x*2) def mean(a,b): return (a+b)/2 mean(10,20) ###Output _____no_output_____ ###Markdown Exercise 3: Reading Data with Pandas ###Code import pandas as pd df = pd.read_csv("https://raw.githubusercontent.com/TrainingByPackt/Big-Data-Analysis-with-Python/master/Lesson01/imports-85.csv") df.head() ###Output _____no_output_____ ###Markdown Exercise 4: Data Selection and the .loc Method ###Code import numpy as np import pandas as pd url = "https://raw.githubusercontent.com/TrainingByPackt/Big-Data-Analysis-with-Python/master/Lesson01/RadNet_Laboratory_Analysis.csv" df = pd.read_csv(url) df['State'].head() df[df.State == "MN"] df[(df.State == 'CA') & (df['Sample Type'] == 'Drinking Water')] df[(df.State == "MN") ]["I-131"] df.loc[df.State == "MN", "I-131"] df[['I-132']].head() ###Output _____no_output_____ ###Markdown Exercise 5: Exploring Data Types ###Code import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns url = "https://raw.githubusercontent.com/TrainingByPackt/Big-Data-Analysis-with-Python/master/Lesson01/RadNet_Laboratory_Analysis.csv" df = pd.read_csv(url) df.dtypes df['Date Posted'] = pd.to_datetime(df['Date Posted']) df['Date Collected'] = pd.to_datetime(df['Date Collected']) columns = df.columns id_cols = ['State', 'Location', "Date Posted", 'Date Collected', 'Sample Type', 'Unit'] columns = list(set(columns) - set(id_cols)) columns df['Cs-134'] = df['Cs-134'].apply(lambda x: np.nan if x == "Non-detect" else x) df.loc[:, columns] = df.loc[:, columns].applymap(lambda x: np.nan if x == 'Non-detect' else x) df.loc[:, columns] = df.loc[:, columns].applymap(lambda x: np.nan if x == 'ND' else x) for col in columns: df[col] = pd.to_numeric(df[col]) df.dtypes df['State'] = df['State'].astype('category') df['Location'] = df['Location'].astype('category') df['Unit'] = df['Unit'].astype('category') df['Sample Type'] = df['Sample Type'].astype('category') df.dtypes ###Output _____no_output_____ ###Markdown Exercise 7: Exporting Data in Different Formats ###Code import numpy as np import pandas as pd url = "https://raw.githubusercontent.com/TrainingByPackt/Big-Data-Analysis-with-Python/master/Lesson01/RadNet_Laboratory_Analysis.csv" df = pd.read_csv(url) columns = df.columns id_cols = ['State', 'Location', "Date Posted", 'Date Collected', 'Sample Type', 'Unit'] columns = list(set(columns) - set(id_cols)) columns df['Cs-134'] = df['Cs-134'].apply(lambda x: np.nan if x == "Non-detect" else x) df.loc[:, columns] = df.loc[:, columns].applymap(lambda x: np.nan if x == 'Non-detect' else x) df.loc[:, columns] = df.loc[:, columns].applymap(lambda x: np.nan if x == 'ND' else x) df.loc[:, ['State', 'Location', 'Sample Type', 'Unit']] = df.loc[:, ['State', 'Location', 'Sample Type', 'Unit']].applymap(lambda x: x.strip()) df['Date Posted'] = pd.to_datetime(df['Date Posted']) df['Date Collected'] = pd.to_datetime(df['Date Collected']) for col in columns: df[col] = pd.to_numeric(df[col]) df['State'] = df['State'].astype('category') df['Location'] = df['Location'].astype('category') df['Unit'] = df['Unit'].astype('category') df['Sample Type'] = df['Sample Type'].astype('category') 
df.to_csv('radiation_clean.csv', index=False, sep=';', encoding='utf-8') df.to_parquet('radiation_clean.prq', index=False) ###Output _____no_output_____
archived-notebooks/predict-taxi-trip-duration/NYC Taxi 1 - Mean Duration.ipynb
###Markdown Mean DurationThis tutorial illustrates a simple submission for the NYC Taxi Trip Duration competition on Kaggle. In this notebook, we read the dataset and use *only* the mean trip duration to make a submission. This gives us a (not very good) baseline score while allowing the opportunity to talk about the process to read in data and submit.Step 1: Download and Prepare data The first step is to download the raw data from the Kaggle website. For the purposes of this tutorial only two files are necessary: `test.csv` and `train.csv`. You should download them and save into the `data` folder. We begin by importing the necessary packages. We use the `pandas` data analysis library to read in the data in a usable format for python and `numpy` for some mathematical functions. ###Code import pandas as pd import numpy as np import taxi_utils ###Output _____no_output_____ ###Markdown Next, we use the function `read_data` which you can find in the `taxi_utils.py` file in this folder. In this case, `read_data` will create a *dataframe* which stores our tabular data. A dataframe has the `head()` method, which gives only the first five elements of the dataframe. We can use that to get a sense of what the dataframe looks like. ###Code TRAIN_DIR = "data/train.csv" TEST_DIR = "data/test.csv" data_train, data_test = taxi_utils.read_data(TRAIN_DIR, TEST_DIR) data_train.head(5) ###Output _____no_output_____ ###Markdown Step 2: Make a Submission The form of our submission is a csv with the trip `id` and `trip_duration`. We take a guess that every trip will be about the average length of a trip. That turns out to be a fairly poor estimation. ###Code data_test['trip_duration'] = data_train.trip_duration.mean() data_test[['id', 'trip_duration']].head(5) data_test[['id', 'trip_duration']].to_csv('trip_duration_average.csv', index=False) ###Output _____no_output_____
_notebooks/2022-02-08-dacon_airport.ipynb
###Markdown "[DACON] 항공사 고객 만족도 예측 예측 경진대회"- author: Seong Yeon Kim - categories: [DACON, jupyter, EDA, Classifier] - image: images/220208.png 데이터 불러오기 ###Code from google.colab import drive drive.mount('/content/drive') import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from scipy import stats import warnings warnings.filterwarnings("ignore") path = '/content/drive/MyDrive/airport/' train = pd.read_csv(path + 'train.csv') test = pd.read_csv(path + 'test.csv') sample_submission = pd.read_csv(path + 'sample_submission.csv') train.head() ###Output _____no_output_____ ###Markdown 파일이 저장된 위치를 path로 지정하여 불러왔습니다. 이 코드를 사용하신다면 path 값을 파일 저장 위치로 지정하시면 잘 작동됩니다.데이터를 간단히 살펴보면 만족여부를 판단하는 분류문제이며, 범주형 변수가 상당히 많은 것을 알 수 있습니다. ###Code print(train.shape) print(test.shape) ###Output (3000, 24) (2000, 23) ###Markdown 트레인 데이터는 3천개, 테스트 데이터는 2천개이며 특성 개수는 총 23개입니다. ###Code train.info() ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 3000 entries, 0 to 2999 Data columns (total 24 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 id 3000 non-null int64 1 Gender 3000 non-null object 2 Customer Type 3000 non-null object 3 Age 3000 non-null int64 4 Type of Travel 3000 non-null object 5 Class 3000 non-null object 6 Flight Distance 3000 non-null int64 7 Seat comfort 3000 non-null int64 8 Departure/Arrival time convenient 3000 non-null int64 9 Food and drink 3000 non-null int64 10 Gate location 3000 non-null int64 11 Inflight wifi service 3000 non-null int64 12 Inflight entertainment 3000 non-null int64 13 Online support 3000 non-null int64 14 Ease of Online booking 3000 non-null int64 15 On-board service 3000 non-null int64 16 Leg room service 3000 non-null int64 17 Baggage handling 3000 non-null int64 18 Checkin service 3000 non-null int64 19 Cleanliness 3000 non-null int64 20 Online boarding 3000 non-null int64 21 Departure Delay in Minutes 3000 non-null int64 22 Arrival Delay in Minutes 3000 non-null float64 23 target 3000 non-null int64 dtypes: float64(1), int64(19), object(4) memory usage: 562.6+ KB ###Markdown 결측치는 관찰되지 않습니다. 
간단히 변수 관찰 ###Code train.describe() Categorical = ['Gender', 'Customer Type', 'Type of Travel', 'Class'] Order = ['Seat comfort', 'Departure/Arrival time convenient', 'Food and drink','Gate location', 'Inflight wifi service', 'Inflight entertainment', 'Online support', 'Ease of Online booking', 'On-board service', 'Leg room service', 'Baggage handling', 'Checkin service', 'Cleanliness', 'Online boarding'] Continuous = ['Age', 'Flight Distance', 'Departure Delay in Minutes', 'Arrival Delay in Minutes'] ###Output _____no_output_____ ###Markdown 변수를 질적 변수, 이산형 변수, 연속형 변수로 구분해서 관찰할 수 있을 거 같아요.질적 변수Gender : 성별 (M, F), Customer Type : Loyal 여부 (Disloyal 또는 Loyal), Type of Travel : 여행 목적 (Business 또는 Personal Travel), Class : 좌석 종류 ( Eco < Eco Plus < Business)이산형 변수 (모두 만족도 변수이기 때문에 값이 0~5 사이 입니다.)Seat comfort : 좌석 만족도, Departure/Arrival time convenient : 출발/도착 시간 편의성 만족도, Food and drink : 식음료 만족도, Gate location : 게이트 위치 만족도, Inflight wifi service : 기내 와이파이 서비스 만족도, Inflight entertainment : 기내 엔터테인먼트 만족도, Online support : 온라인 지원 만족도, Ease of Online booking : 온라인 예매 편리성 만족도, On-board service : 탑승 서비스 만족도, Leg room service : Leg room 서비스 만족도, Baggage handling : 수하물 처리 만족도, Checkin service : 체크인 서비스 만족도,Cleanliness : 청결도 만족도, Online boarding : 온라인보딩 만족도연속형 변수Age : 나이, Flight Distance : 비행거리, Departure Delay in Minutes : 출발 지연 시간, Arrival Delay in Minutes : 도착 지연 시간 데이터 시각적으로 관찰하기 타겟 변수 관찰 ###Code plt.figure(figsize=[12,8]) plt.text(s="Target variables",x=0,y=1.3, va='bottom',ha='center',color='#189AB4',fontsize=25) plt.pie(train['target'].value_counts(),autopct='%1.1f%%', pctdistance=1.1) plt.legend(['Good', 'Bad'], loc = "upper right",title="Programming Languages", prop={'size': 15}) plt.show() ###Output _____no_output_____ ###Markdown 제 개인적으로 가장 먼저 관찰해야한다고 생각하는 변수인 반응변수 입니다. 만족하는 비율이 조금 높긴 하지만 빈도차이가 크지 않습니다.따로 오버샘플링 등 조치를 취할 필요는 없을 것 같습니다. ###Code train_0 = train[train['target']==0] train_1 = train[train['target']==1] ###Output _____no_output_____ ###Markdown 향후 코드 분석을 위해 타겟 값에 따라 트레인 데이터를 두 그룹으로 분리합니다. 범주형 변수 관찰 ###Code def cat_plot(column): f, ax = plt.subplots(1, 3, figsize=(16, 6)) sns.countplot(x = column, data = train, ax = ax[0], order = train[column].value_counts().index) ax[0].tick_params(labelsize=12) ax[0].set_title('Full train data') ax[0].set_ylabel('count') ax[0].tick_params(rotation=50) sns.countplot(x = column, data = train_1, ax = ax[1], order = train_1[column].value_counts().index) ax[1].tick_params(labelsize=12) ax[1].set_title('target = 1') ax[1].set_ylabel('count') ax[1].tick_params(rotation=50) sns.countplot(x = column, data = train_0, ax = ax[2], order = train_0[column].value_counts().index) ax[2].tick_params(labelsize=12) ax[2].set_title('target = 0') ax[2].set_ylabel('count') ax[2].tick_params(rotation=50) plt.subplots_adjust(wspace=0.3, hspace=0.3) plt.show() cat_plot("Gender") ###Output _____no_output_____ ###Markdown 범주형 변수들을 먼저 시각적으로 관찰하겠습니다. 주요 목표는 이 변수가 과연 타겟값에 영향을 주는지 입니다.이를 확인하는 방법으로 원본 데이터, 타겟 값이 1인 데이터, 타겟 값이 0인 데이터 각각에서 특정 변수가 다른 모양을 가지고 있는지를 관찰합니다.먼저 성별 변수를 관찰해보면 타겟이 1인 데이터에서는 여성이 많고, 타겟이 0인 데이터에서는 남성이 많습니다.그래프 차이가 눈에 띄기 때문에 성별 변수는 타겟 변수에 유의미한 영향이 있다, 여성이 긍정적 응답을 유의미하게 많이 했다라고 판단할 수 있겠습니다. ###Code cat_plot("Customer Type") ###Output _____no_output_____ ###Markdown Customer Type 로얄 여부 변수 입니다. 우선 전체 데이터에서 Loyal 항목이 disloyal 항목보다 훨씬 많습니다.다만 타겟이 1인 데이터와 타겟이 0인 데이터를 비교하면 타겟이 0인 데이터에서 disloyal 항목의 빈도가 높게 나옵니다.통계적으로 검증까진 하진 않았지만, 시각적으로 봐도 두 집단 간 유의미한 차이가 있어보입니다. 즉 이 변수는 유의미한 변수입니다. 
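As a purely illustrative follow-up (not part of the original post), the visual judgement above could be backed up with a chi-square test of independence between `Customer Type` and the target, using the `scipy.stats` module that is already imported:
###Code
# Hypothetical check: chi-square test of independence between Customer Type and target.
# A small p-value supports the visual impression that the two are related.
ct = pd.crosstab(train['Customer Type'], train['target'])
chi2, p, dof, expected = stats.chi2_contingency(ct)
print(f'chi2 = {chi2:.2f}, p-value = {p:.4f}')
###Output
_____no_output_____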
###Code cat_plot("Type of Travel") ###Output _____no_output_____ ###Markdown Type of Travel 여행 목적 변수 입니다. 전체 데이터를 보면 비지니스 목적의 비행이 더 많습니다.다만 타겟이 0인 데이터, 즉 불만족하다는 응답을 준 데이터를 살펴보면 개인적인 여행을 한 사람에 비중이 조금 높아지는데요.여행을 목적으로 비행을 한 손님은 만족의 기준이 상대적으로 높다고 볼 수 있겠네요.이 변수 역시 통계적 검증은 하지 않았지만 시각적으로 봤을때 유의미하게 타겟 값에 영향을 주는 변수인 것 같습니다. ###Code cat_plot("Class") ###Output _____no_output_____ ###Markdown Class 항공 좌석 변수입니다. 우선 변수에 대해 설명을 하면 이코노미 < 이코노미 플러스 < 비지니스 순으로 높은 등급의 좌석입니다.데이터를 살펴보면 비지니스 좌석을 사용한 사람은 대부분 만족하고, 이코노미 좌석을 사용한 사람은 대부분 불만족하는 것 같습니다.이코노미 플러스 좌석 같은 경우 중간 등급 좌석이기 때문에 두 그룹간 차이가 눈에 띄진 않으나 불만족한 비율이 조금 높군요.타겟 변수에 따른 두 그룹간 그래프에 모양이 아에 달라지기 때문에 이 변수는 상당히 유의미한 변수 입니다. ###Code train = pd.get_dummies(train) test = pd.get_dummies(test) train.head() ###Output _____no_output_____ ###Markdown 모든 범주형 변수가 유의미 하기 때문에 제외하는 변수 없이 사용하고자 합니다.밑에 다루는 이산형 변수들은 순서라는게 있지만, 범주형 변수들은 순서가 없는 쌩 범주이기 때문에 라벨인코딩 보다는 원핫인코딩이 적절합니다.판다스에 get_dummies 함수를 사용하면 데이터 내 범주형 변수만 알아서 뽑아서 자동으로 원핫인코딩을 해줍니다. 이산형 변수 관찰 ###Code cat_plot("Seat comfort") ###Output _____no_output_____ ###Markdown 이산형 변수에 경우 0~5까지 크기 순서가 정해저 있다는 점에서 범주형 변수와 차이가 있으나 항목 별로 그래프를 관찰할 수 있다는 점에서 범주형 변수와 동일하게 생각할 수 있습니다. 그렇기 때문에 범주형 변수와 동일하게 시각화하여 관찰하겠습니다.Seat comfort 좌석 만족도 변수인데요. 우리가 보통 만족도 설문조사를 할 때 양 극단으로 답변(5, 0)하는 개인의 소신을 보여주는 기입은 잘 하지 않고 평범한 대답(2,3,4)을 하려는 특성이 있습니다.여기서도 앞서 말한 특성이 잘 들어나 있는 것 같습니다. 다만 분석에서 중요한 건 이 부분이 아니라 타겟 변수가 다를 때 응답의 형태가 다른가 인데요.좌석 만족도가 4/5로 높은 경우 대부분 만족한다는 응답을 많이 보였고, 좌석 만족도가 1/2/3인 경우 고객 만족도가 불만족인 경우가 많습니다.특이한 점은 0인데요. 0을 응답한 거의 대부분의 사람이 고객 만족도에서 만족한다는 응답을 보였습니다.0이 가장 안좋은 응답이라고 생각 했었는데, 안좋은 응답이라기 보단 결측치를 표기한 것으로 생각 됩니다. 변수를 더 확인해야겠습니다. ###Code cat_plot('Departure/Arrival time convenient') ###Output _____no_output_____ ###Markdown 다음 변수는 Departure/Arrival time convenient 출발/도착시간 만족도 입니다. 대부분 4,5점을 주어 만족한다는 응답입니다.전체 트레인 데이터 분포로 보아 출발/도착시간이 연착되지 않았다면 대부분 만족한다고 답변한 것 같습니다.다만 두 그룹 간 그래프 차이가 크게 나지 않는데요. 2, 3번 항목이 순서가 뒤바뀐 것 이외에는 눈에 띄는 차이가 안보입니다.특히 1번 항목은 출발/도착시간 만족도가 형편 없었다는 건데, 실제 만족도는 더 높은 것을 보면 이 변수가 의미가 없다고 생각되네요.앞서 말한 0번 항목은 여기선 반반 분포가 되있습니다. 조금 이상한데, 다음 변수를 또 봐야할 것 같습니다. ###Code cat_plot('Food and drink') ###Output _____no_output_____ ###Markdown Food and drink 식/음료 만족도 변수 입니다.값이 클 수록 타겟 1에 속할 확률이 늘어나는 모습을 보입니다. 하지만 1 항목의 경우 타겟 1일 확률과 0일 확률이 반반입니다.또 0 항목의 경우 전체 개수가 100개 이상으로 적지 않은 표본임에도 타겟 1일 확률이 매우 높은 것이 5 항목과 유사한 정도입니다.직관적으로 이해되진 않으나 조정이 필요해보입니다. ###Code cat_plot('Gate location') ###Output _____no_output_____ ###Markdown Gate location 게이트 위치 만족도 변수 입니다. 앞서 말한 대로 이 변수 또한 가운데 응답이 몰려있습니다.조금 특이한 것은 3번 항목은 대부분 타겟 0 데이터이고, 1/2 항목은 타겟 1 데이터 입니다.타겟 별 데이터의 그래프가 눈에 띄게 다르지만 해석하는데는 어려움이 있습니다. 답변자가 응답을 성실히 했는지도 의심해봐야겠습니다.또 특이한건 0 응답이 없습니다. 이 부분도 조금 이상하네요. ###Code cat_plot('Inflight wifi service') ###Output _____no_output_____ ###Markdown Inflight wifi service 기내 와이파이 만족도 변수 입니다. 그래프를 관찰하면 대채로 와이파이 만족도가 높을 수록 타겟 값이 1일 확률이 유의미하게 높은 것 같습니다.특이한 점은 0 항목이 적은 수로 존재하는데 저 값은 이상치로 추정되므로 다른 값으로 대체해야겠습니다. ###Code cat_plot('Inflight entertainment') ###Output _____no_output_____ ###Markdown Inflight entertainment 기내 엔터테이먼트 만족도 변수 입니다.와이파이 만족도 변수와 비슷하게 4/5 항목일수록 타겟 값이 1일 확률이 높아집니다.이 변수에서도 0이 일부 관찰되는데, 0의 활용을 고민해야겠습니다. 가장 많이 나온 4로 대체하는 것도 하나에 방법입니다. ###Code cat_plot('Online support') ###Output _____no_output_____ ###Markdown Online support 온라인 지원 만족도 변수 입니다.앞선 두 변수와 비슷하게 전반적으로 4/5 항목이 많으며 4/5 항목일수록 타겟 1일 확률이 많이 높아집니다. ###Code cat_plot('Ease of Online booking') ###Output _____no_output_____ ###Markdown Ease of Online booking 온라인 예매 편의성 만족도 변수 입니다.앞선 세 변수와 마찬가지로 4/5 항목일수록 타겟 1일 확률이 높아집니다.여기서도 0 항목이 극소수로 존재하는데 빈도가 높은 항목인 4 항목으로 대체하는게 좋을 것 같아요. 
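As a small hypothetical aside (not in the original post), the 0 "ratings" that appear to act as missing-value codes can be counted per satisfaction column before deciding how to recode them:
###Code
# Hypothetical check: number of 0 responses in each ordinal satisfaction column,
# using the Order list defined earlier.
zero_counts = (train[Order] == 0).sum().sort_values(ascending=False)
print(zero_counts)
###Output
_____no_output_____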
###Code cat_plot('On-board service') ###Output _____no_output_____ ###Markdown On-board service 탑승 서비스 만족도 변수 입니다.높은 점수를 받을 수록 타겟 1을 받을 확률이 점점 높아지는 형태가 뚜렷한 것을 그래프를 보면 알 수 있습니다.계속 같은 말을 반복하는데, 이 말을 하는 것은 타겟을 판단하는데 굉장히 좋은 변수라는 것 입니다. ###Code cat_plot('Leg room service') ###Output _____no_output_____ ###Markdown Leg room service 발이 편안했는지 묻는 변수 입니다.일반적으로는 숫자가 커질수록 타겟 1이 될 가능성이 높습니다만, 여기서는 조금 의외인 점이 3 항목 보다 2 항목이 타겟 1이 될 확률이 높습니다.표본이 조금 튄 것으로 생각할 수 있겠습니다만, 다르게 말하면 2나 3이나 타겟을 가리는데 별 차이가 없다고도 생각할 수 있겠죠.2/3 항목을 병합하는 것도 좋은 아이디어인 것 같아요. 여기서도 0이 극소수 관찰되는데 가장 큰 빈도인 4로 바꿔주겠습니다. ###Code cat_plot('Baggage handling') ###Output _____no_output_____ ###Markdown Baggage handling 수하물 처리 만족도 변수 입니다.이 변수 또한 윗 변수와 비슷하게 숫자가 커질수록 타겟 1일 확률이 늘어나나 2/3 항목은 다소 뒤바뀐 결과입니다.이 변수 또한 2/3 항목을 병합하겠습니다. ###Code cat_plot('Checkin service') ###Output _____no_output_____ ###Markdown Checkin service 체크인 서비스 만족도 변수 입니다.전반적으로 높은 만족도 점수를 기록하며, 점수가 클 수록 타겟 1 값을 가질 확률이 뚜렷히 높아지는 것을 확인할 수 있습니다. ###Code cat_plot('Cleanliness') ###Output _____no_output_____ ###Markdown Cleanliness 청결도 만족도 변수 입니다.윗 변수와 마찬가지로 대체로 높은 만족도 점수를 기록하며, 점수가 클수록 타겟 1 값을 가질 확률이 높아지는 것으로 보입니다. ###Code cat_plot('Online boarding') ###Output _____no_output_____ ###Markdown Online boarding 온라인 보딩 만족도 변수 입니다.이 변수 또한 값이 커질수록 타겟 1 값을 가질 확률이 높아집니다.0 값이 관찰되고 있는데, 마찬가지로 최고 빈도 항목으로 대체하겠습니다. ###Code # 깔끔한 변수들 train['Seat comfort'][train['Seat comfort'] == 0] = 5 test['Seat comfort'][test['Seat comfort'] == 0] = 5 train['Inflight wifi service'][train['Inflight wifi service'] == 0] = 4 test['Inflight wifi service'][test['Inflight wifi service'] == 0] = 4 train['Ease of Online booking'][train['Ease of Online booking'] == 0] = 4 test['Ease of Online booking'][test['Ease of Online booking'] == 0] = 4 train['On-board service'][train['On-board service'] == 0] = 4 test['On-board service'][test['On-board service'] == 0] = 4 # 1,2 항목 병합 필요한 변수들 train['Inflight entertainment'][train['Inflight entertainment'] == 1] = 2 train['Inflight entertainment'][train['Inflight entertainment'] == 0] = 4 test['Inflight entertainment'][test['Inflight entertainment'] == 1] = 2 test['Inflight entertainment'][test['Inflight entertainment'] == 0] = 4 train['Online support'][train['Online support'] == 1] = 2 train['Online support'][train['Online support'] == 0] = 4 test['Online support'][test['Online support'] == 1] = 2 test['Online support'][test['Online support'] == 0] = 4 train['Checkin service'][train['Checkin service'] == 1] = 2 train['Checkin service'][train['Checkin service'] == 0] = 4 test['Checkin service'][test['Checkin service'] == 1] = 2 test['Checkin service'][test['Checkin service'] == 0] = 4 train['Cleanliness'][train['Cleanliness'] == 1] = 2 train['Cleanliness'][train['Cleanliness'] == 0] = 4 test['Cleanliness'][test['Cleanliness'] == 1] = 2 test['Cleanliness'][test['Cleanliness'] == 0] = 4 train['Online boarding'][train['Online boarding'] == 1] = 2 train['Online boarding'][train['Online boarding'] == 0] = 4 test['Online boarding'][test['Online boarding'] == 1] = 2 test['Online boarding'][test['Online boarding'] == 0] = 4 # 2,3 항목 변환 필요한 변수들 train['Leg room service'][train['Leg room service'] == 2] = 3 train['Leg room service'][train['Leg room service'] == 0] = 4 test['Leg room service'][test['Leg room service'] == 2] = 3 test['Leg room service'][test['Leg room service'] == 0] = 4 train['Baggage handling'][train['Baggage handling'] == 2] = 3 train['Baggage handling'][train['Baggage handling'] == 0] = 4 test['Baggage handling'][test['Baggage handling'] == 2] = 3 test['Baggage 
handling'][test['Baggage handling'] == 0] = 4 # 조금 특별한 변환 필요한 변수들 train['Food and drink'][train['Food and drink'] == 1] = -1 train['Food and drink'][train['Food and drink'] == 2] = 1 train['Food and drink'][train['Food and drink'] == 3] = 2 train['Food and drink'][train['Food and drink'] == -1] = 3 train['Food and drink'][train['Food and drink'] == 0] = 5 test['Food and drink'][test['Food and drink'] == 1] = -1 test['Food and drink'][test['Food and drink'] == 2] = 1 test['Food and drink'][test['Food and drink'] == 3] = 2 test['Food and drink'][test['Food and drink'] == -1] = 3 test['Food and drink'][test['Food and drink'] == 0] = 5 train['Gate location'][train['Gate location'] == 1] = 5 train['Gate location'][train['Gate location'] == 2] = 5 train['Gate location'][train['Gate location'] == 0] = 3 test['Gate location'][test['Gate location'] == 1] = 5 test['Gate location'][test['Gate location'] == 2] = 5 test['Gate location'][test['Gate location'] == 0] = 3 # 삭제할 변수 train.drop(['Departure/Arrival time convenient'], axis = 1, inplace = True) test.drop(['Departure/Arrival time convenient'], axis = 1, inplace = True) ###Output _____no_output_____ ###Markdown 우선 0이 관찰되는 변수도 있고 아닌 변수도 있는데, 관찰되지 않더라도 테스트 데이터 있을 수 있으므로 공통적으로 적용하겠습니다.대부분 변수의 가장 큰 빈도인 항목이 4입니다. 특별한 언급이 없는 변수는 0 항목을 4로 대체하였습니다.이산형 변수를 시각적으로 다루면서 느낀점은 역시 사람이 하는 설문조사다 보니깐 데이터의 질이 높진 못한 것 같아요.이산형 변수를 앞서 관찰한 결과를 통해 다섯 가지 종류로 나눠서 전처리 하였습니다. 다섯 가지 종류는 다음과 같습니다.깔끔한 변수들Seat comfort (0은 5로), Inflight wifi service, Ease of Online booking, On-board service1,2 항목 병합 필요한 변수들Inflight entertainment, Online support, Checkin service, Cleanliness, Online boarding2,3 항목 병합 필요한 변수들Leg room service, Baggage handling형태가 다소 이상하지만 유의미한 변수들Food and drink(1은 3으로, 3은 2로, 2는 1로, 0은 5로), Gate location(2,1을 5로, 0은 3으로)유의미 하지 않은 변수(사용하지 않을 변수)Departure/Arrival time convenient 연속형 변수 관찰 ###Code def num_plot(column): fig, axes = plt.subplots(1, 3, figsize=(16, 6)) sns.distplot(train[column], ax = axes[0]) axes[0].tick_params(labelsize=12) axes[0].set_title('Full train data') axes[0].set_ylabel('count') sns.distplot(train_1[column], ax = axes[1]) axes[1].tick_params(labelsize=12) axes[1].set_title('target = 1') axes[1].set_ylabel('count') sns.distplot(train_0[column], ax = axes[2]) axes[2].tick_params(labelsize=12) axes[2].set_title('target = 0') axes[2].set_ylabel('count') plt.subplots_adjust(wspace=0.3, hspace=0.3) print('타겟 1 데이터의 평균 :', (train_1[column]).mean()) print('타겟 0 데이터의 평균 :', (train_0[column]).mean()) print('데이터의 표준오차 :', train[column].std() / np.sqrt(3000)) num_plot("Age") ###Output 타겟 1 데이터의 평균 : 40.65047961630695 타겟 0 데이터의 평균 : 37.390390390390394 데이터의 표준오차 : 0.2758477133981643 ###Markdown 연속형 변수는 데이터를 타겟이 1과 0인 두 그룹으로 나눠서 그룹 별 히스토그램이 차이가 있는지를 시각적으로 관찰하겠습니다.우선 전체 트레인 데이터의 나이 변수는 정규분포와 유사합니다. 변수로써 좋은 성질이죠.타겟 값이 1인 데이터의 나이 평균은 40살, 타겟 값이 0인 데이터의 나이 평균은 37살로 크게 차이나진 않습니다.다만 표본이 3천개면 통계적으로 상당히 많은 편인데(요즘 많이 수행하는 대통령 여론조사도 천명 뽑습니다.) 3살 차이는 두 그룹간 나이 평균이 유의미하게 난다고 볼 수 있습니다.또 그래프도 봉우리가 있는 위치가 조금 다른 것이 보이기도 합니다. 그렇기 때문에 이 변수는 사용하겠습니다. ###Code num_plot("Flight Distance") ###Output 타겟 1 데이터의 평균 : 1935.2583932853718 타겟 0 데이터의 평균 : 2042.9632132132133 데이터의 표준오차 : 18.770618492960658 ###Markdown Flight Distance 비행거리 변수 입니다. 단순하게 생각했을때 비행 거리가 길면 만족도가 떨어질 확률이 높겠죠.전체 트레인 데이터의 분포가 정규분포와 유사하나 우측 꼬리가 조금 길어보입니다. 로그변환 해주는 것도 좋겠군요.실제 데이터도 예측한대로 만족도가 1인 그룹의 평균 비행 시간이 짧게 나옵니다. 그래프를 봐도 타겟 1인 그래프가 앞쪽에 값이 많이있어보이죠.저 차이가 유의미한 것인지도 생각해야하는데, 표준오차 대비 두 그룹간 차이가 꽤 있으므로 이 변수도 유의미한 변수로 취급하겠습니다. 
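As a hypothetical supplement (not in the original post), the "difference versus standard error" argument above can be formalised with Welch's t-test on the two target groups created earlier:
###Code
# Hypothetical check: Welch's t-test for a difference in mean flight distance
# between satisfied (target = 1) and unsatisfied (target = 0) customers.
t_stat, p_val = stats.ttest_ind(train_1['Flight Distance'], train_0['Flight Distance'], equal_var=False)
print(f't = {t_stat:.2f}, p-value = {p_val:.4f}')
###Output
_____no_output_____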
###Code num_plot("Departure Delay in Minutes") num_plot("Arrival Delay in Minutes") print('두 변수간 상관계수:', train['Arrival Delay in Minutes'].corr(train["Departure Delay in Minutes"])) print('출발 지연시간이 0인 값:', sum(train['Departure Delay in Minutes'] == 0)) print('도착 지연시간이 0인 값:', sum(train['Arrival Delay in Minutes'] == 0)) ###Output 두 변수간 상관계수: 0.9768732919464286 출발 지연시간이 0인 값: 1705 도착 지연시간이 0인 값: 1661 ###Markdown Departure Delay in Minutes, Arrival Delay in Minutes 출발 지연 시간, 도착 지연 시간 변수 입니다.근데 비행기가 출발이 지연되면 도착도 자연스럽게 지연이 되겠죠? 즉 두 변수간 상관계수가 매우 높을 것으로 추축되고 실제로도 그러합니다.이 말은 굳이 두 변수를 사용할 필요가 없다, 오히려 다중공선성 문제를 가져오게 됩니다. 0.97이면 거의 한 변수나 다름 없죠. 또 지연이 됬는지 안됬는지를 구분할 수도 있습니다. 실제로 절반 이상의 값이 0을 기록했는데 지연이 안됬음을 의미합니다.다만 출발은 정상적으로 했는데, 도착은 연착될 수도 있으므로 도착 지연 시간 변수를 사용하도록 하겠습니다.그리고 큰 값은 엄청 큰 우측 꼬리가 긴 분포형태이기 때문에 로그변환을 해야합니다. 이때 0은 로그변환이 안되므로 전체 값에 1을 더한 뒤 로그변환 하는 log1p 함수를 사용해야 한다는 것도 유의해야합니다. ###Code train['Flight Distance'] = np.log1p(train['Flight Distance']) train['Arrival Delay in Minutes'] = np.log1p(train['Arrival Delay in Minutes']) test['Flight Distance'] = np.log1p(test['Flight Distance']) test['Arrival Delay in Minutes'] = np.log1p(test['Arrival Delay in Minutes']) train.drop(['Departure Delay in Minutes'], axis = 1, inplace = True) test.drop(['Departure Delay in Minutes'], axis = 1, inplace = True) ###Output _____no_output_____ ###Markdown 연속형 변수를 시각적으로 관찰하면서 해결하려 했던 부분을 적은 코드입니다. 간단한 랜덤포레스트 모델 적합 ###Code train_label = train['target'] train.drop(['id', 'target'], axis = 1, inplace= True) test.drop(['id'], axis = 1, inplace= True) ###Output _____no_output_____ ###Markdown 타겟 값을 라벨이라는 변수에 따로 뺀 뒤 분석에 의미 없는 변수인 id와 함께 지웁니다. ###Code from sklearn.ensemble import RandomForestClassifier rf = RandomForestClassifier(random_state = 0, n_estimators = 100) rf.fit(train,train_label) sample_submission['target'] = rf.predict(test) sample_submission.to_csv('airport_1.csv',index=False) ###Output _____no_output_____
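###Markdown
As an optional, hypothetical next step (not part of the original post), a quick cross-validated score gives a rough idea of how well the random forest generalises before submitting; accuracy is used here purely as an example metric.
###Code
# Hypothetical addition: 5-fold cross-validated accuracy of the same random forest.
from sklearn.model_selection import cross_val_score

cv_scores = cross_val_score(
    RandomForestClassifier(random_state=0, n_estimators=100),
    train, train_label, cv=5, scoring='accuracy'
)
print(f'CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}')
###Output
_____no_output_____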
AWS-RoseTTAFold.ipynb
###Markdown AWS-RoseTTAFold I. Introduction This notebook runs the [RoseTTAFold](https://www.ipd.uw.edu/2021/07/rosettafold-accurate-protein-structure-prediction-accessible-to-all/) algorithm developed by Minkyung Baek et al. and described in [M. Baek et al., Science 10.1126/science.abj8754 2021](https://www.ipd.uw.edu/wp-content/uploads/2021/07/Baek_etal_Science2021_RoseTTAFold.pdf) on AWS. The AWS workflow depends on a Batch compute environment. II. Environment setup ###Code ## Install dependencies %pip install -q -q -r requirements.txt ## Import helper functions at rfutils/rfutils.py from rfutils import rfutils ## Load additional dependencies from Bio import SeqIO from Bio.Seq import Seq from Bio.SeqRecord import SeqRecord import boto3 import glob import json import pandas as pd import sagemaker pd.set_option("max_colwidth", None) # Get service clients session = boto3.session.Session() sm_session = sagemaker.session.Session() region = session.region_name role = sagemaker.get_execution_role() s3 = boto3.client("s3", region_name=region) account_id = boto3.client("sts").get_caller_identity().get("Account") bucket = sm_session.default_bucket() print(f"S3 bucket name is {bucket}") ###Output _____no_output_____ ###Markdown III. Input Protein Sequence Enter a protein sequence manually ###Code seq = SeqRecord( Seq("MKQHKAMIVALIVICITAVVAALVTRKDLCEVHIRTGQTEVAVF"), id="YP_025292.1", name="HokC", description="toxic membrane protein, small", ) ###Output _____no_output_____ ###Markdown Or provide the path to a fasta file ###Code seq = SeqIO.read("data/T1078.fa", "fasta") print(f"Protein sequence for analysis is \n{seq}") ###Output _____no_output_____ ###Markdown IV. Submit RoseTTAFold Jobs Generate Job Name ###Code job_name = rfutils.create_job_name(seq.id) print(f"Automatically-generated job name is: {job_name}") ###Output _____no_output_____ ###Markdown Upload fasta file to S3 ###Code input_uri = rfutils.upload_fasta_to_s3(seq, bucket, job_name) ###Output _____no_output_____ ###Markdown Submit jobs to AWS Batch queues Select the job definitions and Batch queues for your job. ###Code batch_resources = rfutils.get_rosettafold_batch_resources(region=region) cpu_queue = batch_resources["CPUJobQueue"][0] gpu_queue = batch_resources["GPUJobQueue"][0] cpu_data_prep_job_def = batch_resources["CPUDataPrepJobDefinition"][0] cpu_predict_job_def = batch_resources["CPUPredictJobDefinition"][0] gpu_predict_job_def = batch_resources["GPUPredictJobDefinition"][0] batch_resources ###Output _____no_output_____ ###Markdown Because our test sequence is small (less than 400 residues) we will run the prediction step on a GPU to decrease the job duration from hours to minutes. ###Code two_step_response = rfutils.submit_2_step_job( bucket=bucket, job_name=job_name, data_prep_job_definition=cpu_data_prep_job_def, data_prep_queue=cpu_queue, data_prep_cpu=8, data_prep_mem=32, predict_job_definition=gpu_predict_job_def, # Change this to the cpu_predict_job_def for large proteins predict_queue=gpu_queue, # Change this to the cpu_queue for large proteins predict_cpu=4, predict_mem=16, predict_gpu=True, # Change this to False for large proteins ) data_prep_jobId = two_step_response[0]["jobId"] predict_jobId = two_step_response[1]["jobId"] ###Output _____no_output_____ ###Markdown V. Check Status of Data Prep and Prediction Jobs ###Code rfutils.get_rf_job_info( cpu_queue, gpu_queue, hrs_in_past=1, ) ###Output _____no_output_____ ###Markdown VI. 
View Data Prep Results

Pause while the data prep job starts up
###Code
rfutils.wait_for_job_start(data_prep_jobId)
###Output
_____no_output_____
###Markdown
Get logs for data prep job (Run this multiple times to see how the job progresses)
###Code
data_prep_logStreamName = rfutils.get_batch_job_info(data_prep_jobId)["logStreamName"]
rfutils.get_batch_logs(data_prep_logStreamName).tail(n=5)
###Output
_____no_output_____
###Markdown
Retrieve and Display Multiple Sequence Alignment (MSA) Results
###Code
rfutils.display_msa(data_prep_jobId, bucket)
###Output
_____no_output_____
###Markdown
VII. View Prediction Results

Pause while the prediction job starts up
###Code
rfutils.wait_for_job_start(predict_jobId)
###Output
_____no_output_____
###Markdown
Get logs for prediction job (Run this multiple times to see how the job progresses)
###Code
# Fetch the CloudWatch log stream for the prediction job
predict_logStreamName = rfutils.get_batch_job_info(predict_jobId)["logStreamName"]
rfutils.get_batch_logs(predict_logStreamName).tail(n=5)
###Output
_____no_output_____
###Markdown
VIII. View Job Metrics
###Code
metrics = rfutils.get_rf_job_metrics(job_name, bucket, region)
print(f'Number of sequences in MSA: {metrics["DATA_PREP"]["MSA_COUNT"]}')
print(f'Number of templates: {metrics["DATA_PREP"]["TEMPLATE_COUNT"]}')
print(f'MSA duration (sec): {metrics["DATA_PREP"]["MSA_DURATION"]}')
print(f'SS duration (sec): {metrics["DATA_PREP"]["SS_DURATION"]}')
print(f'Template search duration (sec): {metrics["DATA_PREP"]["TEMPLATE_DURATION"]}')
print(f'Total data prep duration (sec): {metrics["DATA_PREP"]["TOTAL_DATA_PREP_DURATION"]}')
print(f'Total predict duration (sec): {metrics["PREDICT"]["TOTAL_PREDICT_DURATION"]}')
###Output
_____no_output_____
###Markdown
IX. Retrieve and Display Predicted Structure
###Code
rfutils.display_structure(predict_jobId, bucket, vmin=0.5, vmax=0.9)
###Output
_____no_output_____
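###Markdown
As an optional, hypothetical extra (not part of the original workflow), the same job information can also be pulled directly from AWS Batch with `boto3`, which can be handy when debugging outside of the `rfutils` helpers:
###Code
# Hypothetical sketch: query AWS Batch directly for the status of both jobs.
batch = boto3.client("batch", region_name=region)
for job_id in [data_prep_jobId, predict_jobId]:
    job = batch.describe_jobs(jobs=[job_id])["jobs"][0]
    print(job["jobName"], job["status"], job.get("statusReason", ""))
###Output
_____no_output_____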
1. Beginner/Pytorch4_4_Pooling_Layer.ipynb
###Markdown Max-Pooling ###Code import torch import torch.nn as nn import torchvision.transforms as transforms import PIL.Image as Image import numpy as np import matplotlib.pyplot as plt img = Image.open('../data/example.jpg') maxpool = nn.MaxPool2d(kernel_size=(2,2), stride=2) img_tensor = transforms.ToTensor()(img) img_maxpool = maxpool(img_tensor) img_size = img.size maxpool_size = np.array(img_maxpool).shape[1:] plt.figure(figsize=(10,10)) plt.subplot(1,2,1) plt.imshow(img) plt.axis('off') plt.title(f'Original {img_size}') plt.subplot(1,2,2) plt.imshow(np.array(img_maxpool.permute(1,2,0))) plt.axis('off') plt.title(f'MaxPool {maxpool_size}') plt.savefig('maxpool_result.jpg', bbox_inches='tight') ###Output _____no_output_____ ###Markdown Average-Pooling ###Code img = Image.open('../data/example.jpg') avgpool = nn.AvgPool2d(kernel_size=(2,2), stride=2) img_tensor = transforms.ToTensor()(img) img_avgpool = avgpool(img_tensor) img_size = img.size avgpool_size = np.array(img_avgpool).shape[1:] plt.figure(figsize=(10,10)) plt.subplot(1,2,1) plt.imshow(img) plt.axis('off') plt.title(f'Original {img_size}') plt.subplot(1,2,2) plt.imshow(np.array(img_avgpool.permute(1,2,0))) plt.axis('off') plt.title(f'AvgPool {avgpool_size}') plt.savefig('avgpool_result.jpg', bbox_inches='tight') ###Output _____no_output_____ ###Markdown Global Average-Pooling ###Code img = Image.open('../data/example.jpg') img_tensor = transforms.ToTensor()(img) input_image = img_tensor.unsqueeze(0) print(input_image.shape) conv = nn.Conv2d(3,10,3)(input_image) conv_shape = conv.shape print(conv_shape) global_avgpool = nn.AvgPool2d(kernel_size=conv_shape[2:]) print(global_avgpool(conv)) ###Output torch.Size([1, 3, 295, 295]) torch.Size([1, 10, 293, 293]) tensor([[[[ 0.2456]], [[ 0.1059]], [[ 0.2954]], [[ 0.2536]], [[-0.0453]], [[-0.0066]], [[-0.0869]], [[-0.2485]], [[ 0.3848]], [[-0.4621]]]], grad_fn=<AvgPool2DBackward>) ###Markdown Adaptive Pooling Layer ###Code adap_pool = nn.AdaptiveAvgPool2d(output_size=(10)) plt.figure(figsize=(10,10)) plt.subplot(1,2,1) plt.imshow(img) plt.axis('off') plt.title('Original (295, 295)') plt.subplot(1,2,2) plt.imshow(np.array(adap_pool(img_tensor).permute(1,2,0))) plt.axis('off') plt.title('Adaptive Average Pooling (10,10)') plt.savefig('adaptive_pool.jpg', bbox_inches='tight') ###Output _____no_output_____
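###Markdown
As a small illustrative addition (not part of the original notebook), `nn.AdaptiveAvgPool2d(1)` is the usual way to build global average pooling into a model: it collapses any spatial size down to 1x1, so the classifier head works for arbitrary input resolutions. For the fixed-size pooling layers above, the output size follows floor((H - kernel) / stride) + 1, e.g. 295 -> 147 for a 2x2 kernel with stride 2.
###Code
# Hypothetical example: global average pooling feeding a small classifier head.
head = nn.Sequential(
    nn.Conv2d(3, 10, 3),        # (N, 3, H, W)  -> (N, 10, H-2, W-2)
    nn.AdaptiveAvgPool2d(1),    # any spatial size -> (N, 10, 1, 1)
    nn.Flatten(),               # (N, 10, 1, 1) -> (N, 10)
    nn.Linear(10, 2),           # example 2-class output
)
print(head(input_image).shape)  # expected: torch.Size([1, 2])
###Output
_____no_output_____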
hdp-6-STD (1).ipynb
###Markdown **Recuerde no agregar o quitar celdas en este notebook, ni modificar su tipo. Si lo hace, el sistema automaticamente lo calificará con cero punto cero (0.0)** Obtenga los 5 registros con valores más pequeños en la tercera columna. ###Code %%writefile input.txt B 1999-08-28 14 E 1999-12-06 121 E 1993-07-21 17 C 1991-02-12 2 E 1995-04-25 161 A 1992-08-22 14 B 1999-06-11 12 E 1993-01-27 8 E 1999-09-10 11 E 1990-05-03 16 E 1994-02-14 101 A 1988-04-27 9 A 1990-10-06 10 E 1985-02-12 16 E 1998-09-14 7 B 1994-08-30 17 A 1997-12-15 13 B 1995-08-23 101 B 1998-11-22 13 B 1997-04-09 6 E 1993-12-27 181 E 1999-01-14 15 A 1992-09-19 18 B 1993-03-02 14 B 1999-10-21 131 A 1990-08-31 12 C 1994-01-25 10 E 1990-02-09 18 A 1990-09-26 5 A 1993-05-08 16 B 1995-09-06 141 E 1991-02-18 14 A 1993-01-11 141 A 1990-07-22 4 C 1994-09-09 151 C 1994-07-27 1 D 1990-10-10 151 A 1990-09-05 11 B 1991-10-01 151 A 1994-10-25 13 ###Output Overwriting input.txt ###Markdown Mapper ###Code %%writefile mapper.py #! /usr/bin/env python3 import sys import itertools class Mapper(): def __init__(self,stream): self.stream=stream def emit(self,key,value): sys.stdout.write("{},{}\n".format(key,value)) def __iter__(self): for line in self.stream: key=line.split(" ")[0] val=float(line.split(" ")[6]) yield(key,val) def map(self): for key, val in self: self.emit(key=key,value=val) if __name__ == "__main__": mapper=Mapper(sys.stdin) mapper.map() ###Output Overwriting mapper.py ###Markdown Reducer ###Code %%writefile reducer.py #!/usr/bin/env python import sys import itertools class Reducer(): def __init__(self,stream): self.stream=stream def emit(self,key,value): sys.stdout.write("{},{}\n".format(key,value)) def __iter__(self): for line in self.stream: key=line.split(",")[0] value=float(line.split(",")[1]) yield(key,value) def reduce(self): lista=[] for key, group in itertools.groupby(self,lambda x:x[0]): for key, value in group: lista.append((key,value)) lista.sort(key=lambda x: x[1],reverse=False) for i in lista[:5]: self.emit(key=i[0],value=i[1]) if __name__ == '__main__': reducer=Reducer(sys.stdin) reducer.reduce() ###Output Overwriting reducer.py ###Markdown Ejecución ###Code %%bash rm -rf output STREAM=$HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar chmod +x mapper.py chmod +x reducer.py hadoop jar $STREAM -input input.txt -output output -mapper mapper.py -reducer reducer.py cat output/part-00000 ###Output C,1.0 C,2.0 A,4.0 A,5.0 B,6.0
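###Markdown
A common way to sanity-check a Hadoop Streaming job before submitting it to the cluster (a suggested extra, not part of the original exercise) is to simulate it locally with a shell pipe, where `sort` stands in for the shuffle/sort phase:
###Code
%%bash
cat input.txt | python3 mapper.py | sort -t, -k1,1 | python3 reducer.py
###Output
_____no_output_____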
examples/notebooks/Creating Models/5-a-simple-SEI-model.ipynb
###Markdown Creating a Simple Model for SEI GrowthBefore adding a new model, please read the [contribution guidelines](https://github.com/pybamm-team/PyBaMM/blob/master/CONTRIBUTING.md) In this notebook, we will run through the steps involved in creating a new model within pybamm. We will then solve and plot the outputs of the model. We have chosen to implement a very simple model of SEI growth. We first give a brief derivation of the model and discuss how to nondimensionalise the model so that we can show the full process of model conception to solution within a single notebook. Note: if you run the entire notebook and then try to evaluate the earlier cells, you will likely receive an error. This is because the state of objects is mutated as it is passed through various stages of processing. In this case, we recommend that you restart the Kernel and then evaluate cells in turn through the notebook. A Simple Model of Solid Electrolyte Interphase (SEI) Growth The SEI is a porous layer that forms on the surfaces of negative electrode particles from the products of electrochemical reactions which consume lithium and electrolyte solvents. In the first few cycles of use, a lithium-ion battery loses a large amount of capacity; this is generally attributed to lithium being consumed to produce SEI. However, after a few cycles, the rate of capacity loss slows at a rate often (but not always) reported to scale with the square root of time. SEI growth is therefore often considered to be limited in some way by a diffusion process. Dimensional Model We shall first state our model in dimensional form, but to enter the model in pybamm, we strongly recommend converting models into dimensionless form. The main reason for this is that dimensionless models are typically better conditioned than dimensional models and so several digits of accuracy can be gained. To distinguish between the dimensional and dimensionless models, we shall always employ a superscript $*$ on dimensional variables. ![SEI.png](SEI.png "SEI Model Schematic") In our simple SEI model, we consider a one-dimensional SEI which extends from the surface of a planar negative electrode at $x^*=0$ until $x^*=L^*$, where $L^*$ is the thickness of the SEI. Since the SEI is porous, there is some electrolyte within the region $x^*\in[0, L^*]$ and therefore some concentration of solvent, $c^*$. Within the porous SEI, the solvent is transported via a diffusion process according to:$$\frac{\partial c^*}{\partial t^*} = - \nabla^* \cdot N^*, \quad N^* = - D^*(c^*) \nabla^* c^* \label{dim:eqn:solvent-diffusion}\tag{1}\\$$where $t^*$ is the time, $N^*$ is the solvent flux, and $D^*(c^*)$ is the effective solvent diffusivity (a function of the solvent concentration).On the electrode-SEI surface ($x^*=0$) the solvent is consumed by the SEI growth reaction, $R^*$. We assume that diffusion of solvent in the bulk electrolyte ($x^*>L^*$) is fast so that on the SEI-electrolyte surface ($x^*=L^*$) the concentration of solvent is fixed at the value $c^*_{\infty}$. Therefore, the boundary conditions are$$ N^*|_{x^*=0} = - R^*, \quad c^*|_{x^*=L^*} = c^*_{\infty},$$We also assume that the concentration of solvent within the SEI is initially uniform and equal to the bulk electrolyte solvent concentration, so that the initial condition is$$ c^*|_{t^*=0} = c^*_{\infty}$$Since the SEI is growing, we require an additional equation for the SEI thickness. 
The thickness of the SEI grows at a rate proportional to the SEI growth reaction $R^*$, where the constant of proportionality is the partial molar volume of the reaction products, $\hat{V}^*$. We also assume that the SEI is initially of thickness $L^*_0$. Therefore, we have$$ \frac{d L^*}{d t^*} = \hat{V}^* R^*, \quad L^*|_{t^*=0} = L^*_0$$Finally, we assume for the sake of simplicity that the SEI growth reaction is irreversible and that the potential difference across the SEI is constant. The reaction is also assumed to be proportional to the concentration of solvent at the electrode-SEI surface ($x^*=0$). Therefore, the reaction flux is given by$$ R^* = k^* c^*|_{x^*=0}$$where $k^*$ is the reaction rate constant (which is in general dependent upon the potential difference across the SEI). Non-dimensionalisation To convert the model into dimensionless form, we scale the dimensional variables and dimensional functions. For this model, we choose to scale $x^*$ by the current SEI thickness, the current SEI thickness by the initial SEI thickness, solvent concentration with the bulk electrolyte solvent concentration, and the solvent diffusion with the solvent diffusion in the electrolyte. We then use these scalings to infer the scaling for the solvent flux. Therefore, we have$$x^* = L^* x, \quad L^*= L^*_0 L \quad c^* = c^*_{\infty} c, \quad D^*(c^*) = D^*(c^*_{\infty}) D(c), \quad N^* = \frac{D^*(c^*_{\infty}) c^*_{\infty}}{L^*_0}N.$$We also choose to scale time by the solvent diffusion timescale so that $$t^* = \frac{(L^*_0)^2}{D^*(c^*_{\infty})}t.$$Finally, we choose to scale the reaction flux in the same way as the solvent flux so that we have$$ R^* = \frac{D^*(c^*_{\infty}) c^*_{\infty}}{L^*_0} R.$$We note that there are multiple possible choices of scalings. Whilst they will all give the ultimately give the same answer, some choices are better than others depending on the situation under study. Dimensionless Model After substituting in the scalings from the previous section, we obtain the dimensionless form of the model given by: Solvent diffusion through SEI:\begin{align}\frac{\partial c}{\partial t} = \frac{\hat{V} R}{L} x \cdot \nabla c - \frac{1}{L}\nabla \cdot N, \quad N = - \frac{1}{L}D(c) \nabla c, \label{eqn:solvent-diffusion}\tag{1}\\N|_{x=0} = - R, \quad c|_{x=1} = 1 \label{bc:solvent-diffusion}\tag{2} \quadc|_{t=0} = 1; \end{align}Growth reaction:$$R = k c|_{x=0}; \label{eqn:reaction}\tag{3}$$SEI thickness:$$\frac{d L}{d t} = \hat{V} R, \quad L|_{t=0} = 1; \label{eqn:SEI-thickness}\tag{4}$$where the dimensionless parameters are given by$$ k = \frac{k^* L^*_0}{D^*(c^*_{\infty})}, \quad \hat{V} = \hat{V}^* c^*_{\infty}, \quad D(c) = \frac{D^*(c^*)}{D^*(c^*_{\infty})}. \label{parameters}\tag{5}$$In the above, the additional advective term in the diffusion equation arises due to our choice to scale the spatial coordinate $x^*$ with the time-dependent SEI layer thickness $L^*$. Entering the Model into PyBaMM As always, we begin by importing pybamm and changing our working directory to the root of the pybamm folder. ###Code import pybamm import numpy as np import os os.chdir(pybamm.__path__[0]+'/..') ###Output _____no_output_____ ###Markdown A model is defined in six steps:1. Initialise model2. Define parameters and variables3. State governing equations4. State boundary conditions5. State initial conditions6. State output variablesWe shall proceed through each step to enter our simple SEI growth model. 1. Initialise model We first initialise the model using the `BaseModel` class. 
This sets up the required structure for our model. ###Code model = pybamm.BaseModel() ###Output _____no_output_____ ###Markdown 2. Define parameters and variables In our SEI model, we have two dimensionless parameters, $k$ and $\hat{V}$, and one dimensionless function $D(c)$, which are all given in terms of the dimensional parameters, see (5). In pybamm, inputs are dimensional, so we first state all the dimensional parameters. We then define the dimensionless parameters, which are expressed an non-dimensional groupings of dimensional parameters. To define the dimensional parameters, we use the `Parameter` object to create parameter symbols. Parameters which are functions are defined using `FunctionParameter` object and should be defined within a python function as shown. ###Code # dimensional parameters k_dim = pybamm.Parameter("Reaction rate constant") L_0_dim = pybamm.Parameter("Initial thickness") V_hat_dim = pybamm.Parameter("Partial molar volume") c_inf_dim = pybamm.Parameter("Bulk electrolyte solvent concentration") def D_dim(cc): return pybamm.FunctionParameter("Diffusivity", {"Solvent concentration [mol.m-3]": cc}) # dimensionless parameters k = k_dim * L_0_dim / D_dim(c_inf_dim) V_hat = V_hat_dim * c_inf_dim def D(cc): c_dim = c_inf_dim * cc return D_dim(c_dim) / D_dim(c_inf_dim) ###Output _____no_output_____ ###Markdown We now define the dimensionless variables in our model. Since these are the variables we solve for directly, we do not need to write them in terms of the dimensional variables. We simply use `SpatialVariable` and `Variable` to create the required symbols: ###Code x = pybamm.SpatialVariable("x", domain="SEI layer", coord_sys="cartesian") c = pybamm.Variable("Solvent concentration", domain="SEI layer") L = pybamm.Variable("SEI thickness") ###Output _____no_output_____ ###Markdown 3. State governing equations We can now use the symbols we have created for our parameters and variables to write out our governing equations. Note that before we use the reaction flux and solvent flux, we must derive new symbols for them from the defined parameter and variable symbols. Each governing equation must also be stated in the explicit form `d/dt = rhs` since pybamm only stores the right hand side (rhs) and assumes that the left hand side is the time derivative. The governing equations are then simply ###Code # SEI reaction flux R = k * pybamm.BoundaryValue(c, "left") # solvent concentration equation N = - (1 / L) * D(c) * pybamm.grad(c) dcdt = (V_hat * R) * pybamm.inner(x / L, pybamm.grad(c)) - (1 / L) * pybamm.div(N) # SEI thickness equation dLdt = V_hat * R ###Output _____no_output_____ ###Markdown Once we have stated the equations, we can add them to the `model.rhs` dictionary. This is a dictionary whose keys are the variables being solved for, and whose values correspond right hand sides of the governing equations for each variable. ###Code model.rhs = {c: dcdt, L: dLdt} ###Output _____no_output_____ ###Markdown 4. State boundary conditions We only have boundary conditions on the solvent concentration equation. We must state where a condition is Neumann (on the gradient) or Dirichlet (on the variable itself). The boundary condition on the electrode-SEI (x=0) boundary is: $$ N|_{x=0} = - R, \quad N|_{x=0} = - \frac{1}{L} D(c|_{x=0} )\nabla c|_{x=0}$$which is a Neumann condition. To implement this boundary condition in pybamm, we must first rearrange the equation so that the gradient of the concentration, $\nabla c|_{x=0}$, is the subject. 
Therefore we have$$ \nabla c|_{x=0} = \frac{L R}{D(c|_{x=0} )}$$which we enter into pybamm as ###Code # electrode-SEI boundary condition (x=0) (lbc = left boundary condition) D_left = pybamm.BoundaryValue(D(c), "left") # pybamm requires BoundaryValue(D(c)) and not D(BoundaryValue(c)) grad_c_left = L * R / D_left ###Output _____no_output_____ ###Markdown On the SEI-electrolyte boundary (x=1), we have the boundary condition$$ c|_{x=1} = 1$$ which is a Dirichlet condition and is just entered as ###Code c_right = pybamm.Scalar(1) ###Output _____no_output_____ ###Markdown We now load these boundary conditions into the `model.boundary_conditions` dictionary in the following way, being careful to state the type of boundary condition: ###Code model.boundary_conditions = {c: {"left": (grad_c_left, "Neumann"), "right": (c_right, "Dirichlet")}} ###Output _____no_output_____ ###Markdown 5. State initial conditions There are two initial conditions in our model:$$ c|_{t=0} = 1, \quad L|_{t=0} = 1$$ which are simply written in pybamm as ###Code c_init = pybamm.Scalar(1) L_init = pybamm.Scalar(1) ###Output _____no_output_____ ###Markdown and then included into the `model.initial_conditions` dictionary: ###Code model.initial_conditions = {c: c_init, L: L_init} ###Output _____no_output_____ ###Markdown 6. State output variables We already have everything required in model for the model to be used and solved, but we have not yet stated what we actually want to output from the model. PyBaMM allows users to output any combination of symbols as an output variable therefore allowing the user the flexibility to output important quantities without further tedious postprocessing steps. Some useful outputs for this simple model are:- the SEI thickness- the SEI growth rate- the solvent concentrationThese are added to the model by adding entries to the `model.variables` dictionary ###Code model.variables = {"SEI thickness": L, "SEI growth rate": dLdt, "Solvent concentration": c} ###Output _____no_output_____ ###Markdown We can also output the dimensional versions of these variables by multiplying by the scalings used to non-dimensionalise. By convention, we recommend including the units in the output variables name so that they do not overwrite the dimensionless output variables. To add new entries to the dictionary we used the method `.update()`. ###Code L_dim = L_0_dim * L dLdt_dim = (D_dim(c_inf_dim) / L_0_dim ) * dLdt c_dim = c_inf_dim * c model.variables.update({ "SEI thickness [m]": L_dim, "SEI growth rate [m/s]": dLdt_dim, "Solvent concentration [mols/m^3]": c_dim } ) ###Output _____no_output_____ ###Markdown The model is now fully defined and ready to be used. If you plan on reusing the model several times, you can additionally set model defaults which may include: a default geometry to run the model on, a default set of parameter values, a default solver, etc. Using the Model The model will now behave in the same way as any of the inbuilt PyBaMM models. However, to demonstrate that the model works we display the steps involved in solving the model but we will not go into details within this notebook. ###Code # define geometry geometry = pybamm.Geometry() geometry.add_domain("SEI layer", {"primary": {x: {"min": pybamm.Scalar(0), "max": pybamm.Scalar(1)}}}) def Diffusivity(cc): return cc * 10**(-5) # parameter values (not physically based, for example only!) 
param = pybamm.ParameterValues( { "Reaction rate constant": 20, "Initial thickness": 1e-6, "Partial molar volume": 10, "Bulk electrolyte solvent concentration": 1, "Diffusivity": Diffusivity, } ) # process model and geometry param.process_model(model) param.process_geometry(geometry) # mesh and discretise submesh_types = {"SEI layer": pybamm.Uniform1DSubMesh} var_pts = {x: 100} mesh = pybamm.Mesh(geometry, submesh_types, var_pts) spatial_methods = {"SEI layer": pybamm.FiniteVolume()} disc = pybamm.Discretisation(mesh, spatial_methods) disc.process_model(model) # solve solver = pybamm.ScipySolver() t = np.linspace(0, 100, 100) solution = solver.solve(model, t) # Extract output variables L_out = solution["SEI thickness"] c_out = solution["Solvent concentration"] x = np.linspace(0, 1, 100) ###Output _____no_output_____ ###Markdown Using these outputs, we can now plot the SEI thickness as a function of time and also the solvent concentration profile within the SEI. We use a slider to plot the concentration profile at different times. ###Code import matplotlib.pyplot as plt def plot(t): f, (ax1, ax2) = plt.subplots(1, 2 ,figsize=(10,5)) ax1.plot(solution.t, L_out(solution.t)) ax1.plot([t], [L_out(t)], 'r.') plot_c, = ax2.plot(x * L_out(t), c_out(t, x)) ax1.set_ylabel('SEI thickness') ax1.set_xlabel('t') ax2.set_ylabel('Solvent concentration') ax2.set_xlabel('x') ax2.set_ylim(0, 1.1) ax2.set_xlim(0, x[-1]*L_out(solution.t[-1])) plt.show() import ipywidgets as widgets widgets.interact(plot, t=widgets.FloatSlider(min=0,max=solution.t[-1],step=0.1,value=0)); ###Output _____no_output_____ ###Markdown Creating a Simple Model for SEI GrowthBefore adding a new model, please read the [contribution guidelines](https://github.com/pybamm-team/PyBaMM/blob/master/CONTRIBUTING.md) In this notebook, we will run through the steps involved in creating a new model within pybamm. We will then solve and plot the outputs of the model. We have chosen to implement a very simple model of SEI growth. We first give a brief derivation of the model and discuss how to nondimensionalise the model so that we can show the full process of model conception to solution within a single notebook. Note: if you run the entire notebook and then try to evaluate the earlier cells, you will likely receive an error. This is because the state of objects is mutated as it is passed through various stages of processing. In this case, we recommend that you restart the Kernel and then evaluate cells in turn through the notebook. A Simple Model of Solid Electrolyte Interphase (SEI) Growth The SEI is a porous layer that forms on the surfaces of negative electrode particles from the products of electrochemical reactions which consume lithium and electrolyte solvents. In the first few cycles of use, a lithium-ion battery loses a large amount of capacity; this is generally attributed to lithium being consumed to produce SEI. However, after a few cycles, the rate of capacity loss slows at a rate often (but not always) reported to scale with the square root of time. SEI growth is therefore often considered to be limited in some way by a diffusion process. Dimensional Model We shall first state our model in dimensional form, but to enter the model in pybamm, we strongly recommend converting models into dimensionless form. The main reason for this is that dimensionless models are typically better conditioned than dimensional models and so several digits of accuracy can be gained. 
To distinguish between the dimensional and dimensionless models, we shall always employ a superscript $*$ on dimensional variables. ![SEI.png](SEI.png "SEI Model Schematic") In our simple SEI model, we consider a one-dimensional SEI which extends from the surface of a planar negative electrode at $x^*=0$ until $x^*=L^*$, where $L^*$ is the thickness of the SEI. Since the SEI is porous, there is some electrolyte within the region $x^*\in[0, L^*]$ and therefore some concentration of solvent, $c^*$. Within the porous SEI, the solvent is transported via a diffusion process according to:$$\frac{\partial c^*}{\partial t^*} = - \nabla^* \cdot N^*, \quad N^* = - D^*(c^*) \nabla^* c^* \label{dim:eqn:solvent-diffusion}\tag{1}\\$$where $t^*$ is the time, $N^*$ is the solvent flux, and $D^*(c^*)$ is the effective solvent diffusivity (a function of the solvent concentration).On the electrode-SEI surface ($x^*=0$) the solvent is consumed by the SEI growth reaction, $R^*$. We assume that diffusion of solvent in the bulk electrolyte ($x^*>L^*$) is fast so that on the SEI-electrolyte surface ($x^*=L^*$) the concentration of solvent is fixed at the value $c^*_{\infty}$. Therefore, the boundary conditions are$$ N^*|_{x^*=0} = - R^*, \quad c^*|_{x^*=L^*} = c^*_{\infty},$$We also assume that the concentration of solvent within the SEI is initially uniform and equal to the bulk electrolyte solvent concentration, so that the initial condition is$$ c^*|_{t^*=0} = c^*_{\infty}$$Since the SEI is growing, we require an additional equation for the SEI thickness. The thickness of the SEI grows at a rate proportional to the SEI growth reaction $R^*$, where the constant of proportionality is the partial molar volume of the reaction products, $\hat{V}^*$. We also assume that the SEI is initially of thickness $L^*_0$. Therefore, we have$$ \frac{d L^*}{d t^*} = \hat{V}^* R^*, \quad L^*|_{t^*=0} = L^*_0$$Finally, we assume for the sake of simplicity that the SEI growth reaction is irreversible and that the potential difference across the SEI is constant. The reaction is also assumed to be proportional to the concentration of solvent at the electrode-SEI surface ($x^*=0$). Therefore, the reaction flux is given by$$ R^* = k^* c^*|_{x^*=0}$$where $k^*$ is the reaction rate constant (which is in general dependent upon the potential difference across the SEI). Non-dimensionalisation To convert the model into dimensionless form, we scale the dimensional variables and dimensional functions. For this model, we choose to scale $x^*$ by the current SEI thickness, the current SEI thickness by the initial SEI thickness, solvent concentration with the bulk electrolyte solvent concentration, and the solvent diffusion with the solvent diffusion in the electrolyte. We then use these scalings to infer the scaling for the solvent flux. Therefore, we have$$x^* = L^* x, \quad L^*= L^*_0 L \quad c^* = c^*_{\infty} c, \quad D^*(c^*) = D^*(c^*_{\infty}) D(c), \quad N^* = \frac{D^*(c^*_{\infty}) c^*_{\infty}}{L^*_0}N.$$We also choose to scale time by the solvent diffusion timescale so that $$t^* = \frac{(L^*_0)^2}{D^*(c^*_{\infty})}t.$$Finally, we choose to scale the reaction flux in the same way as the solvent flux so that we have$$ R^* = \frac{D^*(c^*_{\infty}) c^*_{\infty}}{L^*_0} R.$$We note that there are multiple possible choices of scalings. Whilst they will all give the ultimately give the same answer, some choices are better than others depending on the situation under study. 
Dimensionless Model After substituting in the scalings from the previous section, we obtain the dimensionless form of the model given by: Solvent diffusion through SEI:\begin{align}\frac{\partial c}{\partial t} = \frac{\hat{V} R}{L} x \cdot \nabla c - \frac{1}{L}\nabla \cdot N, \quad N = - \frac{1}{L}D(c) \nabla c, \label{eqn:solvent-diffusion}\tag{1}\\N|_{x=0} = - R, \quad c|_{x=1} = 1 \label{bc:solvent-diffusion}\tag{2} \quadc|_{t=0} = 1; \end{align}Growth reaction:$$R = k c|_{x=0}; \label{eqn:reaction}\tag{3}$$SEI thickness:$$\frac{d L}{d t} = \hat{V} R, \quad L|_{t=0} = 1; \label{eqn:SEI-thickness}\tag{4}$$where the dimensionless parameters are given by$$ k = \frac{k^* L^*_0}{D^*(c^*_{\infty})}, \quad \hat{V} = \hat{V}^* c^*_{\infty}, \quad D(c) = \frac{D^*(c^*)}{D^*(c^*_{\infty})}. \label{parameters}\tag{5}$$In the above, the additional advective term in the diffusion equation arises due to our choice to scale the spatial coordinate $x^*$ with the time-dependent SEI layer thickness $L^*$. Entering the Model into PyBaMM As always, we begin by importing pybamm and changing our working directory to the root of the pybamm folder. ###Code import pybamm import numpy as np import os os.chdir(pybamm.__path__[0]+'/..') ###Output _____no_output_____ ###Markdown A model is defined in six steps:1. Initialise model2. Define parameters and variables3. State governing equations4. State boundary conditions5. State initial conditions6. State output variablesWe shall proceed through each step to enter our simple SEI growth model. 1. Initialise model We first initialise the model using the `BaseModel` class. This sets up the required structure for our model. ###Code model = pybamm.BaseModel() ###Output _____no_output_____ ###Markdown 2. Define parameters and variables In our SEI model, we have two dimensionless parameters, $k$ and $\hat{V}$, and one dimensionless function $D(c)$, which are all given in terms of the dimensional parameters, see (5). In pybamm, inputs are dimensional, so we first state all the dimensional parameters. We then define the dimensionless parameters, which are expressed an non-dimensional groupings of dimensional parameters. To define the dimensional parameters, we use the `Parameter` object to create parameter symbols. Parameters which are functions are defined using `FunctionParameter` object and should be defined within a python function as shown. ###Code # dimensional parameters k_dim = pybamm.Parameter("Reaction rate constant") L_0_dim = pybamm.Parameter("Initial thickness") V_hat_dim = pybamm.Parameter("Partial molar volume") c_inf_dim = pybamm.Parameter("Bulk electrolyte solvent concentration") def D_dim(cc): return pybamm.FunctionParameter("Diffusivity", cc) # dimensionless parameters k = k_dim * L_0_dim / D_dim(c_inf_dim) V_hat = V_hat_dim * c_inf_dim def D(cc): c_dim = c_inf_dim * cc return D_dim(c_dim) / D_dim(c_inf_dim) ###Output _____no_output_____ ###Markdown We now define the dimensionless variables in our model. Since these are the variables we solve for directly, we do not need to write them in terms of the dimensional variables. We simply use `SpatialVariable` and `Variable` to create the required symbols: ###Code x = pybamm.SpatialVariable("x", domain="SEI layer", coord_sys="cartesian") c = pybamm.Variable("Solvent concentration", domain="SEI layer") L = pybamm.Variable("SEI thickness") ###Output _____no_output_____ ###Markdown 3. 
State governing equations We can now use the symbols we have created for our parameters and variables to write out our governing equations. Note that before we use the reaction flux and solvent flux, we must derive new symbols for them from the defined parameter and variable symbols. Each governing equation must also be stated in the explicit form `d/dt = rhs` since pybamm only stores the right hand side (rhs) and assumes that the left hand side is the time derivative. The governing equations are then simply ###Code # SEI reaction flux R = k * pybamm.BoundaryValue(c, "left") # solvent concentration equation N = - (1 / L) * D(c) * pybamm.grad(c) dcdt = (V_hat * R) * pybamm.inner(x / L, pybamm.grad(c)) - (1 / L) * pybamm.div(N) # SEI thickness equation dLdt = V_hat * R ###Output _____no_output_____ ###Markdown Once we have stated the equations, we can add them to the `model.rhs` dictionary. This is a dictionary whose keys are the variables being solved for, and whose values correspond right hand sides of the governing equations for each variable. ###Code model.rhs = {c: dcdt, L: dLdt} ###Output _____no_output_____ ###Markdown 4. State boundary conditions We only have boundary conditions on the solvent concentration equation. We must state where a condition is Neumann (on the gradient) or Dirichlet (on the variable itself). The boundary condition on the electrode-SEI (x=0) boundary is: $$ N|_{x=0} = - R, \quad N|_{x=0} = - \frac{1}{L} D(c|_{x=0} )\nabla c|_{x=0}$$which is a Neumann condition. To implement this boundary condition in pybamm, we must first rearrange the equation so that the gradient of the concentration, $\nabla c|_{x=0}$, is the subject. Therefore we have$$ \nabla c|_{x=0} = \frac{L R}{D(c|_{x=0} )}$$which we enter into pybamm as ###Code # electrode-SEI boundary condition (x=0) (lbc = left boundary condition) D_left = pybamm.BoundaryValue(D(c), "left") # pybamm requires BoundaryValue(D(c)) and not D(BoundaryValue(c)) grad_c_left = L * R / D_left ###Output _____no_output_____ ###Markdown On the SEI-electrolyte boundary (x=1), we have the boundary condition$$ c|_{x=1} = 1$$ which is a Dirichlet condition and is just entered as ###Code c_right = pybamm.Scalar(1) ###Output _____no_output_____ ###Markdown We now load these boundary conditions into the `model.boundary_conditions` dictionary in the following way, being careful to state the type of boundary condition: ###Code model.boundary_conditions = {c: {"left": (grad_c_left, "Neumann"), "right": (c_right, "Dirichlet")}} ###Output _____no_output_____ ###Markdown 5. State initial conditions There are two initial conditions in our model:$$ c|_{t=0} = 1, \quad L|_{t=0} = 1$$ which are simply written in pybamm as ###Code c_init = pybamm.Scalar(1) L_init = pybamm.Scalar(1) ###Output _____no_output_____ ###Markdown and then included into the `model.initial_conditions` dictionary: ###Code model.initial_conditions = {c: c_init, L: L_init} ###Output _____no_output_____ ###Markdown 6. State output variables We already have everything required in model for the model to be used and solved, but we have not yet stated what we actually want to output from the model. PyBaMM allows users to output any combination of symbols as an output variable therefore allowing the user the flexibility to output important quantities without further tedious postprocessing steps. 
Some useful outputs for this simple model are:- the SEI thickness- the SEI growth rate- the solvent concentrationThese are added to the model by adding entries to the `model.variables` dictionary ###Code model.variables = {"SEI thickness": L, "SEI growth rate": dLdt, "Solvent concentration": c} ###Output _____no_output_____ ###Markdown We can also output the dimensional versions of these variables by multiplying by the scalings used to non-dimensionalise. By convention, we recommend including the units in the output variables name so that they do not overwrite the dimensionless output variables. To add new entries to the dictionary we used the method `.update()`. ###Code L_dim = L_0_dim * L dLdt_dim = (D_dim(c_inf_dim) / L_0_dim ) * dLdt c_dim = c_inf_dim * c model.variables.update({ "SEI thickness [m]": L_dim, "SEI growth rate [m/s]": dLdt_dim, "Solvent concentration [mols/m^3]": c_dim } ) ###Output _____no_output_____ ###Markdown The model is now fully defined and ready to be used. If you plan on reusing the model several times, you can additionally set model defaults which may include: a default geometry to run the model on, a default set of parameter values, a default solver, etc. Using the Model The model will now behave in the same way as any of the inbuilt PyBaMM models. However, to demonstrate that the model works we display the steps involved in solving the model but we will not go into details within this notebook. ###Code # define geometry geometry = pybamm.Geometry() geometry.add_domain("SEI layer", {"primary": {x: {"min": pybamm.Scalar(0), "max": pybamm.Scalar(1)}}}) def Diffusivity(cc): return cc * 10**(-5) # parameter values (not physically based, for example only!) param = pybamm.ParameterValues( { "Reaction rate constant": 20, "Initial thickness": 1e-6, "Partial molar volume": 10, "Bulk electrolyte solvent concentration": 1, "Diffusivity": Diffusivity, } ) # process model and geometry param.process_model(model) param.process_geometry(geometry) # mesh and discretise submesh_types = {"SEI layer": pybamm.Uniform1DSubMesh} var_pts = {x: 100} mesh = pybamm.Mesh(geometry, submesh_types, var_pts) spatial_methods = {"SEI layer": pybamm.FiniteVolume()} disc = pybamm.Discretisation(mesh, spatial_methods) disc.process_model(model) # solve solver = pybamm.ScipySolver() t = np.linspace(0, 100, 100) solution = solver.solve(model, t) # Extract output variables L_out = solution["SEI thickness"] c_out = solution["Solvent concentration"] x = np.linspace(0, 1, 100) ###Output _____no_output_____ ###Markdown Using these outputs, we can now plot the SEI thickness as a function of time and also the solvent concentration profile within the SEI. We use a slider to plot the concentration profile at different times. 
###Code import matplotlib.pyplot as plt def plot(t): f, (ax1, ax2) = plt.subplots(1, 2 ,figsize=(10,5)) ax1.plot(solution.t, L_out(solution.t)) ax1.plot([t], [L_out(t)], 'r.') plot_c, = ax2.plot(x * L_out(t), c_out(t, x)) ax1.set_ylabel('SEI thickness') ax1.set_xlabel('t') ax2.set_ylabel('Solvent concentration') ax2.set_xlabel('x') ax2.set_ylim(0, 1.1) ax2.set_xlim(0, x[-1]*L_out(solution.t[-1])) plt.show() import ipywidgets as widgets widgets.interact(plot, t=widgets.FloatSlider(min=0,max=solution.t[-1],step=0.1,value=0)); ###Output _____no_output_____ ###Markdown Creating a Simple Model for SEI GrowthBefore adding a new model, please read the [contribution guidelines](https://github.com/pybamm-team/PyBaMM/blob/master/CONTRIBUTING.md) In this notebook, we will run through the steps involved in creating a new model within pybamm. We will then solve and plot the outputs of the model. We have chosen to implement a very simple model of SEI growth. We first give a brief derivation of the model and discuss how to nondimensionalise the model so that we can show the full process of model conception to solution within a single notebook. Note: if you run the entire notebook and then try to evaluate the earlier cells, you will likely receive an error. This is because the state of objects is mutated as it is passed through various stages of processing. In this case, we recommend that you restart the Kernel and then evaluate cells in turn through the notebook. A Simple Model of Solid Electrolyte Interphase (SEI) Growth The SEI is a porous layer that forms on the surfaces of negative electrode particles from the products of electrochemical reactions which consume lithium and electrolyte solvents. In the first few cycles of use, a lithium-ion battery loses a large amount of capacity; this is generally attributed to lithium being consumed to produce SEI. However, after a few cycles, the rate of capacity loss slows at a rate often (but not always) reported to scale with the square root of time. SEI growth is therefore often considered to be limited in some way by a diffusion process. Dimensional Model We shall first state our model in dimensional form, but to enter the model in pybamm, we strongly recommend converting models into dimensionless form. The main reason for this is that dimensionless models are typically better conditioned than dimensional models and so several digits of accuracy can be gained. To distinguish between the dimensional and dimensionless models, we shall always employ a superscript $*$ on dimensional variables. ![SEI.png](SEI.png "SEI Model Schematic") In our simple SEI model, we consider a one-dimensional SEI which extends from the surface of a planar negative electrode at $x^*=0$ until $x^*=L^*$, where $L^*$ is the thickness of the SEI. Since the SEI is porous, there is some electrolyte within the region $x^*\in[0, L^*]$ and therefore some concentration of solvent, $c^*$. Within the porous SEI, the solvent is transported via a diffusion process according to:$$\frac{\partial c^*}{\partial t^*} = - \nabla^* \cdot N^*, \quad N^* = - D^*(c^*) \nabla^* c^* \label{dim:eqn:solvent-diffusion}\tag{1}\\$$where $t^*$ is the time, $N^*$ is the solvent flux, and $D^*(c^*)$ is the effective solvent diffusivity (a function of the solvent concentration).On the electrode-SEI surface ($x^*=0$) the solvent is consumed by the SEI growth reaction, $R^*$. 
We assume that diffusion of solvent in the bulk electrolyte ($x^*>L^*$) is fast so that on the SEI-electrolyte surface ($x^*=L^*$) the concentration of solvent is fixed at the value $c^*_{\infty}$. Therefore, the boundary conditions are$$ N^*|_{x^*=0} = - R^*, \quad c^*|_{x^*=L^*} = c^*_{\infty},$$We also assume that the concentration of solvent within the SEI is initially uniform and equal to the bulk electrolyte solvent concentration, so that the initial condition is$$ c^*|_{t^*=0} = c^*_{\infty}$$Since the SEI is growing, we require an additional equation for the SEI thickness. The thickness of the SEI grows at a rate proportional to the SEI growth reaction $R^*$, where the constant of proportionality is the partial molar volume of the reaction products, $\hat{V}^*$. We also assume that the SEI is initially of thickness $L^*_0$. Therefore, we have$$ \frac{d L^*}{d t^*} = \hat{V}^* R^*, \quad L^*|_{t^*=0} = L^*_0$$Finally, we assume for the sake of simplicity that the SEI growth reaction is irreversible and that the potential difference across the SEI is constant. The reaction is also assumed to be proportional to the concentration of solvent at the electrode-SEI surface ($x^*=0$). Therefore, the reaction flux is given by$$ R^* = k^* c^*|_{x^*=0}$$where $k^*$ is the reaction rate constant (which is in general dependent upon the potential difference across the SEI). Non-dimensionalisation To convert the model into dimensionless form, we scale the dimensional variables and dimensional functions. For this model, we choose to scale $x^*$ by the current SEI thickness, the current SEI thickness by the initial SEI thickness, solvent concentration with the bulk electrolyte solvent concentration, and the solvent diffusion with the solvent diffusion in the electrolyte. We then use these scalings to infer the scaling for the solvent flux. Therefore, we have$$x^* = L^* x, \quad L^*= L^*_0 L \quad c^* = c^*_{\infty} c, \quad D^*(c^*) = D^*(c^*_{\infty}) D(c), \quad N^* = \frac{D^*(c^*_{\infty}) c^*_{\infty}}{L^*_0}N.$$We also choose to scale time by the solvent diffusion timescale so that $$t^* = \frac{(L^*_0)^2}{D^*(c^*_{\infty})}t.$$Finally, we choose to scale the reaction flux in the same way as the solvent flux so that we have$$ R^* = \frac{D^*(c^*_{\infty}) c^*_{\infty}}{L^*_0} R.$$We note that there are multiple possible choices of scalings. Whilst they will all give the ultimately give the same answer, some choices are better than others depending on the situation under study. Dimensionless Model After substituting in the scalings from the previous section, we obtain the dimensionless form of the model given by: Solvent diffusion through SEI:\begin{align}\frac{\partial c}{\partial t} = \frac{\hat{V} R}{L} x \cdot \nabla c - \frac{1}{L}\nabla \cdot N, \quad N = - \frac{1}{L}D(c) \nabla c, \label{eqn:solvent-diffusion}\tag{1}\\N|_{x=0} = - R, \quad c|_{x=1} = 1 \label{bc:solvent-diffusion}\tag{2} \quadc|_{t=0} = 1; \end{align}Growth reaction:$$R = k c|_{x=0}; \label{eqn:reaction}\tag{3}$$SEI thickness:$$\frac{d L}{d t} = \hat{V} R, \quad L|_{t=0} = 1; \label{eqn:SEI-thickness}\tag{4}$$where the dimensionless parameters are given by$$ k = \frac{k^* L^*_0}{D^*(c^*_{\infty})}, \quad \hat{V} = \hat{V}^* c^*_{\infty}, \quad D(c) = \frac{D^*(c^*)}{D^*(c^*_{\infty})}. \label{parameters}\tag{5}$$In the above, the additional advective term in the diffusion equation arises due to our choice to scale the spatial coordinate $x^*$ with the time-dependent SEI layer thickness $L^*$. 
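For readers who want to see where the advective term comes from, here is a short chain-rule check (our own working, using only the scalings already defined above). Writing $c^* = c^*_{\infty} c(x, t)$ with the moving coordinate $x = x^*/L^*(t^*)$, a time derivative taken at fixed $x^*$ becomes$$\left.\frac{\partial c^*}{\partial t^*}\right|_{x^*} = c^*_{\infty}\left[\frac{D^*(c^*_{\infty})}{(L^*_0)^2}\left.\frac{\partial c}{\partial t}\right|_{x} - \frac{x}{L^*}\frac{d L^*}{d t^*}\left.\frac{\partial c}{\partial x}\right|_{t}\right].$$Substituting $\frac{d L^*}{d t^*} = \hat{V}^* R^*$ together with the scalings for $R^*$ and $L^*$ turns the second term into $-\frac{D^*(c^*_{\infty}) c^*_{\infty}}{(L^*_0)^2}\frac{\hat{V} R}{L}\, x \frac{\partial c}{\partial x}$, which, after dividing through by $D^*(c^*_{\infty}) c^*_{\infty}/(L^*_0)^2$ and moving it to the right hand side, gives the advective term $\frac{\hat{V} R}{L}\, x \cdot \nabla c$ in equation (1).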
Entering the Model into PyBaMM As always, we begin by importing pybamm and changing our working directory to the root of the pybamm folder. ###Code %pip install pybamm -q # install PyBaMM if it is not installed import pybamm import numpy as np import os os.chdir(pybamm.__path__[0]+'/..') ###Output _____no_output_____ ###Markdown A model is defined in six steps:1. Initialise model2. Define parameters and variables3. State governing equations4. State boundary conditions5. State initial conditions6. State output variablesWe shall proceed through each step to enter our simple SEI growth model. 1. Initialise model We first initialise the model using the `BaseModel` class. This sets up the required structure for our model. ###Code model = pybamm.BaseModel() ###Output _____no_output_____ ###Markdown 2. Define parameters and variables In our SEI model, we have two dimensionless parameters, $k$ and $\hat{V}$, and one dimensionless function $D(c)$, which are all given in terms of the dimensional parameters, see (5). In pybamm, inputs are dimensional, so we first state all the dimensional parameters. We then define the dimensionless parameters, which are expressed an non-dimensional groupings of dimensional parameters. To define the dimensional parameters, we use the `Parameter` object to create parameter symbols. Parameters which are functions are defined using `FunctionParameter` object and should be defined within a python function as shown. ###Code # dimensional parameters k_dim = pybamm.Parameter("Reaction rate constant") L_0_dim = pybamm.Parameter("Initial thickness") V_hat_dim = pybamm.Parameter("Partial molar volume") c_inf_dim = pybamm.Parameter("Bulk electrolyte solvent concentration") def D_dim(cc): return pybamm.FunctionParameter("Diffusivity", {"Solvent concentration [mol.m-3]": cc}) # dimensionless parameters k = k_dim * L_0_dim / D_dim(c_inf_dim) V_hat = V_hat_dim * c_inf_dim def D(cc): c_dim = c_inf_dim * cc return D_dim(c_dim) / D_dim(c_inf_dim) ###Output _____no_output_____ ###Markdown We now define the dimensionless variables in our model. Since these are the variables we solve for directly, we do not need to write them in terms of the dimensional variables. We simply use `SpatialVariable` and `Variable` to create the required symbols: ###Code x = pybamm.SpatialVariable("x", domain="SEI layer", coord_sys="cartesian") c = pybamm.Variable("Solvent concentration", domain="SEI layer") L = pybamm.Variable("SEI thickness") ###Output _____no_output_____ ###Markdown 3. State governing equations We can now use the symbols we have created for our parameters and variables to write out our governing equations. Note that before we use the reaction flux and solvent flux, we must derive new symbols for them from the defined parameter and variable symbols. Each governing equation must also be stated in the explicit form `d/dt = rhs` since pybamm only stores the right hand side (rhs) and assumes that the left hand side is the time derivative. The governing equations are then simply ###Code # SEI reaction flux R = k * pybamm.BoundaryValue(c, "left") # solvent concentration equation N = - (1 / L) * D(c) * pybamm.grad(c) dcdt = (V_hat * R) * pybamm.inner(x / L, pybamm.grad(c)) - (1 / L) * pybamm.div(N) # SEI thickness equation dLdt = V_hat * R ###Output _____no_output_____ ###Markdown Once we have stated the equations, we can add them to the `model.rhs` dictionary. 
This is a dictionary whose keys are the variables being solved for, and whose values correspond right hand sides of the governing equations for each variable. ###Code model.rhs = {c: dcdt, L: dLdt} ###Output _____no_output_____ ###Markdown 4. State boundary conditions We only have boundary conditions on the solvent concentration equation. We must state where a condition is Neumann (on the gradient) or Dirichlet (on the variable itself). The boundary condition on the electrode-SEI (x=0) boundary is: $$ N|_{x=0} = - R, \quad N|_{x=0} = - \frac{1}{L} D(c|_{x=0} )\nabla c|_{x=0}$$which is a Neumann condition. To implement this boundary condition in pybamm, we must first rearrange the equation so that the gradient of the concentration, $\nabla c|_{x=0}$, is the subject. Therefore we have$$ \nabla c|_{x=0} = \frac{L R}{D(c|_{x=0} )}$$which we enter into pybamm as ###Code # electrode-SEI boundary condition (x=0) (lbc = left boundary condition) D_left = pybamm.BoundaryValue(D(c), "left") # pybamm requires BoundaryValue(D(c)) and not D(BoundaryValue(c)) grad_c_left = L * R / D_left ###Output _____no_output_____ ###Markdown On the SEI-electrolyte boundary (x=1), we have the boundary condition$$ c|_{x=1} = 1$$ which is a Dirichlet condition and is just entered as ###Code c_right = pybamm.Scalar(1) ###Output _____no_output_____ ###Markdown We now load these boundary conditions into the `model.boundary_conditions` dictionary in the following way, being careful to state the type of boundary condition: ###Code model.boundary_conditions = {c: {"left": (grad_c_left, "Neumann"), "right": (c_right, "Dirichlet")}} ###Output _____no_output_____ ###Markdown 5. State initial conditions There are two initial conditions in our model:$$ c|_{t=0} = 1, \quad L|_{t=0} = 1$$ which are simply written in pybamm as ###Code c_init = pybamm.Scalar(1) L_init = pybamm.Scalar(1) ###Output _____no_output_____ ###Markdown and then included into the `model.initial_conditions` dictionary: ###Code model.initial_conditions = {c: c_init, L: L_init} ###Output _____no_output_____ ###Markdown 6. State output variables We already have everything required in model for the model to be used and solved, but we have not yet stated what we actually want to output from the model. PyBaMM allows users to output any combination of symbols as an output variable therefore allowing the user the flexibility to output important quantities without further tedious postprocessing steps. Some useful outputs for this simple model are:- the SEI thickness- the SEI growth rate- the solvent concentrationThese are added to the model by adding entries to the `model.variables` dictionary ###Code model.variables = {"SEI thickness": L, "SEI growth rate": dLdt, "Solvent concentration": c} ###Output _____no_output_____ ###Markdown We can also output the dimensional versions of these variables by multiplying by the scalings used to non-dimensionalise. By convention, we recommend including the units in the output variables name so that they do not overwrite the dimensionless output variables. To add new entries to the dictionary we used the method `.update()`. ###Code L_dim = L_0_dim * L dLdt_dim = (D_dim(c_inf_dim) / L_0_dim ) * dLdt c_dim = c_inf_dim * c model.variables.update({ "SEI thickness [m]": L_dim, "SEI growth rate [m/s]": dLdt_dim, "Solvent concentration [mols/m^3]": c_dim } ) ###Output _____no_output_____ ###Markdown The model is now fully defined and ready to be used. 
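Before moving on, it can be useful to confirm that everything has been registered on the model object. The cell below is an optional sanity check of our own (it is not required by PyBaMM); it only inspects the dictionaries filled in above and assumes the standard `name` attribute carried by PyBaMM symbols. ###Code
# Optional sanity check: each variable should have a governing equation and an
# initial condition, c should have boundary conditions, and the outputs should
# be registered under the names chosen above.
print("rhs:", [var.name for var in model.rhs])
print("initial conditions:", [var.name for var in model.initial_conditions])
print("boundary conditions:", [var.name for var in model.boundary_conditions])
print("output variables:", list(model.variables.keys()))
###Output _____no_output_____ ###Markdown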
If you plan on reusing the model several times, you can additionally set model defaults which may include: a default geometry to run the model on, a default set of parameter values, a default solver, etc. Using the Model The model will now behave in the same way as any of the inbuilt PyBaMM models. However, to demonstrate that the model works we display the steps involved in solving the model but we will not go into details within this notebook. ###Code # define geometry geometry = pybamm.Geometry( {"SEI layer": {x: {"min": pybamm.Scalar(0), "max": pybamm.Scalar(1)}}} ) def Diffusivity(cc): return cc * 10**(-5) # parameter values (not physically based, for example only!) param = pybamm.ParameterValues( { "Reaction rate constant": 20, "Initial thickness": 1e-6, "Partial molar volume": 10, "Bulk electrolyte solvent concentration": 1, "Diffusivity": Diffusivity, } ) # process model and geometry param.process_model(model) param.process_geometry(geometry) # mesh and discretise submesh_types = {"SEI layer": pybamm.Uniform1DSubMesh} var_pts = {x: 100} mesh = pybamm.Mesh(geometry, submesh_types, var_pts) spatial_methods = {"SEI layer": pybamm.FiniteVolume()} disc = pybamm.Discretisation(mesh, spatial_methods) disc.process_model(model) # solve solver = pybamm.ScipySolver() t = np.linspace(0, 100, 100) solution = solver.solve(model, t) # Extract output variables L_out = solution["SEI thickness"] c_out = solution["Solvent concentration"] x = np.linspace(0, 1, 100) ###Output 2020-05-30 11:30:30,931 - [WARNING] processed_variable.get_spatial_scale(497): No scale set for spatial variable x. Using default of 1 [m]. ###Markdown Using these outputs, we can now plot the SEI thickness as a function of time and also the solvent concentration profile within the SEI. We use a slider to plot the concentration profile at different times. ###Code import matplotlib.pyplot as plt def plot(t): f, (ax1, ax2) = plt.subplots(1, 2 ,figsize=(10,5)) ax1.plot(solution.t, L_out(solution.t)) ax1.plot([t], [L_out(t)], 'r.') plot_c, = ax2.plot(x * L_out(t), c_out(t, x)) ax1.set_ylabel('SEI thickness') ax1.set_xlabel('t') ax2.set_ylabel('Solvent concentration') ax2.set_xlabel('x') ax2.set_ylim(0, 1.1) ax2.set_xlim(0, x[-1]*L_out(solution.t[-1])) plt.show() import ipywidgets as widgets widgets.interact(plot, t=widgets.FloatSlider(min=0,max=solution.t[-1],step=0.1,value=0)); ###Output _____no_output_____ ###Markdown Creating a Simple Model for SEI GrowthBefore adding a new model, please read the [contribution guidelines](https://github.com/pybamm-team/PyBaMM/blob/master/CONTRIBUTING.md) In this notebook, we will run through the steps involved in creating a new model within pybamm. We will then solve and plot the outputs of the model. We have chosen to implement a very simple model of SEI growth. We first give a brief derivation of the model and discuss how to nondimensionalise the model so that we can show the full process of model conception to solution within a single notebook. Note: if you run the entire notebook and then try to evaluate the earlier cells, you will likely receive an error. This is because the state of objects is mutated as it is passed through various stages of processing. In this case, we recommend that you restart the Kernel and then evaluate cells in turn through the notebook. 
A Simple Model of Solid Electrolyte Interphase (SEI) Growth The SEI is a porous layer that forms on the surfaces of negative electrode particles from the products of electrochemical reactions which consume lithium and electrolyte solvents. In the first few cycles of use, a lithium-ion battery loses a large amount of capacity; this is generally attributed to lithium being consumed to produce SEI. However, after a few cycles, the rate of capacity loss slows at a rate often (but not always) reported to scale with the square root of time. SEI growth is therefore often considered to be limited in some way by a diffusion process. Dimensional Model We shall first state our model in dimensional form, but to enter the model in pybamm, we strongly recommend converting models into dimensionless form. The main reason for this is that dimensionless models are typically better conditioned than dimensional models and so several digits of accuracy can be gained. To distinguish between the dimensional and dimensionless models, we shall always employ a superscript $*$ on dimensional variables. ![SEI.png](SEI.png "SEI Model Schematic") In our simple SEI model, we consider a one-dimensional SEI which extends from the surface of a planar negative electrode at $x^*=0$ until $x^*=L^*$, where $L^*$ is the thickness of the SEI. Since the SEI is porous, there is some electrolyte within the region $x^*\in[0, L^*]$ and therefore some concentration of solvent, $c^*$. Within the porous SEI, the solvent is transported via a diffusion process according to:$$\frac{\partial c^*}{\partial t^*} = - \nabla^* \cdot N^*, \quad N^* = - D^*(c^*) \nabla^* c^* \label{dim:eqn:solvent-diffusion}\tag{1}\\$$where $t^*$ is the time, $N^*$ is the solvent flux, and $D^*(c^*)$ is the effective solvent diffusivity (a function of the solvent concentration).On the electrode-SEI surface ($x^*=0$) the solvent is consumed by the SEI growth reaction, $R^*$. We assume that diffusion of solvent in the bulk electrolyte ($x^*>L^*$) is fast so that on the SEI-electrolyte surface ($x^*=L^*$) the concentration of solvent is fixed at the value $c^*_{\infty}$. Therefore, the boundary conditions are$$ N^*|_{x^*=0} = - R^*, \quad c^*|_{x^*=L^*} = c^*_{\infty},$$We also assume that the concentration of solvent within the SEI is initially uniform and equal to the bulk electrolyte solvent concentration, so that the initial condition is$$ c^*|_{t^*=0} = c^*_{\infty}$$Since the SEI is growing, we require an additional equation for the SEI thickness. The thickness of the SEI grows at a rate proportional to the SEI growth reaction $R^*$, where the constant of proportionality is the partial molar volume of the reaction products, $\hat{V}^*$. We also assume that the SEI is initially of thickness $L^*_0$. Therefore, we have$$ \frac{d L^*}{d t^*} = \hat{V}^* R^*, \quad L^*|_{t^*=0} = L^*_0$$Finally, we assume for the sake of simplicity that the SEI growth reaction is irreversible and that the potential difference across the SEI is constant. The reaction is also assumed to be proportional to the concentration of solvent at the electrode-SEI surface ($x^*=0$). Therefore, the reaction flux is given by$$ R^* = k^* c^*|_{x^*=0}$$where $k^*$ is the reaction rate constant (which is in general dependent upon the potential difference across the SEI). Non-dimensionalisation To convert the model into dimensionless form, we scale the dimensional variables and dimensional functions. 
For this model, we choose to scale $x^*$ by the current SEI thickness, the current SEI thickness by the initial SEI thickness, solvent concentration with the bulk electrolyte solvent concentration, and the solvent diffusion with the solvent diffusion in the electrolyte. We then use these scalings to infer the scaling for the solvent flux. Therefore, we have$$x^* = L^* x, \quad L^*= L^*_0 L \quad c^* = c^*_{\infty} c, \quad D^*(c^*) = D^*(c^*_{\infty}) D(c), \quad N^* = \frac{D^*(c^*_{\infty}) c^*_{\infty}}{L^*_0}N.$$We also choose to scale time by the solvent diffusion timescale so that $$t^* = \frac{(L^*_0)^2}{D^*(c^*_{\infty})}t.$$Finally, we choose to scale the reaction flux in the same way as the solvent flux so that we have$$ R^* = \frac{D^*(c^*_{\infty}) c^*_{\infty}}{L^*_0} R.$$We note that there are multiple possible choices of scalings. Whilst they will all give the ultimately give the same answer, some choices are better than others depending on the situation under study. Dimensionless Model After substituting in the scalings from the previous section, we obtain the dimensionless form of the model given by: Solvent diffusion through SEI:\begin{align}\frac{\partial c}{\partial t} = \frac{\hat{V} R}{L} x \cdot \nabla c - \frac{1}{L}\nabla \cdot N, \quad N = - \frac{1}{L}D(c) \nabla c, \label{eqn:solvent-diffusion}\tag{1}\\N|_{x=0} = - R, \quad c|_{x=1} = 1 \label{bc:solvent-diffusion}\tag{2} \quadc|_{t=0} = 1; \end{align}Growth reaction:$$R = k c|_{x=0}; \label{eqn:reaction}\tag{3}$$SEI thickness:$$\frac{d L}{d t} = \hat{V} R, \quad L|_{t=0} = 1; \label{eqn:SEI-thickness}\tag{4}$$where the dimensionless parameters are given by$$ k = \frac{k^* L^*_0}{D^*(c^*_{\infty})}, \quad \hat{V} = \hat{V}^* c^*_{\infty}, \quad D(c) = \frac{D^*(c^*)}{D^*(c^*_{\infty})}. \label{parameters}\tag{5}$$In the above, the additional advective term in the diffusion equation arises due to our choice to scale the spatial coordinate $x^*$ with the time-dependent SEI layer thickness $L^*$. Entering the Model into PyBaMM As always, we begin by importing pybamm and changing our working directory to the root of the pybamm folder. ###Code import pybamm import numpy as np import os os.chdir(pybamm.__path__[0]+'/..') ###Output _____no_output_____ ###Markdown A model is defined in six steps:1. Initialise model2. Define parameters and variables3. State governing equations4. State boundary conditions5. State initial conditions6. State output variablesWe shall proceed through each step to enter our simple SEI growth model. 1. Initialise model We first initialise the model using the `BaseModel` class. This sets up the required structure for our model. ###Code model = pybamm.BaseModel() ###Output _____no_output_____ ###Markdown 2. Define parameters and variables In our SEI model, we have two dimensionless parameters, $k$ and $\hat{V}$, and one dimensionless function $D(c)$, which are all given in terms of the dimensional parameters, see (5). In pybamm, inputs are dimensional, so we first state all the dimensional parameters. We then define the dimensionless parameters, which are expressed an non-dimensional groupings of dimensional parameters. To define the dimensional parameters, we use the `Parameter` object to create parameter symbols. Parameters which are functions are defined using `FunctionParameter` object and should be defined within a python function as shown. 
###Code # dimensional parameters k_dim = pybamm.Parameter("Reaction rate constant") L_0_dim = pybamm.Parameter("Initial thickness") V_hat_dim = pybamm.Parameter("Partial molar volume") c_inf_dim = pybamm.Parameter("Bulk electrolyte solvent concentration") def D_dim(cc): return pybamm.FunctionParameter("Diffusivity", {"Solvent concentration [mol.m-3]": cc}) # dimensionless parameters k = k_dim * L_0_dim / D_dim(c_inf_dim) V_hat = V_hat_dim * c_inf_dim def D(cc): c_dim = c_inf_dim * cc return D_dim(c_dim) / D_dim(c_inf_dim) ###Output _____no_output_____ ###Markdown We now define the dimensionless variables in our model. Since these are the variables we solve for directly, we do not need to write them in terms of the dimensional variables. We simply use `SpatialVariable` and `Variable` to create the required symbols: ###Code x = pybamm.SpatialVariable("x", domain="SEI layer", coord_sys="cartesian") c = pybamm.Variable("Solvent concentration", domain="SEI layer") L = pybamm.Variable("SEI thickness") ###Output _____no_output_____ ###Markdown 3. State governing equations We can now use the symbols we have created for our parameters and variables to write out our governing equations. Note that before we use the reaction flux and solvent flux, we must derive new symbols for them from the defined parameter and variable symbols. Each governing equation must also be stated in the explicit form `d/dt = rhs` since pybamm only stores the right hand side (rhs) and assumes that the left hand side is the time derivative. The governing equations are then simply ###Code # SEI reaction flux R = k * pybamm.BoundaryValue(c, "left") # solvent concentration equation N = - (1 / L) * D(c) * pybamm.grad(c) dcdt = (V_hat * R) * pybamm.inner(x / L, pybamm.grad(c)) - (1 / L) * pybamm.div(N) # SEI thickness equation dLdt = V_hat * R ###Output _____no_output_____ ###Markdown Once we have stated the equations, we can add them to the `model.rhs` dictionary. This is a dictionary whose keys are the variables being solved for, and whose values correspond right hand sides of the governing equations for each variable. ###Code model.rhs = {c: dcdt, L: dLdt} ###Output _____no_output_____ ###Markdown 4. State boundary conditions We only have boundary conditions on the solvent concentration equation. We must state where a condition is Neumann (on the gradient) or Dirichlet (on the variable itself). The boundary condition on the electrode-SEI (x=0) boundary is: $$ N|_{x=0} = - R, \quad N|_{x=0} = - \frac{1}{L} D(c|_{x=0} )\nabla c|_{x=0}$$which is a Neumann condition. To implement this boundary condition in pybamm, we must first rearrange the equation so that the gradient of the concentration, $\nabla c|_{x=0}$, is the subject. 
Therefore we have$$ \nabla c|_{x=0} = \frac{L R}{D(c|_{x=0} )}$$which we enter into pybamm as ###Code # electrode-SEI boundary condition (x=0) (lbc = left boundary condition) D_left = pybamm.BoundaryValue(D(c), "left") # pybamm requires BoundaryValue(D(c)) and not D(BoundaryValue(c)) grad_c_left = L * R / D_left ###Output _____no_output_____ ###Markdown On the SEI-electrolyte boundary (x=1), we have the boundary condition$$ c|_{x=1} = 1$$ which is a Dirichlet condition and is just entered as ###Code c_right = pybamm.Scalar(1) ###Output _____no_output_____ ###Markdown We now load these boundary conditions into the `model.boundary_conditions` dictionary in the following way, being careful to state the type of boundary condition: ###Code model.boundary_conditions = {c: {"left": (grad_c_left, "Neumann"), "right": (c_right, "Dirichlet")}} ###Output _____no_output_____ ###Markdown 5. State initial conditions There are two initial conditions in our model:$$ c|_{t=0} = 1, \quad L|_{t=0} = 1$$ which are simply written in pybamm as ###Code c_init = pybamm.Scalar(1) L_init = pybamm.Scalar(1) ###Output _____no_output_____ ###Markdown and then included into the `model.initial_conditions` dictionary: ###Code model.initial_conditions = {c: c_init, L: L_init} ###Output _____no_output_____ ###Markdown 6. State output variables We already have everything required in model for the model to be used and solved, but we have not yet stated what we actually want to output from the model. PyBaMM allows users to output any combination of symbols as an output variable therefore allowing the user the flexibility to output important quantities without further tedious postprocessing steps. Some useful outputs for this simple model are:- the SEI thickness- the SEI growth rate- the solvent concentrationThese are added to the model by adding entries to the `model.variables` dictionary ###Code model.variables = {"SEI thickness": L, "SEI growth rate": dLdt, "Solvent concentration": c} ###Output _____no_output_____ ###Markdown We can also output the dimensional versions of these variables by multiplying by the scalings used to non-dimensionalise. By convention, we recommend including the units in the output variables name so that they do not overwrite the dimensionless output variables. To add new entries to the dictionary we used the method `.update()`. ###Code L_dim = L_0_dim * L dLdt_dim = (D_dim(c_inf_dim) / L_0_dim ) * dLdt c_dim = c_inf_dim * c model.variables.update({ "SEI thickness [m]": L_dim, "SEI growth rate [m/s]": dLdt_dim, "Solvent concentration [mols/m^3]": c_dim } ) ###Output _____no_output_____ ###Markdown The model is now fully defined and ready to be used. If you plan on reusing the model several times, you can additionally set model defaults which may include: a default geometry to run the model on, a default set of parameter values, a default solver, etc. Using the Model The model will now behave in the same way as any of the inbuilt PyBaMM models. However, to demonstrate that the model works we display the steps involved in solving the model but we will not go into details within this notebook. ###Code # define geometry geometry = pybamm.Geometry( {"SEI layer": {x: {"min": pybamm.Scalar(0), "max": pybamm.Scalar(1)}}} ) def Diffusivity(cc): return cc * 10**(-5) # parameter values (not physically based, for example only!) 
param = pybamm.ParameterValues( { "Reaction rate constant": 20, "Initial thickness": 1e-6, "Partial molar volume": 10, "Bulk electrolyte solvent concentration": 1, "Diffusivity": Diffusivity, } ) # process model and geometry param.process_model(model) param.process_geometry(geometry) # mesh and discretise submesh_types = {"SEI layer": pybamm.Uniform1DSubMesh} var_pts = {x: 100} mesh = pybamm.Mesh(geometry, submesh_types, var_pts) spatial_methods = {"SEI layer": pybamm.FiniteVolume()} disc = pybamm.Discretisation(mesh, spatial_methods) disc.process_model(model) # solve solver = pybamm.ScipySolver() t = np.linspace(0, 100, 100) solution = solver.solve(model, t) # Extract output variables L_out = solution["SEI thickness"] c_out = solution["Solvent concentration"] x = np.linspace(0, 1, 100) ###Output 2020-05-30 11:30:30,931 - [WARNING] processed_variable.get_spatial_scale(497): No scale set for spatial variable x. Using default of 1 [m]. ###Markdown Using these outputs, we can now plot the SEI thickness as a function of time and also the solvent concentration profile within the SEI. We use a slider to plot the concentration profile at different times. ###Code import matplotlib.pyplot as plt def plot(t): f, (ax1, ax2) = plt.subplots(1, 2 ,figsize=(10,5)) ax1.plot(solution.t, L_out(solution.t)) ax1.plot([t], [L_out(t)], 'r.') plot_c, = ax2.plot(x * L_out(t), c_out(t, x)) ax1.set_ylabel('SEI thickness') ax1.set_xlabel('t') ax2.set_ylabel('Solvent concentration') ax2.set_xlabel('x') ax2.set_ylim(0, 1.1) ax2.set_xlim(0, x[-1]*L_out(solution.t[-1])) plt.show() import ipywidgets as widgets widgets.interact(plot, t=widgets.FloatSlider(min=0,max=solution.t[-1],step=0.1,value=0)); ###Output _____no_output_____ ###Markdown Creating a Simple Model for SEI GrowthBefore adding a new model, please read the [contribution guidelines](https://github.com/pybamm-team/PyBaMM/blob/master/CONTRIBUTING.md) In this notebook, we will run through the steps involved in creating a new model within pybamm. We will then solve and plot the outputs of the model. We have chosen to implement a very simple model of SEI growth. We first give a brief derivation of the model and discuss how to nondimensionalise the model so that we can show the full process of model conception to solution within a single notebook. Note: if you run the entire notebook and then try to evaluate the earlier cells, you will likely receive an error. This is because the state of objects is mutated as it is passed through various stages of processing. In this case, we recommend that you restart the Kernel and then evaluate cells in turn through the notebook. A Simple Model of Solid Electrolyte Interphase (SEI) Growth The SEI is a porous layer that forms on the surfaces of negative electrode particles from the products of electrochemical reactions which consume lithium and electrolyte solvents. In the first few cycles of use, a lithium-ion battery loses a large amount of capacity; this is generally attributed to lithium being consumed to produce SEI. However, after a few cycles, the rate of capacity loss slows at a rate often (but not always) reported to scale with the square root of time. SEI growth is therefore often considered to be limited in some way by a diffusion process. Dimensional Model We shall first state our model in dimensional form, but to enter the model in pybamm, we strongly recommend converting models into dimensionless form. 
The main reason for this is that dimensionless models are typically better conditioned than dimensional models and so several digits of accuracy can be gained. To distinguish between the dimensional and dimensionless models, we shall always employ a superscript $*$ on dimensional variables. ![SEI.png](SEI.png "SEI Model Schematic") In our simple SEI model, we consider a one-dimensional SEI which extends from the surface of a planar negative electrode at $x^*=0$ until $x^*=L^*$, where $L^*$ is the thickness of the SEI. Since the SEI is porous, there is some electrolyte within the region $x^*\in[0, L^*]$ and therefore some concentration of solvent, $c^*$. Within the porous SEI, the solvent is transported via a diffusion process according to:$$\frac{\partial c^*}{\partial t^*} = - \nabla^* \cdot N^*, \quad N^* = - D^*(c^*) \nabla^* c^* \label{dim:eqn:solvent-diffusion}\tag{1}\\$$where $t^*$ is the time, $N^*$ is the solvent flux, and $D^*(c^*)$ is the effective solvent diffusivity (a function of the solvent concentration).On the electrode-SEI surface ($x^*=0$) the solvent is consumed by the SEI growth reaction, $R^*$. We assume that diffusion of solvent in the bulk electrolyte ($x^*>L^*$) is fast so that on the SEI-electrolyte surface ($x^*=L^*$) the concentration of solvent is fixed at the value $c^*_{\infty}$. Therefore, the boundary conditions are$$ N^*|_{x^*=0} = - R^*, \quad c^*|_{x^*=L^*} = c^*_{\infty},$$We also assume that the concentration of solvent within the SEI is initially uniform and equal to the bulk electrolyte solvent concentration, so that the initial condition is$$ c^*|_{t^*=0} = c^*_{\infty}$$Since the SEI is growing, we require an additional equation for the SEI thickness. The thickness of the SEI grows at a rate proportional to the SEI growth reaction $R^*$, where the constant of proportionality is the partial molar volume of the reaction products, $\hat{V}^*$. We also assume that the SEI is initially of thickness $L^*_0$. Therefore, we have$$ \frac{d L^*}{d t^*} = \hat{V}^* R^*, \quad L^*|_{t^*=0} = L^*_0$$Finally, we assume for the sake of simplicity that the SEI growth reaction is irreversible and that the potential difference across the SEI is constant. The reaction is also assumed to be proportional to the concentration of solvent at the electrode-SEI surface ($x^*=0$). Therefore, the reaction flux is given by$$ R^* = k^* c^*|_{x^*=0}$$where $k^*$ is the reaction rate constant (which is in general dependent upon the potential difference across the SEI). Non-dimensionalisation To convert the model into dimensionless form, we scale the dimensional variables and dimensional functions. For this model, we choose to scale $x^*$ by the current SEI thickness, the current SEI thickness by the initial SEI thickness, solvent concentration with the bulk electrolyte solvent concentration, and the solvent diffusion with the solvent diffusion in the electrolyte. We then use these scalings to infer the scaling for the solvent flux. Therefore, we have$$x^* = L^* x, \quad L^*= L^*_0 L \quad c^* = c^*_{\infty} c, \quad D^*(c^*) = D^*(c^*_{\infty}) D(c), \quad N^* = \frac{D^*(c^*_{\infty}) c^*_{\infty}}{L^*_0}N.$$We also choose to scale time by the solvent diffusion timescale so that $$t^* = \frac{(L^*_0)^2}{D^*(c^*_{\infty})}t.$$Finally, we choose to scale the reaction flux in the same way as the solvent flux so that we have$$ R^* = \frac{D^*(c^*_{\infty}) c^*_{\infty}}{L^*_0} R.$$We note that there are multiple possible choices of scalings. 
Whilst they will all give the ultimately give the same answer, some choices are better than others depending on the situation under study. Dimensionless Model After substituting in the scalings from the previous section, we obtain the dimensionless form of the model given by: Solvent diffusion through SEI:\begin{align}\frac{\partial c}{\partial t} = \frac{\hat{V} R}{L} x \cdot \nabla c - \frac{1}{L}\nabla \cdot N, \quad N = - \frac{1}{L}D(c) \nabla c, \label{eqn:solvent-diffusion}\tag{1}\\N|_{x=0} = - R, \quad c|_{x=1} = 1 \label{bc:solvent-diffusion}\tag{2} \quadc|_{t=0} = 1; \end{align}Growth reaction:$$R = k c|_{x=0}; \label{eqn:reaction}\tag{3}$$SEI thickness:$$\frac{d L}{d t} = \hat{V} R, \quad L|_{t=0} = 1; \label{eqn:SEI-thickness}\tag{4}$$where the dimensionless parameters are given by$$ k = \frac{k^* L^*_0}{D^*(c^*_{\infty})}, \quad \hat{V} = \hat{V}^* c^*_{\infty}, \quad D(c) = \frac{D^*(c^*)}{D^*(c^*_{\infty})}. \label{parameters}\tag{5}$$In the above, the additional advective term in the diffusion equation arises due to our choice to scale the spatial coordinate $x^*$ with the time-dependent SEI layer thickness $L^*$. Entering the Model into PyBaMM As always, we begin by importing pybamm and changing our working directory to the root of the pybamm folder. ###Code %pip install pybamm -q # install PyBaMM if it is not installed import pybamm import numpy as np import os os.chdir(pybamm.__path__[0]+'/..') ###Output Note: you may need to restart the kernel to use updated packages. ###Markdown A model is defined in six steps:1. Initialise model2. Define parameters and variables3. State governing equations4. State boundary conditions5. State initial conditions6. State output variablesWe shall proceed through each step to enter our simple SEI growth model. 1. Initialise model We first initialise the model using the `BaseModel` class. This sets up the required structure for our model. ###Code model = pybamm.BaseModel() ###Output _____no_output_____ ###Markdown 2. Define parameters and variables In our SEI model, we have two dimensionless parameters, $k$ and $\hat{V}$, and one dimensionless function $D(c)$, which are all given in terms of the dimensional parameters, see (5). In pybamm, inputs are dimensional, so we first state all the dimensional parameters. We then define the dimensionless parameters, which are expressed an non-dimensional groupings of dimensional parameters. To define the dimensional parameters, we use the `Parameter` object to create parameter symbols. Parameters which are functions are defined using `FunctionParameter` object and should be defined within a python function as shown. ###Code # dimensional parameters k_dim = pybamm.Parameter("Reaction rate constant") L_0_dim = pybamm.Parameter("Initial thickness") V_hat_dim = pybamm.Parameter("Partial molar volume") c_inf_dim = pybamm.Parameter("Bulk electrolyte solvent concentration") def D_dim(cc): return pybamm.FunctionParameter("Diffusivity", {"Solvent concentration [mol.m-3]": cc}) # dimensionless parameters k = k_dim * L_0_dim / D_dim(c_inf_dim) V_hat = V_hat_dim * c_inf_dim def D(cc): c_dim = c_inf_dim * cc return D_dim(c_dim) / D_dim(c_inf_dim) ###Output _____no_output_____ ###Markdown We now define the dimensionless variables in our model. Since these are the variables we solve for directly, we do not need to write them in terms of the dimensional variables. 
We simply use `SpatialVariable` and `Variable` to create the required symbols: ###Code x = pybamm.SpatialVariable("x", domain="SEI layer", coord_sys="cartesian") c = pybamm.Variable("Solvent concentration", domain="SEI layer") L = pybamm.Variable("SEI thickness") ###Output _____no_output_____ ###Markdown 3. State governing equations We can now use the symbols we have created for our parameters and variables to write out our governing equations. Note that before we use the reaction flux and solvent flux, we must derive new symbols for them from the defined parameter and variable symbols. Each governing equation must also be stated in the explicit form `d/dt = rhs` since pybamm only stores the right hand side (rhs) and assumes that the left hand side is the time derivative. The governing equations are then simply ###Code # SEI reaction flux R = k * pybamm.BoundaryValue(c, "left") # solvent concentration equation N = - (1 / L) * D(c) * pybamm.grad(c) dcdt = (V_hat * R) * pybamm.inner(x / L, pybamm.grad(c)) - (1 / L) * pybamm.div(N) # SEI thickness equation dLdt = V_hat * R ###Output _____no_output_____ ###Markdown Once we have stated the equations, we can add them to the `model.rhs` dictionary. This is a dictionary whose keys are the variables being solved for, and whose values correspond right hand sides of the governing equations for each variable. ###Code model.rhs = {c: dcdt, L: dLdt} ###Output _____no_output_____ ###Markdown 4. State boundary conditions We only have boundary conditions on the solvent concentration equation. We must state where a condition is Neumann (on the gradient) or Dirichlet (on the variable itself). The boundary condition on the electrode-SEI (x=0) boundary is: $$ N|_{x=0} = - R, \quad N|_{x=0} = - \frac{1}{L} D(c|_{x=0} )\nabla c|_{x=0}$$which is a Neumann condition. To implement this boundary condition in pybamm, we must first rearrange the equation so that the gradient of the concentration, $\nabla c|_{x=0}$, is the subject. Therefore we have$$ \nabla c|_{x=0} = \frac{L R}{D(c|_{x=0} )}$$which we enter into pybamm as ###Code # electrode-SEI boundary condition (x=0) (lbc = left boundary condition) D_left = pybamm.BoundaryValue(D(c), "left") # pybamm requires BoundaryValue(D(c)) and not D(BoundaryValue(c)) grad_c_left = L * R / D_left ###Output _____no_output_____ ###Markdown On the SEI-electrolyte boundary (x=1), we have the boundary condition$$ c|_{x=1} = 1$$ which is a Dirichlet condition and is just entered as ###Code c_right = pybamm.Scalar(1) ###Output _____no_output_____ ###Markdown We now load these boundary conditions into the `model.boundary_conditions` dictionary in the following way, being careful to state the type of boundary condition: ###Code model.boundary_conditions = {c: {"left": (grad_c_left, "Neumann"), "right": (c_right, "Dirichlet")}} ###Output _____no_output_____ ###Markdown 5. State initial conditions There are two initial conditions in our model:$$ c|_{t=0} = 1, \quad L|_{t=0} = 1$$ which are simply written in pybamm as ###Code c_init = pybamm.Scalar(1) L_init = pybamm.Scalar(1) ###Output _____no_output_____ ###Markdown and then included into the `model.initial_conditions` dictionary: ###Code model.initial_conditions = {c: c_init, L: L_init} ###Output _____no_output_____ ###Markdown 6. State output variables We already have everything required in model for the model to be used and solved, but we have not yet stated what we actually want to output from the model. 
PyBaMM allows users to output any combination of symbols as an output variable therefore allowing the user the flexibility to output important quantities without further tedious postprocessing steps. Some useful outputs for this simple model are:- the SEI thickness- the SEI growth rate- the solvent concentrationThese are added to the model by adding entries to the `model.variables` dictionary ###Code model.variables = {"SEI thickness": L, "SEI growth rate": dLdt, "Solvent concentration": c} ###Output _____no_output_____ ###Markdown We can also output the dimensional versions of these variables by multiplying by the scalings used to non-dimensionalise. By convention, we recommend including the units in the output variables name so that they do not overwrite the dimensionless output variables. To add new entries to the dictionary we used the method `.update()`. ###Code L_dim = L_0_dim * L dLdt_dim = (D_dim(c_inf_dim) / L_0_dim ) * dLdt c_dim = c_inf_dim * c model.variables.update({ "SEI thickness [m]": L_dim, "SEI growth rate [m/s]": dLdt_dim, "Solvent concentration [mols/m^3]": c_dim } ) ###Output _____no_output_____ ###Markdown The model is now fully defined and ready to be used. If you plan on reusing the model several times, you can additionally set model defaults which may include: a default geometry to run the model on, a default set of parameter values, a default solver, etc. Using the Model The model will now behave in the same way as any of the inbuilt PyBaMM models. However, to demonstrate that the model works we display the steps involved in solving the model but we will not go into details within this notebook. ###Code # define geometry geometry = pybamm.Geometry( {"SEI layer": {x: {"min": pybamm.Scalar(0), "max": pybamm.Scalar(1)}}} ) def Diffusivity(cc): return cc * 10**(-5) # parameter values (not physically based, for example only!) param = pybamm.ParameterValues( { "Reaction rate constant": 20, "Initial thickness": 1e-6, "Partial molar volume": 10, "Bulk electrolyte solvent concentration": 1, "Diffusivity": Diffusivity, } ) # process model and geometry param.process_model(model) param.process_geometry(geometry) # mesh and discretise submesh_types = {"SEI layer": pybamm.Uniform1DSubMesh} var_pts = {x: 100} mesh = pybamm.Mesh(geometry, submesh_types, var_pts) spatial_methods = {"SEI layer": pybamm.FiniteVolume()} disc = pybamm.Discretisation(mesh, spatial_methods) disc.process_model(model) # solve solver = pybamm.ScipySolver() t = np.linspace(0, 100, 100) solution = solver.solve(model, t) # Extract output variables L_out = solution["SEI thickness"] c_out = solution["Solvent concentration"] x = np.linspace(0, 1, 100) ###Output 2021-01-24 19:29:11,759 - [WARNING] processed_variable.get_spatial_scale(518): No length scale set for SEI layer. Using default of 1 [m]. ###Markdown Using these outputs, we can now plot the SEI thickness as a function of time and also the solvent concentration profile within the SEI. We use a slider to plot the concentration profile at different times. 
###Code import matplotlib.pyplot as plt def plot(t): f, (ax1, ax2) = plt.subplots(1, 2 ,figsize=(10,5)) ax1.plot(solution.t, L_out(solution.t)) ax1.plot([t], [L_out(t)], 'r.') plot_c, = ax2.plot(x * L_out(t), c_out(t, x)) ax1.set_ylabel('SEI thickness') ax1.set_xlabel('t') ax2.set_ylabel('Solvent concentration') ax2.set_xlabel('x') ax2.set_ylim(0, 1.1) ax2.set_xlim(0, x[-1]*L_out(solution.t[-1])) plt.show() import ipywidgets as widgets widgets.interact(plot, t=widgets.FloatSlider(min=0,max=solution.t[-1],step=0.1,value=0)); ###Output _____no_output_____ ###Markdown Formally adding your model The purpose of this notebook has been to go through the steps involved in getting a simple model working within PyBaMM. However, if you plan on reusing your model and want greater flexibility then we recommend that you create a new class for your model. We have set out instructions on how to do this in the "Adding a Model" tutorial in the documentation. ReferencesThe relevant papers for this notebook are: ###Code pybamm.print_citations() ###Output [1] Joel A. E. Andersson, Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. CasADi – A software framework for nonlinear optimization and optimal control. Mathematical Programming Computation, 11(1):1–36, 2019. doi:10.1007/s12532-018-0139-4. [2] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, and others. Array programming with NumPy. Nature, 585(7825):357–362, 2020. doi:10.1038/s41586-020-2649-2. [3] Valentin Sulzer, Scott G. Marquis, Robert Timms, Martin Robinson, and S. Jon Chapman. Python Battery Mathematical Modelling (PyBaMM). ECSarXiv. February, 2020. doi:10.1149/osf.io/67ckj. [4] Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, and others. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods, 17(3):261–272, 2020. doi:10.1038/s41592-019-0686-2. ###Markdown Creating a Simple Model for SEI GrowthBefore adding a new model, please read the [contribution guidelines](https://github.com/pybamm-team/PyBaMM/blob/develop/CONTRIBUTING.md) In this notebook, we will run through the steps involved in creating a new model within pybamm. We will then solve and plot the outputs of the model. We have chosen to implement a very simple model of SEI growth. We first give a brief derivation of the model and discuss how to nondimensionalise the model so that we can show the full process of model conception to solution within a single notebook. Note: if you run the entire notebook and then try to evaluate the earlier cells, you will likely receive an error. This is because the state of objects is mutated as it is passed through various stages of processing. In this case, we recommend that you restart the Kernel and then evaluate cells in turn through the notebook. A Simple Model of Solid Electrolyte Interphase (SEI) Growth The SEI is a porous layer that forms on the surfaces of negative electrode particles from the products of electrochemical reactions which consume lithium and electrolyte solvents. In the first few cycles of use, a lithium-ion battery loses a large amount of capacity; this is generally attributed to lithium being consumed to produce SEI. However, after a few cycles, the rate of capacity loss slows at a rate often (but not always) reported to scale with the square root of time. 
SEI growth is therefore often considered to be limited in some way by a diffusion process. Dimensional Model We shall first state our model in dimensional form, but to enter the model in pybamm, we strongly recommend converting models into dimensionless form. The main reason for this is that dimensionless models are typically better conditioned than dimensional models and so several digits of accuracy can be gained. To distinguish between the dimensional and dimensionless models, we shall always employ a superscript $*$ on dimensional variables. ![SEI.png](SEI.png "SEI Model Schematic") In our simple SEI model, we consider a one-dimensional SEI which extends from the surface of a planar negative electrode at $x^*=0$ until $x^*=L^*$, where $L^*$ is the thickness of the SEI. Since the SEI is porous, there is some electrolyte within the region $x^*\in[0, L^*]$ and therefore some concentration of solvent, $c^*$. Within the porous SEI, the solvent is transported via a diffusion process according to:$$\frac{\partial c^*}{\partial t^*} = - \nabla^* \cdot N^*, \quad N^* = - D^*(c^*) \nabla^* c^* \label{dim:eqn:solvent-diffusion}\tag{1}\\$$where $t^*$ is the time, $N^*$ is the solvent flux, and $D^*(c^*)$ is the effective solvent diffusivity (a function of the solvent concentration).On the electrode-SEI surface ($x^*=0$) the solvent is consumed by the SEI growth reaction, $R^*$. We assume that diffusion of solvent in the bulk electrolyte ($x^*>L^*$) is fast so that on the SEI-electrolyte surface ($x^*=L^*$) the concentration of solvent is fixed at the value $c^*_{\infty}$. Therefore, the boundary conditions are$$ N^*|_{x^*=0} = - R^*, \quad c^*|_{x^*=L^*} = c^*_{\infty},$$We also assume that the concentration of solvent within the SEI is initially uniform and equal to the bulk electrolyte solvent concentration, so that the initial condition is$$ c^*|_{t^*=0} = c^*_{\infty}$$Since the SEI is growing, we require an additional equation for the SEI thickness. The thickness of the SEI grows at a rate proportional to the SEI growth reaction $R^*$, where the constant of proportionality is the partial molar volume of the reaction products, $\hat{V}^*$. We also assume that the SEI is initially of thickness $L^*_0$. Therefore, we have$$ \frac{d L^*}{d t^*} = \hat{V}^* R^*, \quad L^*|_{t^*=0} = L^*_0$$Finally, we assume for the sake of simplicity that the SEI growth reaction is irreversible and that the potential difference across the SEI is constant. The reaction is also assumed to be proportional to the concentration of solvent at the electrode-SEI surface ($x^*=0$). Therefore, the reaction flux is given by$$ R^* = k^* c^*|_{x^*=0}$$where $k^*$ is the reaction rate constant (which is in general dependent upon the potential difference across the SEI). Non-dimensionalisation To convert the model into dimensionless form, we scale the dimensional variables and dimensional functions. For this model, we choose to scale $x^*$ by the current SEI thickness, the current SEI thickness by the initial SEI thickness, solvent concentration with the bulk electrolyte solvent concentration, and the solvent diffusion with the solvent diffusion in the electrolyte. We then use these scalings to infer the scaling for the solvent flux. 
Therefore, we have$$x^* = L^* x, \quad L^*= L^*_0 L \quad c^* = c^*_{\infty} c, \quad D^*(c^*) = D^*(c^*_{\infty}) D(c), \quad N^* = \frac{D^*(c^*_{\infty}) c^*_{\infty}}{L^*_0}N.$$We also choose to scale time by the solvent diffusion timescale so that $$t^* = \frac{(L^*_0)^2}{D^*(c^*_{\infty})}t.$$Finally, we choose to scale the reaction flux in the same way as the solvent flux so that we have$$ R^* = \frac{D^*(c^*_{\infty}) c^*_{\infty}}{L^*_0} R.$$We note that there are multiple possible choices of scalings. Whilst they will all give the ultimately give the same answer, some choices are better than others depending on the situation under study. Dimensionless Model After substituting in the scalings from the previous section, we obtain the dimensionless form of the model given by: Solvent diffusion through SEI:\begin{align}\frac{\partial c}{\partial t} = \frac{\hat{V} R}{L} x \cdot \nabla c - \frac{1}{L}\nabla \cdot N, \quad N = - \frac{1}{L}D(c) \nabla c, \label{eqn:solvent-diffusion}\tag{1}\\N|_{x=0} = - R, \quad c|_{x=1} = 1 \label{bc:solvent-diffusion}\tag{2} \quadc|_{t=0} = 1; \end{align}Growth reaction:$$R = k c|_{x=0}; \label{eqn:reaction}\tag{3}$$SEI thickness:$$\frac{d L}{d t} = \hat{V} R, \quad L|_{t=0} = 1; \label{eqn:SEI-thickness}\tag{4}$$where the dimensionless parameters are given by$$ k = \frac{k^* L^*_0}{D^*(c^*_{\infty})}, \quad \hat{V} = \hat{V}^* c^*_{\infty}, \quad D(c) = \frac{D^*(c^*)}{D^*(c^*_{\infty})}. \label{parameters}\tag{5}$$In the above, the additional advective term in the diffusion equation arises due to our choice to scale the spatial coordinate $x^*$ with the time-dependent SEI layer thickness $L^*$. Entering the Model into PyBaMM As always, we begin by importing pybamm and changing our working directory to the root of the pybamm folder. ###Code %pip install pybamm -q # install PyBaMM if it is not installed import pybamm import numpy as np import os os.chdir(pybamm.__path__[0]+'/..') ###Output Note: you may need to restart the kernel to use updated packages. ###Markdown A model is defined in six steps:1. Initialise model2. Define parameters and variables3. State governing equations4. State boundary conditions5. State initial conditions6. State output variablesWe shall proceed through each step to enter our simple SEI growth model. 1. Initialise model We first initialise the model using the `BaseModel` class. This sets up the required structure for our model. ###Code model = pybamm.BaseModel() ###Output _____no_output_____ ###Markdown 2. Define parameters and variables In our SEI model, we have two dimensionless parameters, $k$ and $\hat{V}$, and one dimensionless function $D(c)$, which are all given in terms of the dimensional parameters, see (5). In pybamm, inputs are dimensional, so we first state all the dimensional parameters. We then define the dimensionless parameters, which are expressed an non-dimensional groupings of dimensional parameters. To define the dimensional parameters, we use the `Parameter` object to create parameter symbols. Parameters which are functions are defined using `FunctionParameter` object and should be defined within a python function as shown. 
###Code # dimensional parameters k_dim = pybamm.Parameter("Reaction rate constant") L_0_dim = pybamm.Parameter("Initial thickness") V_hat_dim = pybamm.Parameter("Partial molar volume") c_inf_dim = pybamm.Parameter("Bulk electrolyte solvent concentration") def D_dim(cc): return pybamm.FunctionParameter("Diffusivity", {"Solvent concentration [mol.m-3]": cc}) # dimensionless parameters k = k_dim * L_0_dim / D_dim(c_inf_dim) V_hat = V_hat_dim * c_inf_dim def D(cc): c_dim = c_inf_dim * cc return D_dim(c_dim) / D_dim(c_inf_dim) ###Output _____no_output_____ ###Markdown We now define the dimensionless variables in our model. Since these are the variables we solve for directly, we do not need to write them in terms of the dimensional variables. We simply use `SpatialVariable` and `Variable` to create the required symbols: ###Code x = pybamm.SpatialVariable("x", domain="SEI layer", coord_sys="cartesian") c = pybamm.Variable("Solvent concentration", domain="SEI layer") L = pybamm.Variable("SEI thickness") ###Output _____no_output_____ ###Markdown 3. State governing equations We can now use the symbols we have created for our parameters and variables to write out our governing equations. Note that before we use the reaction flux and solvent flux, we must derive new symbols for them from the defined parameter and variable symbols. Each governing equation must also be stated in the explicit form `d/dt = rhs` since pybamm only stores the right hand side (rhs) and assumes that the left hand side is the time derivative. The governing equations are then simply ###Code # SEI reaction flux R = k * pybamm.BoundaryValue(c, "left") # solvent concentration equation N = - (1 / L) * D(c) * pybamm.grad(c) dcdt = (V_hat * R) * pybamm.inner(x / L, pybamm.grad(c)) - (1 / L) * pybamm.div(N) # SEI thickness equation dLdt = V_hat * R ###Output _____no_output_____ ###Markdown Once we have stated the equations, we can add them to the `model.rhs` dictionary. This is a dictionary whose keys are the variables being solved for, and whose values correspond right hand sides of the governing equations for each variable. ###Code model.rhs = {c: dcdt, L: dLdt} ###Output _____no_output_____ ###Markdown 4. State boundary conditions We only have boundary conditions on the solvent concentration equation. We must state where a condition is Neumann (on the gradient) or Dirichlet (on the variable itself). The boundary condition on the electrode-SEI (x=0) boundary is: $$ N|_{x=0} = - R, \quad N|_{x=0} = - \frac{1}{L} D(c|_{x=0} )\nabla c|_{x=0}$$which is a Neumann condition. To implement this boundary condition in pybamm, we must first rearrange the equation so that the gradient of the concentration, $\nabla c|_{x=0}$, is the subject. 
Therefore we have$$ \nabla c|_{x=0} = \frac{L R}{D(c|_{x=0} )}$$which we enter into pybamm as ###Code # electrode-SEI boundary condition (x=0) (lbc = left boundary condition) D_left = pybamm.BoundaryValue(D(c), "left") # pybamm requires BoundaryValue(D(c)) and not D(BoundaryValue(c)) grad_c_left = L * R / D_left ###Output _____no_output_____ ###Markdown On the SEI-electrolyte boundary (x=1), we have the boundary condition$$ c|_{x=1} = 1$$ which is a Dirichlet condition and is just entered as ###Code c_right = pybamm.Scalar(1) ###Output _____no_output_____ ###Markdown We now load these boundary conditions into the `model.boundary_conditions` dictionary in the following way, being careful to state the type of boundary condition: ###Code model.boundary_conditions = {c: {"left": (grad_c_left, "Neumann"), "right": (c_right, "Dirichlet")}} ###Output _____no_output_____ ###Markdown 5. State initial conditions There are two initial conditions in our model:$$ c|_{t=0} = 1, \quad L|_{t=0} = 1$$ which are simply written in pybamm as ###Code c_init = pybamm.Scalar(1) L_init = pybamm.Scalar(1) ###Output _____no_output_____ ###Markdown and then included into the `model.initial_conditions` dictionary: ###Code model.initial_conditions = {c: c_init, L: L_init} ###Output _____no_output_____ ###Markdown 6. State output variables We already have everything required in model for the model to be used and solved, but we have not yet stated what we actually want to output from the model. PyBaMM allows users to output any combination of symbols as an output variable therefore allowing the user the flexibility to output important quantities without further tedious postprocessing steps. Some useful outputs for this simple model are:- the SEI thickness- the SEI growth rate- the solvent concentrationThese are added to the model by adding entries to the `model.variables` dictionary ###Code model.variables = {"SEI thickness": L, "SEI growth rate": dLdt, "Solvent concentration": c} ###Output _____no_output_____ ###Markdown We can also output the dimensional versions of these variables by multiplying by the scalings used to non-dimensionalise. By convention, we recommend including the units in the output variables name so that they do not overwrite the dimensionless output variables. To add new entries to the dictionary we used the method `.update()`. ###Code L_dim = L_0_dim * L dLdt_dim = (D_dim(c_inf_dim) / L_0_dim ) * dLdt c_dim = c_inf_dim * c model.variables.update({ "SEI thickness [m]": L_dim, "SEI growth rate [m/s]": dLdt_dim, "Solvent concentration [mols/m^3]": c_dim } ) ###Output _____no_output_____ ###Markdown The model is now fully defined and ready to be used. If you plan on reusing the model several times, you can additionally set model defaults which may include: a default geometry to run the model on, a default set of parameter values, a default solver, etc. Using the Model The model will now behave in the same way as any of the inbuilt PyBaMM models. However, to demonstrate that the model works we display the steps involved in solving the model but we will not go into details within this notebook. ###Code # define geometry geometry = pybamm.Geometry( {"SEI layer": {x: {"min": pybamm.Scalar(0), "max": pybamm.Scalar(1)}}} ) def Diffusivity(cc): return cc * 10**(-5) # parameter values (not physically based, for example only!) 
param = pybamm.ParameterValues( { "Reaction rate constant": 20, "Initial thickness": 1e-6, "Partial molar volume": 10, "Bulk electrolyte solvent concentration": 1, "Diffusivity": Diffusivity, } ) # process model and geometry param.process_model(model) param.process_geometry(geometry) # mesh and discretise submesh_types = {"SEI layer": pybamm.Uniform1DSubMesh} var_pts = {x: 100} mesh = pybamm.Mesh(geometry, submesh_types, var_pts) spatial_methods = {"SEI layer": pybamm.FiniteVolume()} disc = pybamm.Discretisation(mesh, spatial_methods) disc.process_model(model) # solve solver = pybamm.ScipySolver() t = np.linspace(0, 100, 100) solution = solver.solve(model, t) # Extract output variables L_out = solution["SEI thickness"] c_out = solution["Solvent concentration"] x = np.linspace(0, 1, 100) ###Output 2021-01-24 19:29:11,759 - [WARNING] processed_variable.get_spatial_scale(518): No length scale set for SEI layer. Using default of 1 [m]. ###Markdown Using these outputs, we can now plot the SEI thickness as a function of time and also the solvent concentration profile within the SEI. We use a slider to plot the concentration profile at different times. ###Code import matplotlib.pyplot as plt def plot(t): f, (ax1, ax2) = plt.subplots(1, 2 ,figsize=(10,5)) ax1.plot(solution.t, L_out(solution.t)) ax1.plot([t], [L_out(t)], 'r.') plot_c, = ax2.plot(x * L_out(t), c_out(t, x)) ax1.set_ylabel('SEI thickness') ax1.set_xlabel('t') ax2.set_ylabel('Solvent concentration') ax2.set_xlabel('x') ax2.set_ylim(0, 1.1) ax2.set_xlim(0, x[-1]*L_out(solution.t[-1])) plt.show() import ipywidgets as widgets widgets.interact(plot, t=widgets.FloatSlider(min=0,max=solution.t[-1],step=0.1,value=0)); ###Output _____no_output_____ ###Markdown Formally adding your model The purpose of this notebook has been to go through the steps involved in getting a simple model working within PyBaMM. However, if you plan on reusing your model and want greater flexibility then we recommend that you create a new class for your model. We have set out instructions on how to do this in the "Adding a Model" tutorial in the documentation. ReferencesThe relevant papers for this notebook are: ###Code pybamm.print_citations() ###Output [1] Joel A. E. Andersson, Joris Gillis, Greg Horn, James B. Rawlings, and Moritz Diehl. CasADi – A software framework for nonlinear optimization and optimal control. Mathematical Programming Computation, 11(1):1–36, 2019. doi:10.1007/s12532-018-0139-4. [2] Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, and others. Array programming with NumPy. Nature, 585(7825):357–362, 2020. doi:10.1038/s41586-020-2649-2. [3] Valentin Sulzer, Scott G. Marquis, Robert Timms, Martin Robinson, and S. Jon Chapman. Python Battery Mathematical Modelling (PyBaMM). ECSarXiv. February, 2020. doi:10.1149/osf.io/67ckj. [4] Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, and others. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods, 17(3):261–272, 2020. doi:10.1038/s41592-019-0686-2.
examples/03_evaluate/als_movielens_diversity_metrics.ipynb
###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Apply Diversity Metrics -- Compare ALS and Random Recommenders on MovieLens (PySpark)In this notebook, we demonstrate how to evaluate a recommender using metrics other than commonly used rating/ranking metrics.Such metrics include:- Coverage - We use following two metrics defined by \[Shani and Gunawardana\]: - (1) catalog_coverage, which measures the proportion of items that get recommended from the item catalog; - (2) distributional_coverage, which measures how equally different items are recommended in the recommendations to all users.- Novelty - A more novel item indicates it is less popular, i.e. it gets recommended less frequently.We use the definition of novelty from \[Castells et al.\]- Diversity - The dissimilarity of items being recommended.We use a definition based on _intralist similarity_ by \[Zhang et al.]- Serendipity - The "unusualness" or "surprise" of recommendations to a user.We use a definition based on cosine similarity by \[Zhang et al.]We evaluate the results obtained with two approaches: using the ALS recommender algorithm vs. a baseline of random recommendations. - Matrix factorization by [ALS](https://spark.apache.org/docs/latest/api/python/_modules/pyspark/ml/recommendation.htmlALS) (Alternating Least Squares) is a well known collaborative filtering algorithm. - We also define a process which randomly recommends unseen items to each user. The comparision results show that the ALS recommender outperforms the random recommender on ranking metrics (Precision@k, Recall@k, NDCG@k, and Mean average precision), while the random recommender outperforms ALS recommender on diversity metrics. This is because ALS is optimized for estimating the item rating as accurate as possible, therefore it performs well on accuracy metrics including rating and ranking metrics. As a side effect, the items being recommended tend to be popular items, which are the items mostly sold or viewed. It leaves the long-tail less popular items having less chance to get introduced to the users. This is the reason why ALS is not performing as well as a random recommender on diversity metrics. Usually there is a trade-off between one type of metric vs. another. One should decide which set of metrics to optimize based on business scenarios. **Coverage**We define _catalog coverage_ as the proportion of items showing in all users’ recommendations: $$\textrm{catalog coverage} = \frac{|N_r|}{|N_t|}$$where $N_r$ denotes the set of items in the recommendations (`reco_df` in the code below) and $N_t$ the set of items in the historical data (`train_df`)._Distributional coverage_ measures how equally different items are recommended to users when a particular recommender system is used.If $p(i|R)$ denotes the probability that item $i$ is observed among all recommendation lists, we define distributional coverage as$$\textrm{distributional coverage} = -\sum_{i \in N_t} p(i|R) \log_2 p(i)$$where $$p(i|R) = \frac{|M_r (i)|}{|\textrm{reco_df}|}$$and $M_r (i)$ denotes the users who are recommended item $i$. **Diversity**Diversity represents the variety present in a list of recommendations._Intra-List Similarity_ aggregates the pairwise similarity of all items in a set. A recommendation list with groups of very similar items will score a high intra-list similarity. 
Lower intra-list similarity indicates higher diversity.To measure similarity between any two items we use _cosine similarity_:$$\textrm{Cosine Similarity}(i,j)= \frac{|M_t^{l(i,j)}|} {\sqrt{|M_t^{l(i)}|} \sqrt{|M_t^{l(j)}|} }$$where $M_t^{l(i)}$ denotes the set of users who liked item $i$ and $M_t^{l(i,j)}$ the users who liked both $i$ and $j$.Intra-list similarity is then defined as $$\textrm{IL} = \frac{1}{|M|} \sum_{u \in M} \frac{1}{\binom{N_r(u)}{2}} \sum_{i,j \in N_r (u),\, i<j} \textrm{Cosine Similarity}(i,j)$$where $M$ is the set of users and $N_r(u)$ the set of recommendations for user $u$. Finally, diversity is defined as$$\textrm{diversity} = 1 - \textrm{IL}$$ **Novelty**The novelty of an item is inverse to its _popularity_. If $p(i)$ represents the probability that item $i$ is observed (or known, interacted with etc.) by users, then $$p(i) = \frac{|M_t (i)|} {|\textrm{train_df}|}$$where $M_t (i)$ is the set of users who have interacted with item $i$ in the historical data. The novelty of an item is then defined as$$\textrm{novelty}(i) = -\log_2 p(i)$$and the novelty of the recommendations across all users is defined as$$\textrm{novelty} = \sum_{i \in N_r} \frac{|M_r (i)|}{|\textrm{reco_df}|} \textrm{novelty}(i)$$ **Serendipity**Serendipity represents the “unusualness” or “surprise” of recommendations. Unlike novelty, serendipity encompasses the semantic content of items and can be imagined as the distance between recommended items and their expected contents (Zhang et al.) Lower cosine similarity indicates lower expectedness and higher serendipity.We define the expectedness of an unseen item $i$ for user $u$ as the average similarity between every already seen item $j$ in the historical data and $i$:$$\textrm{expectedness}(i|u) = \frac{1}{|N_t (u)|} \sum_{j \in N_t (u)} \textrm{Cosine Similarity}(i,j)$$The serendipity of item $i$ is (1 - expectedness) multiplied by _relevance_, where relevance indicates whether the item turns out to be liked by the user or not. For example, in a binary scenario, if an item in `reco_df` is liked (purchased, clicked) in `test_df`, its relevance equals one, otherwise it equals zero. Aggregating over all users and items, the overall serendipity is defined as$$\textrm{serendipity} = \frac{1}{|M|} \sum_{u \in M_r}\frac{1}{|N_r (u)|} \sum_{i \in N_r (u)} \big(1 - \textrm{expectedness}(i|u) \big) \, \textrm{relevance}(i)$$ **Note**: This notebook requires a PySpark environment to run properly. Please follow the steps in [SETUP.md](https://github.com/Microsoft/Recommenders/blob/master/SETUP.mddependencies-setup) to install the PySpark environment. 
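Before moving on to the Spark implementation, the following small pandas/numpy sketch (toy data invented purely for illustration, and not part of the evaluation pipeline below) applies the catalog coverage and novelty formulas above directly; it can serve as a sanity check when reading the definitions.
###Code
import numpy as np
import pandas as pd

# toy historical interactions ("train_df") and toy top-k recommendations ("reco_df")
toy_train = pd.DataFrame({"UserId": [1, 1, 2, 2, 3], "MovieId": [10, 20, 10, 30, 10]})
toy_reco = pd.DataFrame({"UserId": [1, 2, 3], "MovieId": [30, 20, 30]})

# catalog coverage = |N_r| / |N_t|
toy_catalog_coverage = toy_reco["MovieId"].nunique() / toy_train["MovieId"].nunique()

# novelty(i) = -log2(|M_t(i)| / |train_df|), weighted by how often item i is recommended
p_i = toy_train.groupby("MovieId")["UserId"].nunique() / len(toy_train)
item_novelty = -np.log2(p_i)
reco_weight = toy_reco.groupby("MovieId")["UserId"].nunique() / len(toy_reco)
toy_novelty = (reco_weight * item_novelty.reindex(reco_weight.index)).sum()
###Output
_____no_output_____
###Markdown
The rest of the notebook computes the same quantities (plus diversity and serendipity) at scale with `SparkDiversityEvaluation`.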
###Code # set the environment path to find Recommenders import sys import pyspark from pyspark.ml.recommendation import ALS import pyspark.sql.functions as F from pyspark.sql import SparkSession from pyspark.sql.types import StructType, StructField from pyspark.sql.types import StringType, FloatType, IntegerType, LongType from recommenders.utils.timer import Timer from recommenders.datasets import movielens from recommenders.utils.notebook_utils import is_jupyter from recommenders.datasets.spark_splitters import spark_random_split from recommenders.evaluation.spark_evaluation import SparkRatingEvaluation, SparkRankingEvaluation, SparkDiversityEvaluation from recommenders.utils.spark_utils import start_or_get_spark from pyspark.sql.window import Window import numpy as np import pandas as pd print("System version: {}".format(sys.version)) print("Spark version: {}".format(pyspark.__version__)) ###Output System version: 3.7.10 (default, Jun 4 2021, 14:48:32) [GCC 7.5.0] Spark version: 2.4.8 ###Markdown Set the default parameters. ###Code # top k items to recommend TOP_K = 10 # Select MovieLens data size: 100k, 1m, 10m, or 20m MOVIELENS_DATA_SIZE = '100k' # user, item column names COL_USER="UserId" COL_ITEM="MovieId" COL_RATING="Rating" ###Output _____no_output_____ ###Markdown Set up Spark contextThe following settings work well for debugging locally on VM - change when running on a cluster. We set up a giant single executor with many threads and specify memory cap. ###Code # the following settings work well for debugging locally on VM - change when running on a cluster # set up a giant single executor with many threads and specify memory cap spark = start_or_get_spark("ALS PySpark", memory="16g") spark.conf.set("spark.sql.crossJoin.enabled", "true") ###Output _____no_output_____ ###Markdown Download the MovieLens dataset ###Code # Note: The DataFrame-based API for ALS currently only supports integers for user and item ids. schema = StructType( ( StructField(COL_USER, IntegerType()), StructField(COL_ITEM, IntegerType()), StructField(COL_RATING, FloatType()), StructField("Timestamp", LongType()), ) ) data = movielens.load_spark_df(spark, size=MOVIELENS_DATA_SIZE, schema=schema) data.show() ###Output 100%|██████████| 4.81k/4.81k [00:00<00:00, 5.98kKB/s] ###Markdown Split the data using the Spark random splitter provided in utilities ###Code train, test = spark_random_split(data, ratio=0.75, seed=123) print ("N train", train.cache().count()) print ("N test", test.cache().count()) ###Output N train 75193 N test 24807 ###Markdown Get all possible user-item pairs Note: We assume that training data contains all users and all catalog items. ###Code users = train.select(COL_USER).distinct() items = train.select(COL_ITEM).distinct() user_item = users.crossJoin(items) ###Output _____no_output_____ ###Markdown Train the ALS model on the training data, and get the top-k recommendations for our testing dataTo predict movie ratings, we use the rating data in the training set as users' explicit feedback. The hyperparameters used in building the model are referenced from [here](http://mymedialite.net/examples/datasets.html). We do not constrain the latent factors (`nonnegative = False`) in order to allow for both positive and negative preferences towards movies.Timing will vary depending on the machine being used to train. 
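The reference hyperparameters are used as-is in the next cell. If you would rather select them on your own split, a minimal sketch using Spark's built-in tuning utilities could look like the following; the grid values are purely illustrative and the expensive fit is left commented out.
###Code
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

als_tune = ALS(
    userCol=COL_USER, itemCol=COL_ITEM, ratingCol=COL_RATING,
    coldStartStrategy="drop", nonnegative=False, seed=42
)

# illustrative grid only - widen or refine as needed
param_grid = (
    ParamGridBuilder()
    .addGrid(als_tune.rank, [10, 20])
    .addGrid(als_tune.regParam, [0.05, 0.1])
    .build()
)

rmse_evaluator = RegressionEvaluator(
    metricName="rmse", labelCol=COL_RATING, predictionCol="prediction"
)

cv = CrossValidator(
    estimator=als_tune,
    estimatorParamMaps=param_grid,
    evaluator=rmse_evaluator,
    numFolds=3,
    seed=42,
)

# cv_model = cv.fit(train)        # slow: trains len(param_grid) * numFolds models
# best_als = cv_model.bestModel
###Output
_____no_output_____
###Markdown
For reproducibility, the fixed reference values are kept below.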
###Code header = { "userCol": COL_USER, "itemCol": COL_ITEM, "ratingCol": COL_RATING, } als = ALS( rank=10, maxIter=15, implicitPrefs=False, regParam=0.05, coldStartStrategy='drop', nonnegative=False, seed=42, **header ) with Timer() as train_time: model = als.fit(train) print("Took {} seconds for training.".format(train_time.interval)) ###Output Took 2.598099071998149 seconds for training. ###Markdown In the movie recommendation use case, recommending movies that have been rated by the users does not make sense. Therefore, the rated movies are removed from the recommended items.In order to achieve this, we recommend all movies to all users, and then remove the user-movie pairs that exist in the training dataset. ###Code # Score all user-item pairs dfs_pred = model.transform(user_item) # Remove seen items. dfs_pred_exclude_train = dfs_pred.alias("pred").join( train.alias("train"), (dfs_pred[COL_USER] == train[COL_USER]) & (dfs_pred[COL_ITEM] == train[COL_ITEM]), how='outer' ) top_all = dfs_pred_exclude_train.filter(dfs_pred_exclude_train["train.Rating"].isNull()) \ .select('pred.' + COL_USER, 'pred.' + COL_ITEM, 'pred.' + "prediction") print(top_all.count()) window = Window.partitionBy(COL_USER).orderBy(F.col("prediction").desc()) top_k_reco = top_all.select("*", F.row_number().over(window).alias("rank")).filter(F.col("rank") <= TOP_K).drop("rank") print(top_k_reco.count()) ###Output 1477928 9430 ###Markdown Random RecommenderWe define a recommender which randomly recommends unseen items to each user. ###Code train_df = train.select(COL_USER, COL_ITEM, COL_RATING) # random recommender window = Window.partitionBy(COL_USER).orderBy(F.rand()) # randomly generated recommendations for each user pred_df = ( train_df # join training data with all possible user-item pairs (seen in training) .join(user_item, on=[COL_USER, COL_ITEM], how="right" ) # get user-item pairs that were not seen in the training data .filter(F.col(COL_RATING).isNull()) # count items for each user (randomly sorting them) .withColumn("score", F.row_number().over(window)) # get the top k items per user .filter(F.col("score") <= TOP_K) .drop(COL_RATING) ) ###Output _____no_output_____ ###Markdown 5. 
ALS vs Random Recommenders Performance Comparison ###Code def get_ranking_results(ranking_eval): metrics = { "Precision@k": ranking_eval.precision_at_k(), "Recall@k": ranking_eval.recall_at_k(), "NDCG@k": ranking_eval.ndcg_at_k(), "Mean average precision": ranking_eval.map_at_k() } return metrics def get_diversity_results(diversity_eval): metrics = { "catalog_coverage":diversity_eval.catalog_coverage(), "distributional_coverage":diversity_eval.distributional_coverage(), "novelty": diversity_eval.novelty(), "diversity": diversity_eval.diversity(), "serendipity": diversity_eval.serendipity() } return metrics def generate_summary(data, algo, k, ranking_metrics, diversity_metrics): summary = {"Data": data, "Algo": algo, "K": k} if ranking_metrics is None: ranking_metrics = { "Precision@k": np.nan, "Recall@k": np.nan, "nDCG@k": np.nan, "MAP": np.nan, } summary.update(ranking_metrics) summary.update(diversity_metrics) return summary ###Output _____no_output_____ ###Markdown ALS Recommender Performance Results ###Code als_ranking_eval = SparkRankingEvaluation( test, top_all, k = TOP_K, col_user="UserId", col_item="MovieId", col_rating="Rating", col_prediction="prediction", relevancy_method="top_k" ) als_ranking_metrics = get_ranking_results(als_ranking_eval) als_diversity_eval = SparkDiversityEvaluation( train_df = train_df, reco_df = top_k_reco, col_user="UserId", col_item="MovieId" ) als_diversity_metrics = get_diversity_results(als_diversity_eval) als_results = generate_summary(MOVIELENS_DATA_SIZE, "als", TOP_K, als_ranking_metrics, als_diversity_metrics) ###Output _____no_output_____ ###Markdown Random Recommender Performance Results ###Code random_ranking_eval = SparkRankingEvaluation( test, pred_df, col_user=COL_USER, col_item=COL_ITEM, col_rating=COL_RATING, col_prediction="score", k=TOP_K, ) random_ranking_metrics = get_ranking_results(random_ranking_eval) random_diversity_eval = SparkDiversityEvaluation( train_df = train_df, reco_df = pred_df, col_user=COL_USER, col_item=COL_ITEM ) random_diversity_metrics = get_diversity_results(random_diversity_eval) random_results = generate_summary(MOVIELENS_DATA_SIZE, "random", TOP_K, random_ranking_metrics, random_diversity_metrics) ###Output _____no_output_____ ###Markdown Result Comparison ###Code cols = ["Data", "Algo", "K", "Precision@k", "Recall@k", "NDCG@k", "Mean average precision","catalog_coverage", "distributional_coverage","novelty", "diversity", "serendipity" ] df_results = pd.DataFrame(columns=cols) df_results.loc[1] = als_results df_results.loc[2] = random_results df_results ###Output _____no_output_____ ###Markdown ReferencesThe metric definitions / formulations are based on the following references:- P. Castells, S. Vargas, and J. Wang, Novelty and diversity metrics for recommender systems: choice, discovery and relevance, ECIR 2011- G. Shani and A. Gunawardana, Evaluating recommendation systems, Recommender Systems Handbook pp. 257-297, 2010.- E. Yan, Serendipity: Accuracy’s unpopular best friend in recommender Systems, eugeneyan.com, April 2020- Y.C. Zhang, D.Ó. Séaghdha, D. Quercia and T. Jambor, Auralist: introducing serendipity into music recommendation, WSDM 2012 ###Code # cleanup spark instance spark.stop() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. 
Apply Diversity Metrics -- Compare ALS and Random Recommenders on MovieLens (PySpark)In this notebook, we demonstrate how to evaluate a recommender using metrics other than commonly used rating/ranking metrics.Such metrics include:- Coverage - We use following two metrics defined by \[Shani and Gunawardana\]: - (1) catalog_coverage, which measures the proportion of items that get recommended from the item catalog; - (2) distributional_coverage, which measures how equally different items are recommended in the recommendations to all users.- Novelty - A more novel item indicates it is less popular, i.e. it gets recommended less frequently.We use the definition of novelty from \[Castells et al.\]- Diversity - The dissimilarity of items being recommended.We use a definition based on _intralist similarity_ by \[Zhang et al.]- Serendipity - The "unusualness" or "surprise" of recommendations to a user.We use a definition based on cosine similarity by \[Zhang et al.]We evaluate the results obtained with two approaches: using the ALS recommender algorithm vs. a baseline of random recommendations. - Matrix factorization by [ALS](https://spark.apache.org/docs/latest/api/python/_modules/pyspark/ml/recommendation.htmlALS) (Alternating Least Squares) is a well known collaborative filtering algorithm. - We also define a process which randomly recommends unseen items to each user. - We show two options to calculate item-item similarity: (1) based on item co-occurrence count; and (2) based on item feature vectors. The comparision results show that the ALS recommender outperforms the random recommender on ranking metrics (Precision@k, Recall@k, NDCG@k, and Mean average precision), while the random recommender outperforms ALS recommender on diversity metrics. This is because ALS is optimized for estimating the item rating as accurate as possible, therefore it performs well on accuracy metrics including rating and ranking metrics. As a side effect, the items being recommended tend to be popular items, which are the items mostly sold or viewed. It leaves the [long-tail items](https://github.com/microsoft/recommenders/blob/main/GLOSSARY.md) having less chance to get introduced to the users. This is the reason why ALS is not performing as well as a random recommender on diversity metrics. From the algorithmic point of view, items in the tail suffer from the cold-start problem, making them hard for recommendation systems to use. However, from the business point of view, oftentimes the items in the tail can be highly profitable, since, depending on supply, business can apply a higher margin to them. Recommendation systems that optimize metrics like novelty and diversity, can help to find users willing to get these long tail items. Usually there is a trade-off between one type of metric vs. another. One should decide which set of metrics to optimize based on business scenarios. 
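All of the diversity-oriented metrics defined below are built on an item-item similarity. As a toy illustration (made-up data, independent of the Spark code later in this notebook), the co-occurrence-based cosine similarity between two items can be computed as follows.
###Code
import numpy as np
import pandas as pd

# one row per (user, item) "like" in the historical data
liked = pd.DataFrame({"UserId": [1, 1, 2, 2, 3], "MovieId": [10, 20, 10, 20, 10]})
users_per_item = liked.groupby("MovieId")["UserId"].apply(set)

def cooccurrence_cosine(i, j):
    shared = len(users_per_item[i] & users_per_item[j])
    return shared / np.sqrt(len(users_per_item[i]) * len(users_per_item[j]))

# items 10 and 20 are co-liked by users 1 and 2: 2 / sqrt(3 * 2) is roughly 0.82
sim_10_20 = cooccurrence_cosine(10, 20)
###Output
_____no_output_____
###Markdown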
**Coverage**We define _catalog coverage_ as the proportion of items showing in all users’ recommendations: $$\textrm{CatalogCoverage} = \frac{|N_r|}{|N_t|}$$where $N_r$ denotes the set of items in the recommendations (`reco_df` in the code below) and $N_t$ the set of items in the historical data (`train_df`)._Distributional coverage_ measures how equally different items are recommended to users when a particular recommender system is used.If $p(i|R)$ denotes the probability that item $i$ is observed among all recommendation lists, we define distributional coverage as$$\textrm{DistributionalCoverage} = -\sum_{i \in N_t} p(i|R) \log_2 p(i)$$where $$p(i|R) = \frac{|M_r (i)|}{|\textrm{reco_df}|}$$and $M_r (i)$ denotes the users who are recommended item $i$. **Diversity**Diversity represents the variety present in a list of recommendations._Intra-List Similarity_ aggregates the pairwise similarity of all items in a set. A recommendation list with groups of very similar items will score a high intra-list similarity. Lower intra-list similarity indicates higher diversity.To measure similarity between any two items we use _cosine similarity_:$$\textrm{Cosine Similarity}(i,j)= \frac{|M_t^{l(i,j)}|} {\sqrt{|M_t^{l(i)}|} \sqrt{|M_t^{l(j)}|} }$$where $M_t^{l(i)}$ denotes the set of users who liked item $i$ and $M_t^{l(i,j)}$ the users who liked both $i$ and $j$.Intra-list similarity is then defined as $$\textrm{IL} = \frac{1}{|M|} \sum_{u \in M} \frac{1}{\binom{N_r(u)}{2}} \sum_{i,j \in N_r (u),\, i<j} \textrm{Cosine Similarity}(i,j)$$where $M$ is the set of users and $N_r(u)$ the set of recommendations for user $u$. Finally, diversity is defined as$$\textrm{diversity} = 1 - \textrm{IL}$$ **Novelty**The novelty of an item is inverse to its _popularity_. If $p(i)$ represents the probability that item $i$ is observed (or known, interacted with etc.) by users, then $$p(i) = \frac{|M_t (i)|} {|\textrm{train_df}|}$$where $M_t (i)$ is the set of users who have interacted with item $i$ in the historical data. The novelty of an item is then defined as$$\textrm{novelty}(i) = -\log_2 p(i)$$and the novelty of the recommendations across all users is defined as$$\textrm{novelty} = \sum_{i \in N_r} \frac{|M_r (i)|}{|\textrm{reco_df}|} \textrm{novelty}(i)$$ **Serendipity**Serendipity represents the “unusualness” or “surprise” of recommendations. Unlike novelty, serendipity encompasses the semantic content of items and can be imagined as the distance between recommended items and their expected contents (Zhang et al.) Lower cosine similarity indicates lower expectedness and higher serendipity.We define the expectedness of an unseen item $i$ for user $u$ as the average similarity between every already seen item $j$ in the historical data and $i$:$$\textrm{expectedness}(i|u) = \frac{1}{|N_t (u)|} \sum_{j \in N_t (u)} \textrm{Cosine Similarity}(i,j)$$The serendipity of item $i$ is (1 - expectedness) multiplied by _relevance_, where relevance indicates whether the item turns out to be liked by the user or not. For example, in a binary scenario, if an item in `reco_df` is liked (purchased, clicked) in `test_df`, its relevance equals one, otherwise it equals zero. Aggregating over all users and items, the overall serendipity is defined as$$\textrm{serendipity} = \frac{1}{|M|} \sum_{u \in M_r}\frac{1}{|N_r (u)|} \sum_{i \in N_r (u)} \big(1 - \textrm{expectedness}(i|u) \big) \, \textrm{relevance}(i)$$ **Note**: This notebook requires a PySpark environment to run properly. 
Please follow the steps in [SETUP.md](https://github.com/Microsoft/Recommenders/blob/master/SETUP.mddependencies-setup) to install the PySpark environment. ###Code # set the environment path to find Recommenders %load_ext autoreload %autoreload 2 import sys import pyspark from pyspark.ml.recommendation import ALS import pyspark.sql.functions as F from pyspark.sql.types import FloatType, IntegerType, LongType, StructType, StructField from pyspark.ml.feature import Tokenizer, StopWordsRemover from pyspark.ml.feature import HashingTF, CountVectorizer, VectorAssembler from recommenders.utils.timer import Timer from recommenders.datasets import movielens from recommenders.datasets.spark_splitters import spark_random_split from recommenders.evaluation.spark_evaluation import SparkRankingEvaluation, SparkDiversityEvaluation from recommenders.utils.spark_utils import start_or_get_spark from pyspark.sql.window import Window import pyspark.sql.functions as F import numpy as np import pandas as pd print("System version: {}".format(sys.version)) print("Spark version: {}".format(pyspark.__version__)) ###Output System version: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0] Spark version: 2.4.8 ###Markdown Set the default parameters. ###Code # top k items to recommend TOP_K = 10 # Select MovieLens data size: 100k, 1m, 10m, or 20m MOVIELENS_DATA_SIZE = '100k' # user, item column names COL_USER="UserId" COL_ITEM="MovieId" COL_RATING="Rating" COL_TITLE="Title" COL_GENRE="Genre" ###Output _____no_output_____ ###Markdown 1. Set up Spark contextThe following settings work well for debugging locally on VM - change when running on a cluster. We set up a giant single executor with many threads and specify memory cap. ###Code # the following settings work well for debugging locally on VM - change when running on a cluster # set up a giant single executor with many threads and specify memory cap spark = start_or_get_spark("ALS PySpark", memory="16g") spark.conf.set("spark.sql.crossJoin.enabled", "true") ###Output _____no_output_____ ###Markdown 2. Download the MovieLens dataset ###Code # Note: The DataFrame-based API for ALS currently only supports integers for user and item ids. schema = StructType( ( StructField(COL_USER, IntegerType()), StructField(COL_ITEM, IntegerType()), StructField(COL_RATING, FloatType()), StructField("Timestamp", LongType()), ) ) data = movielens.load_spark_df(spark, size=MOVIELENS_DATA_SIZE, schema=schema, title_col=COL_TITLE, genres_col=COL_GENRE) data.show() ###Output 100%|██████████| 4.81k/4.81k [00:00<00:00, 20.1kKB/s] ###Markdown Split the data using the Spark random splitter provided in utilities ###Code train_df, test_df = spark_random_split(data.select(COL_USER, COL_ITEM, COL_RATING), ratio=0.75, seed=123) print ("N train_df", train_df.cache().count()) print ("N test_df", test_df.cache().count()) ###Output N train_df 75066 N test_df 24934 ###Markdown Get all possible user-item pairs Note: We assume that training data contains all users and all catalog items. ###Code users = train_df.select(COL_USER).distinct() items = train_df.select(COL_ITEM).distinct() user_item = users.crossJoin(items) ###Output _____no_output_____ ###Markdown 3. Train the ALS model on the training data, and get the top-k recommendations for our testing dataTo predict movie ratings, we use the rating data in the training set as users' explicit feedback. The hyperparameters used in building the model are referenced from [here](http://mymedialite.net/examples/datasets.html). 
We do not constrain the latent factors (`nonnegative = False`) in order to allow for both positive and negative preferences towards movies.Timing will vary depending on the machine being used to train. ###Code header = { "userCol": COL_USER, "itemCol": COL_ITEM, "ratingCol": COL_RATING, } als = ALS( rank=10, maxIter=15, implicitPrefs=False, regParam=0.05, coldStartStrategy='drop', nonnegative=False, seed=42, **header ) with Timer() as train_time: model = als.fit(train_df) print("Took {} seconds for training.".format(train_time.interval)) ###Output Took 4.189040212018881 seconds for training. ###Markdown In the movie recommendation use case, recommending movies that have been rated by the users does not make sense. Therefore, the rated movies are removed from the recommended items.In order to achieve this, we recommend all movies to all users, and then remove the user-movie pairs that exist in the training dataset. ###Code # Score all user-item pairs dfs_pred = model.transform(user_item) # Remove seen items. dfs_pred_exclude_train = dfs_pred.alias("pred").join( train_df.alias("train"), (dfs_pred[COL_USER] == train_df[COL_USER]) & (dfs_pred[COL_ITEM] == train_df[COL_ITEM]), how='outer' ) top_all = dfs_pred_exclude_train.filter(dfs_pred_exclude_train["train.Rating"].isNull()) \ .select('pred.' + COL_USER, 'pred.' + COL_ITEM, 'pred.' + "prediction") print(top_all.count()) window = Window.partitionBy(COL_USER).orderBy(F.col("prediction").desc()) top_k_reco = top_all.select("*", F.row_number().over(window).alias("rank")).filter(F.col("rank") <= TOP_K).drop("rank") print(top_k_reco.count()) ###Output 1464853 9430 ###Markdown 4. Random RecommenderWe define a recommender which randomly recommends unseen items to each user. ###Code # random recommender window = Window.partitionBy(COL_USER).orderBy(F.rand()) # randomly generated recommendations for each user pred_df = ( train_df # join training data with all possible user-item pairs (seen in training) .join(user_item, on=[COL_USER, COL_ITEM], how="right" ) # get user-item pairs that were not seen in the training data .filter(F.col(COL_RATING).isNull()) # count items for each user (randomly sorting them) .withColumn("score", F.row_number().over(window)) # get the top k items per user .filter(F.col("score") <= TOP_K) .drop(COL_RATING) ) ###Output _____no_output_____ ###Markdown 5. 
ALS vs Random Recommenders Performance Comparison ###Code def get_ranking_results(ranking_eval): metrics = { "Precision@k": ranking_eval.precision_at_k(), "Recall@k": ranking_eval.recall_at_k(), "NDCG@k": ranking_eval.ndcg_at_k(), "Mean average precision": ranking_eval.map_at_k() } return metrics def get_diversity_results(diversity_eval): metrics = { "catalog_coverage":diversity_eval.catalog_coverage(), "distributional_coverage":diversity_eval.distributional_coverage(), "novelty": diversity_eval.novelty(), "diversity": diversity_eval.diversity(), "serendipity": diversity_eval.serendipity() } return metrics def generate_summary(data, algo, k, ranking_metrics, diversity_metrics): summary = {"Data": data, "Algo": algo, "K": k} if ranking_metrics is None: ranking_metrics = { "Precision@k": np.nan, "Recall@k": np.nan, "nDCG@k": np.nan, "MAP": np.nan, } summary.update(ranking_metrics) summary.update(diversity_metrics) return summary ###Output _____no_output_____ ###Markdown ALS Recommender Performance Results ###Code als_ranking_eval = SparkRankingEvaluation( test_df, top_all, k = TOP_K, col_user=COL_USER, col_item=COL_ITEM, col_rating=COL_RATING, col_prediction="prediction", relevancy_method="top_k" ) als_ranking_metrics = get_ranking_results(als_ranking_eval) als_diversity_eval = SparkDiversityEvaluation( train_df = train_df, reco_df = top_k_reco, col_user = COL_USER, col_item = COL_ITEM ) als_diversity_metrics = get_diversity_results(als_diversity_eval) als_results = generate_summary(MOVIELENS_DATA_SIZE, "als", TOP_K, als_ranking_metrics, als_diversity_metrics) ###Output _____no_output_____ ###Markdown Random Recommender Performance Results ###Code random_ranking_eval = SparkRankingEvaluation( test_df, pred_df, col_user=COL_USER, col_item=COL_ITEM, col_rating=COL_RATING, col_prediction="score", k=TOP_K, ) random_ranking_metrics = get_ranking_results(random_ranking_eval) random_diversity_eval = SparkDiversityEvaluation( train_df = train_df, reco_df = pred_df, col_user = COL_USER, col_item = COL_ITEM ) random_diversity_metrics = get_diversity_results(random_diversity_eval) random_results = generate_summary(MOVIELENS_DATA_SIZE, "random", TOP_K, random_ranking_metrics, random_diversity_metrics) ###Output _____no_output_____ ###Markdown Result Comparison ###Code cols = ["Data", "Algo", "K", "Precision@k", "Recall@k", "NDCG@k", "Mean average precision","catalog_coverage", "distributional_coverage","novelty", "diversity", "serendipity" ] df_results = pd.DataFrame(columns=cols) df_results.loc[1] = als_results df_results.loc[2] = random_results df_results ###Output _____no_output_____ ###Markdown ConclusionThe comparision results show that the ALS recommender outperforms the random recommender on ranking metrics (Precision@k, Recall@k, NDCG@k, and Mean average precision), while the random recommender outperforms ALS recommender on diversity metrics. This is because ALS is optimized for estimating the item rating as accurate as possible, therefore it performs well on accuracy metrics including rating and ranking metrics. As a side effect, the items being recommended tend to be popular items, which are the items mostly sold or viewed. It leaves the long-tail less popular items having less chance to get introduced to the users. This is the reason why ALS is not performing as well as a random recommender on diversity metrics. 6. 
Calculate diversity metrics using item feature vector based item-item similarityIn the above section we calculate diversity metrics using item co-occurrence count based item-item similarity. In the scenarios when item features are available, we may want to calculate item-item similarity based on item feature vectors. In this section, we show how to calculate diversity metrics using item feature vector based item-item similarity. ###Code # Get movie features "title" and "genres" movies = ( data.groupBy(COL_ITEM, COL_TITLE, COL_GENRE).count() .na.drop() # remove rows with null values .withColumn(COL_GENRE, F.split(F.col(COL_GENRE), "\|")) # convert to array of genres .withColumn(COL_TITLE, F.regexp_replace(F.col(COL_TITLE), "[\(),:^0-9]", "")) # remove year from title .drop("count") # remove unused columns ) # tokenize "title" column title_tokenizer = Tokenizer(inputCol=COL_TITLE, outputCol="title_words") tokenized_data = title_tokenizer.transform(movies) # remove stop words remover = StopWordsRemover(inputCol="title_words", outputCol="text") clean_data = remover.transform(tokenized_data).drop(COL_TITLE, "title_words") # convert text input into feature vectors # step 1: perform HashingTF on column "text" text_hasher = HashingTF(inputCol="text", outputCol="text_features", numFeatures=1024) hashed_data = text_hasher.transform(clean_data) # step 2: fit a CountVectorizerModel from column "genres". count_vectorizer = CountVectorizer(inputCol=COL_GENRE, outputCol="genres_features") count_vectorizer_model = count_vectorizer.fit(hashed_data) vectorized_data = count_vectorizer_model.transform(hashed_data) # step 3: assemble features into a single vector assembler = VectorAssembler( inputCols=["text_features", "genres_features"], outputCol="features", ) feature_data = assembler.transform(vectorized_data).select(COL_ITEM, "features") feature_data.show(10, False) ###Output +------+---------------------------------------------+ |ItemId|features | +------+---------------------------------------------+ |167 |(1043,[128,544,1025],[1.0,1.0,1.0]) | |1343 |(1043,[38,300,1024],[1.0,1.0,1.0]) | |1607 |(1043,[592,821,1024],[1.0,1.0,1.0]) | |966 |(1043,[389,502,1028],[1.0,1.0,1.0]) | |9 |(1043,[11,342,1014,1024],[1.0,1.0,1.0,1.0]) | |1230 |(1043,[597,740,902,1025],[1.0,1.0,1.0,1.0]) | |1118 |(1043,[702,1025],[1.0,1.0]) | |673 |(1043,[169,690,1027,1040],[1.0,1.0,1.0,1.0]) | |879 |(1043,[909,1026,1027,1034],[1.0,1.0,1.0,1.0])| |66 |(1043,[256,1025,1028],[1.0,1.0,1.0]) | +------+---------------------------------------------+ only showing top 10 rows ###Markdown The *features* column is represented with a SparseVector object. For example, in the feature vector (1043,[128,544,1025],[1.0,1.0,1.0]), 1043 is the vector length, indicating the vector consisting of 1043 item features. The values at index positions 128,544,1025 are 1.0, and the values at other positions are all 0. 
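If the sparse representation is unfamiliar, here is a short standalone sketch (for illustration only) of how such a vector can be built and inspected with `pyspark.ml.linalg`.
###Code
from pyspark.ml.linalg import Vectors

# same shape as the first row printed above: length 1043, ones at indices 128, 544 and 1025
v = Vectors.sparse(1043, [128, 544, 1025], [1.0, 1.0, 1.0])

assert v.size == 1043
assert list(v.indices) == [128, 544, 1025]
dense = v.toArray()  # full-length numpy array with zeros everywhere else
###Output
_____no_output_____
###Markdown
`SparkDiversityEvaluation` consumes these feature vectors directly when `item_sim_measure="item_feature_vector"`, as in the cell below.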
###Code als_eval = SparkDiversityEvaluation( train_df = train_df, reco_df = top_k_reco, item_feature_df = feature_data, item_sim_measure="item_feature_vector", col_user = COL_USER, col_item = COL_ITEM ) als_diversity=als_eval.diversity() als_serendipity=als_eval.serendipity() print(als_diversity) print(als_serendipity) random_eval = SparkDiversityEvaluation( train_df = train_df, reco_df = pred_df, item_feature_df = feature_data, item_sim_measure="item_feature_vector", col_user = COL_USER, col_item = COL_ITEM ) random_diversity=random_eval.diversity() random_serendipity=random_eval.serendipity() print(random_diversity) print(random_serendipity) ###Output 0.8982144953920664 0.8941807579293202 ###Markdown It's interesting that the value of diversity and serendipity changes when using different item-item similarity calculation approach, for both ALS algorithm and random recommender. The diversity and serendipity of random recommender are still higher than ALS algorithm. ReferencesThe metric definitions / formulations are based on the following references:- P. Castells, S. Vargas, and J. Wang, Novelty and diversity metrics for recommender systems: choice, discovery and relevance, ECIR 2011- G. Shani and A. Gunawardana, Evaluating recommendation systems, Recommender Systems Handbook pp. 257-297, 2010.- E. Yan, Serendipity: Accuracy’s unpopular best friend in recommender Systems, eugeneyan.com, April 2020- Y.C. Zhang, D.Ó. Séaghdha, D. Quercia and T. Jambor, Auralist: introducing serendipity into music recommendation, WSDM 2012 ###Code # cleanup spark instance spark.stop() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Apply Diversity Metrics -- Compare ALS and Random Recommenders on MovieLens (PySpark)In this notebook, we demonstrate how to evaluate a recommender using metrics other than commonly used rating/ranking metrics.Such metrics include:- Coverage - We use following two metrics defined by \[Shani and Gunawardana\]: - (1) catalog_coverage, which measures the proportion of items that get recommended from the item catalog; - (2) distributional_coverage, which measures how equally different items are recommended in the recommendations to all users.- Novelty - A more novel item indicates it is less popular, i.e. it gets recommended less frequently.We use the definition of novelty from \[Castells et al.\]- Diversity - The dissimilarity of items being recommended.We use a definition based on _intralist similarity_ by \[Zhang et al.]- Serendipity - The "unusualness" or "surprise" of recommendations to a user.We use a definition based on cosine similarity by \[Zhang et al.]We evaluate the results obtained with two approaches: using the ALS recommender algorithm vs. a baseline of random recommendations. - Matrix factorization by [ALS](https://spark.apache.org/docs/latest/api/python/_modules/pyspark/ml/recommendation.htmlALS) (Alternating Least Squares) is a well known collaborative filtering algorithm. - We also define a process which randomly recommends unseen items to each user. - We show two options to calculate item-item similarity: (1) based on item co-occurrence count; and (2) based on item feature vectors. The comparision results show that the ALS recommender outperforms the random recommender on ranking metrics (Precision@k, Recall@k, NDCG@k, and Mean average precision), while the random recommender outperforms ALS recommender on diversity metrics. 
This is because ALS is optimized for estimating the item rating as accurate as possible, therefore it performs well on accuracy metrics including rating and ranking metrics. As a side effect, the items being recommended tend to be popular items, which are the items mostly sold or viewed. It leaves the [long-tail items](https://github.com/microsoft/recommenders/blob/main/GLOSSARY.md) having less chance to get introduced to the users. This is the reason why ALS is not performing as well as a random recommender on diversity metrics. From the algorithmic point of view, items in the tail suffer from the cold-start problem, making them hard for recommendation systems to use. However, from the business point of view, oftentimes the items in the tail can be highly profitable, since, depending on supply, business can apply a higher margin to them. Recommendation systems that optimize metrics like novelty and diversity, can help to find users willing to get these long tail items. Usually there is a trade-off between one type of metric vs. another. One should decide which set of metrics to optimize based on business scenarios. **Coverage**We define _catalog coverage_ as the proportion of items showing in all users’ recommendations: $$\textrm{CatalogCoverage} = \frac{|N_r|}{|N_t|}$$where $N_r$ denotes the set of items in the recommendations (`reco_df` in the code below) and $N_t$ the set of items in the historical data (`train_df`)._Distributional coverage_ measures how equally different items are recommended to users when a particular recommender system is used.If $p(i|R)$ denotes the probability that item $i$ is observed among all recommendation lists, we define distributional coverage as$$\textrm{DistributionalCoverage} = -\sum_{i \in N_t} p(i|R) \log_2 p(i)$$where $$p(i|R) = \frac{|M_r (i)|}{|\textrm{reco_df}|}$$and $M_r (i)$ denotes the users who are recommended item $i$. **Diversity**Diversity represents the variety present in a list of recommendations._Intra-List Similarity_ aggregates the pairwise similarity of all items in a set. A recommendation list with groups of very similar items will score a high intra-list similarity. Lower intra-list similarity indicates higher diversity.To measure similarity between any two items we use _cosine similarity_:$$\textrm{Cosine Similarity}(i,j)= \frac{|M_t^{l(i,j)}|} {\sqrt{|M_t^{l(i)}|} \sqrt{|M_t^{l(j)}|} }$$where $M_t^{l(i)}$ denotes the set of users who liked item $i$ and $M_t^{l(i,j)}$ the users who liked both $i$ and $j$.Intra-list similarity is then defined as $$\textrm{IL} = \frac{1}{|M|} \sum_{u \in M} \frac{1}{\binom{N_r(u)}{2}} \sum_{i,j \in N_r (u),\, i<j} \textrm{Cosine Similarity}(i,j)$$where $M$ is the set of users and $N_r(u)$ the set of recommendations for user $u$. Finally, diversity is defined as$$\textrm{diversity} = 1 - \textrm{IL}$$ **Novelty**The novelty of an item is inverse to its _popularity_. If $p(i)$ represents the probability that item $i$ is observed (or known, interacted with etc.) by users, then $$p(i) = \frac{|M_t (i)|} {|\textrm{train_df}|}$$where $M_t (i)$ is the set of users who have interacted with item $i$ in the historical data. The novelty of an item is then defined as$$\textrm{novelty}(i) = -\log_2 p(i)$$and the novelty of the recommendations across all users is defined as$$\textrm{novelty} = \sum_{i \in N_r} \frac{|M_r (i)|}{|\textrm{reco_df}|} \textrm{novelty}(i)$$ **Serendipity**Serendipity represents the “unusualness” or “surprise” of recommendations. 
Unlike novelty, serendipity encompasses the semantic content of items and can be imagined as the distance between recommended items and their expected contents (Zhang et al.) Lower cosine similarity indicates lower expectedness and higher serendipity.We define the expectedness of an unseen item $i$ for user $u$ as the average similarity between every already seen item $j$ in the historical data and $i$:$$\textrm{expectedness}(i|u) = \frac{1}{|N_t (u)|} \sum_{j \in N_t (u)} \textrm{Cosine Similarity}(i,j)$$The serendipity of item $i$ is (1 - expectedness) multiplied by _relevance_, where relevance indicates whether the item turns out to be liked by the user or not. For example, in a binary scenario, if an item in `reco_df` is liked (purchased, clicked) in `test_df`, its relevance equals one, otherwise it equals zero. Aggregating over all users and items, the overall serendipity is defined as$$\textrm{serendipity} = \frac{1}{|M|} \sum_{u \in M_r}\frac{1}{|N_r (u)|} \sum_{i \in N_r (u)} \big(1 - \textrm{expectedness}(i|u) \big) \, \textrm{relevance}(i)$$ **Note**: This notebook requires a PySpark environment to run properly. Please follow the steps in [SETUP.md](https://github.com/Microsoft/Recommenders/blob/master/SETUP.mddependencies-setup) to install the PySpark environment. ###Code # set the environment path to find Recommenders %load_ext autoreload %autoreload 2 import sys import pyspark from pyspark.ml.recommendation import ALS import pyspark.sql.functions as F from pyspark.sql import SparkSession from pyspark.sql.types import StringType, FloatType, IntegerType, LongType, StructType, StructField from pyspark.ml.feature import Tokenizer, RegexTokenizer, StopWordsRemover from pyspark.ml.feature import HashingTF, CountVectorizer, VectorAssembler from recommenders.utils.timer import Timer from recommenders.datasets import movielens from recommenders.utils.notebook_utils import is_jupyter from recommenders.datasets.spark_splitters import spark_random_split from recommenders.evaluation.spark_evaluation import SparkRatingEvaluation, SparkRankingEvaluation, SparkDiversityEvaluation from recommenders.utils.spark_utils import start_or_get_spark from pyspark.sql.window import Window import pyspark.sql.functions as F import numpy as np import pandas as pd print("System version: {}".format(sys.version)) print("Spark version: {}".format(pyspark.__version__)) ###Output System version: 3.6.13 |Anaconda, Inc.| (default, Jun 4 2021, 14:25:59) [GCC 7.5.0] Spark version: 2.4.8 ###Markdown Set the default parameters. ###Code # top k items to recommend TOP_K = 10 # Select MovieLens data size: 100k, 1m, 10m, or 20m MOVIELENS_DATA_SIZE = '100k' # user, item column names COL_USER="UserId" COL_ITEM="MovieId" COL_RATING="Rating" ###Output _____no_output_____ ###Markdown 1. Set up Spark contextThe following settings work well for debugging locally on VM - change when running on a cluster. We set up a giant single executor with many threads and specify memory cap. ###Code # the following settings work well for debugging locally on VM - change when running on a cluster # set up a giant single executor with many threads and specify memory cap spark = start_or_get_spark("ALS PySpark", memory="16g") spark.conf.set("spark.sql.crossJoin.enabled", "true") ###Output _____no_output_____ ###Markdown 2. Download the MovieLens dataset ###Code # Note: The DataFrame-based API for ALS currently only supports integers for user and item ids. 
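# The explicit schema below therefore reads UserId and MovieId as integers and the
# rating as a float; the Timestamp column is loaded as a long but is not needed for
# training the ALS model.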
schema = StructType( ( StructField(COL_USER, IntegerType()), StructField(COL_ITEM, IntegerType()), StructField(COL_RATING, FloatType()), StructField("Timestamp", LongType()), ) ) data = movielens.load_spark_df(spark, size=MOVIELENS_DATA_SIZE, schema=schema, title_col="title", genres_col="genres") data.show() ###Output 100%|██████████| 4.81k/4.81k [00:00<00:00, 17.1kKB/s] ###Markdown Split the data using the Spark random splitter provided in utilities ###Code train_df, test_df = spark_random_split(data.select(COL_USER, COL_ITEM, COL_RATING), ratio=0.75, seed=123) print ("N train_df", train_df.cache().count()) print ("N test_df", test_df.cache().count()) ###Output N train_df 75066 N test_df 24934 ###Markdown Get all possible user-item pairs Note: We assume that training data contains all users and all catalog items. ###Code users = train_df.select(COL_USER).distinct() items = train_df.select(COL_ITEM).distinct() user_item = users.crossJoin(items) ###Output _____no_output_____ ###Markdown 3. Train the ALS model on the training data, and get the top-k recommendations for our testing dataTo predict movie ratings, we use the rating data in the training set as users' explicit feedback. The hyperparameters used in building the model are referenced from [here](http://mymedialite.net/examples/datasets.html). We do not constrain the latent factors (`nonnegative = False`) in order to allow for both positive and negative preferences towards movies.Timing will vary depending on the machine being used to train. ###Code header = { "userCol": COL_USER, "itemCol": COL_ITEM, "ratingCol": COL_RATING, } als = ALS( rank=10, maxIter=15, implicitPrefs=False, regParam=0.05, coldStartStrategy='drop', nonnegative=False, seed=42, **header ) with Timer() as train_time: model = als.fit(train_df) print("Took {} seconds for training.".format(train_time.interval)) ###Output Took 4.012367556002573 seconds for training. ###Markdown In the movie recommendation use case, recommending movies that have been rated by the users does not make sense. Therefore, the rated movies are removed from the recommended items.In order to achieve this, we recommend all movies to all users, and then remove the user-movie pairs that exist in the training dataset. ###Code # Score all user-item pairs dfs_pred = model.transform(user_item) # Remove seen items. dfs_pred_exclude_train = dfs_pred.alias("pred").join( train_df.alias("train"), (dfs_pred[COL_USER] == train_df[COL_USER]) & (dfs_pred[COL_ITEM] == train_df[COL_ITEM]), how='outer' ) top_all = dfs_pred_exclude_train.filter(dfs_pred_exclude_train["train.Rating"].isNull()) \ .select('pred.' + COL_USER, 'pred.' + COL_ITEM, 'pred.' + "prediction") print(top_all.count()) window = Window.partitionBy(COL_USER).orderBy(F.col("prediction").desc()) top_k_reco = top_all.select("*", F.row_number().over(window).alias("rank")).filter(F.col("rank") <= TOP_K).drop("rank") print(top_k_reco.count()) ###Output 1464853 9430 ###Markdown 4. Random RecommenderWe define a recommender which randomly recommends unseen items to each user. 
###Code # random recommender window = Window.partitionBy(COL_USER).orderBy(F.rand()) # randomly generated recommendations for each user pred_df = ( train_df # join training data with all possible user-item pairs (seen in training) .join(user_item, on=[COL_USER, COL_ITEM], how="right" ) # get user-item pairs that were not seen in the training data .filter(F.col(COL_RATING).isNull()) # count items for each user (randomly sorting them) .withColumn("score", F.row_number().over(window)) # get the top k items per user .filter(F.col("score") <= TOP_K) .drop(COL_RATING) ) ###Output _____no_output_____ ###Markdown 5. ALS vs Random Recommenders Performance Comparison ###Code def get_ranking_results(ranking_eval): metrics = { "Precision@k": ranking_eval.precision_at_k(), "Recall@k": ranking_eval.recall_at_k(), "NDCG@k": ranking_eval.ndcg_at_k(), "Mean average precision": ranking_eval.map_at_k() } return metrics def get_diversity_results(diversity_eval): metrics = { "catalog_coverage":diversity_eval.catalog_coverage(), "distributional_coverage":diversity_eval.distributional_coverage(), "novelty": diversity_eval.novelty(), "diversity": diversity_eval.diversity(), "serendipity": diversity_eval.serendipity() } return metrics def generate_summary(data, algo, k, ranking_metrics, diversity_metrics): summary = {"Data": data, "Algo": algo, "K": k} if ranking_metrics is None: ranking_metrics = { "Precision@k": np.nan, "Recall@k": np.nan, "nDCG@k": np.nan, "MAP": np.nan, } summary.update(ranking_metrics) summary.update(diversity_metrics) return summary ###Output _____no_output_____ ###Markdown ALS Recommender Performance Results ###Code als_ranking_eval = SparkRankingEvaluation( test_df, top_all, k = TOP_K, col_user="UserId", col_item="MovieId", col_rating="Rating", col_prediction="prediction", relevancy_method="top_k" ) als_ranking_metrics = get_ranking_results(als_ranking_eval) als_diversity_eval = SparkDiversityEvaluation( train_df = train_df, reco_df = top_k_reco, col_user = COL_USER, col_item = COL_ITEM ) als_diversity_metrics = get_diversity_results(als_diversity_eval) als_results = generate_summary(MOVIELENS_DATA_SIZE, "als", TOP_K, als_ranking_metrics, als_diversity_metrics) ###Output _____no_output_____ ###Markdown Random Recommender Performance Results ###Code random_ranking_eval = SparkRankingEvaluation( test_df, pred_df, col_user=COL_USER, col_item=COL_ITEM, col_rating=COL_RATING, col_prediction="score", k=TOP_K, ) random_ranking_metrics = get_ranking_results(random_ranking_eval) random_diversity_eval = SparkDiversityEvaluation( train_df = train_df, reco_df = pred_df, col_user = COL_USER, col_item = COL_ITEM ) random_diversity_metrics = get_diversity_results(random_diversity_eval) random_results = generate_summary(MOVIELENS_DATA_SIZE, "random", TOP_K, random_ranking_metrics, random_diversity_metrics) ###Output _____no_output_____ ###Markdown Result Comparison ###Code cols = ["Data", "Algo", "K", "Precision@k", "Recall@k", "NDCG@k", "Mean average precision","catalog_coverage", "distributional_coverage","novelty", "diversity", "serendipity" ] df_results = pd.DataFrame(columns=cols) df_results.loc[1] = als_results df_results.loc[2] = random_results df_results ###Output _____no_output_____ ###Markdown ConclusionThe comparision results show that the ALS recommender outperforms the random recommender on ranking metrics (Precision@k, Recall@k, NDCG@k, and Mean average precision), while the random recommender outperforms ALS recommender on diversity metrics. 
This is because ALS is optimized for estimating the item rating as accurate as possible, therefore it performs well on accuracy metrics including rating and ranking metrics. As a side effect, the items being recommended tend to be popular items, which are the items mostly sold or viewed. It leaves the long-tail less popular items having less chance to get introduced to the users. This is the reason why ALS is not performing as well as a random recommender on diversity metrics. 6. Calculate diversity metrics using item feature vector based item-item similarityIn the above section we calculate diversity metrics using item co-occurrence count based item-item similarity. In the scenarios when item features are available, we may want to calculate item-item similarity based on item feature vectors. In this section, we show how to calculate diversity metrics using item feature vector based item-item similarity. ###Code # Get movie features "title" and "genres" movies = ( data.groupBy("MovieId", "title", "genres").count() .na.drop() # remove rows with null values .withColumn("genres", F.split(F.col("genres"), "\|")) # convert to array of genres .withColumn("title", F.regexp_replace(F.col("title"), "[\(),:^0-9]", "")) # remove year from title .drop("count") # remove unused columns ) # tokenize "title" column title_tokenizer = Tokenizer(inputCol="title", outputCol="title_words") tokenized_data = title_tokenizer.transform(movies) # remove stop words remover = StopWordsRemover(inputCol="title_words", outputCol="text") clean_data = remover.transform(tokenized_data).drop("title", "title_words") # convert text input into feature vectors # step 1: perform HashingTF on column "text" text_hasher = HashingTF(inputCol="text", outputCol="text_features", numFeatures=1024) hashed_data = text_hasher.transform(clean_data) # step 2: fit a CountVectorizerModel from column "genres". count_vectorizer = CountVectorizer(inputCol="genres", outputCol="genres_features") count_vectorizer_model = count_vectorizer.fit(hashed_data) vectorized_data = count_vectorizer_model.transform(hashed_data) # step 3: assemble features into a single vector assembler = VectorAssembler( inputCols=["text_features", "genres_features"], outputCol="features", ) feature_data = assembler.transform(vectorized_data).select("MovieId", "features") feature_data.show(10, False) ###Output +-------+---------------------------------------------+ |MovieId|features | +-------+---------------------------------------------+ |167 |(1043,[128,544,1025],[1.0,1.0,1.0]) | |1343 |(1043,[38,300,1024],[1.0,1.0,1.0]) | |1607 |(1043,[592,821,1024],[1.0,1.0,1.0]) | |966 |(1043,[389,502,1028],[1.0,1.0,1.0]) | |9 |(1043,[11,342,1014,1024],[1.0,1.0,1.0,1.0]) | |1230 |(1043,[597,740,902,1025],[1.0,1.0,1.0,1.0]) | |1118 |(1043,[702,1025],[1.0,1.0]) | |673 |(1043,[169,690,1027,1040],[1.0,1.0,1.0,1.0]) | |879 |(1043,[909,1026,1027,1034],[1.0,1.0,1.0,1.0])| |66 |(1043,[256,1025,1028],[1.0,1.0,1.0]) | +-------+---------------------------------------------+ only showing top 10 rows ###Markdown The *features* column is represented with a SparseVector object. For example, in the feature vector (1043,[128,544,1025],[1.0,1.0,1.0]), 1043 is the vector length, indicating the vector consisting of 1043 item features. The values at index positions 128,544,1025 are 1.0, and the values at other positions are all 0. 
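As a quick intuition for the feature-vector option used in the next cell (a toy sketch with made-up vectors, not part of the evaluator itself), the cosine similarity between two item feature vectors depends only on the features they share.
###Code
import numpy as np
from pyspark.ml.linalg import Vectors

# two toy feature vectors that share a single active feature (index 1025)
a = Vectors.sparse(1043, [128, 544, 1025], [1.0, 1.0, 1.0]).toArray()
b = Vectors.sparse(1043, [300, 1025], [1.0, 1.0]).toArray()

# cosine similarity: 1 / (sqrt(3) * sqrt(2)), roughly 0.41
cos_sim = float(a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b)))
###Output
_____no_output_____
###Markdown
The two evaluator cells below use exactly this similarity in place of co-occurrence counts.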
###Code als_eval = SparkDiversityEvaluation( train_df = train_df, reco_df = top_k_reco, item_feature_df = feature_data, item_sim_measure="item_feature_vector", col_user = COL_USER, col_item = COL_ITEM ) als_diversity=als_eval.diversity() als_serendipity=als_eval.serendipity() print(als_diversity) print(als_serendipity) random_eval = SparkDiversityEvaluation( train_df = train_df, reco_df = pred_df, item_feature_df = feature_data, item_sim_measure="item_feature_vector", col_user = COL_USER, col_item = COL_ITEM ) random_diversity=random_eval.diversity() random_serendipity=random_eval.serendipity() print(random_diversity) print(random_serendipity) ###Output 0.8978120851519519 0.8937850286817351 ###Markdown It's interesting that the value of diversity and serendipity changes when using different item-item similarity calculation approach, for both ALS algorithm and random recommender. The diversity and serendipity of random recommender are still higher than ALS algorithm. ReferencesThe metric definitions / formulations are based on the following references:- P. Castells, S. Vargas, and J. Wang, Novelty and diversity metrics for recommender systems: choice, discovery and relevance, ECIR 2011- G. Shani and A. Gunawardana, Evaluating recommendation systems, Recommender Systems Handbook pp. 257-297, 2010.- E. Yan, Serendipity: Accuracy’s unpopular best friend in recommender Systems, eugeneyan.com, April 2020- Y.C. Zhang, D.Ó. Séaghdha, D. Quercia and T. Jambor, Auralist: introducing serendipity into music recommendation, WSDM 2012 ###Code # cleanup spark instance spark.stop() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Apply Diversity Metrics -- Compare ALS and Random Recommenders on MovieLens (PySpark)We demonstrate how to evaluate a recommender using diversity metrics in addition to commonly used rating/ranking metrics.Diversity metrics include:- Coverage - The proportion of items that can be recommended. It includes two metrics: - (1) catalog_coverage, which measures the proportion of items that get recommended from the item catalog; - (2) distributional_coverage, which measures how unequally different items are recommended in the recommendations to all users.- Novelty - A more novel item indicates it is less popular, i.e., it gets recommended less frequently.- Diversity - The dissimilarity of items being recommended.- Serendipity - The "unusualness" or "surprise" of recommendations to a user.We compare the performance of two algorithms: ALS recommender and a random recommender. - Matrix factorization by [ALS](https://spark.apache.org/docs/latest/api/python/_modules/pyspark/ml/recommendation.htmlALS) (Alternating Least Squares) is a well known collaborative filtering algorithm. - We also define a random recommender which randomly recommends unseen items to each user. The comparision results show that ALS recommender outperforms random recommender on ranking metrics (Precision@k, Recall@k, NDCG@k, and Mean average precision), while random recommender outperforms ALS recommender on diversity metrics. Why ALS performs better on ranking metrics while worse on diversity metrics? ALS is optimized for estimating the item rating as accurate as possible, therefore it performs well on accuracy metrics including precision, recall, etc. Ranking metrics are built upoin these accuracy metrics. As a side effect, the items being recommended tend to be popular items, which are the items mostly sold or viewed. 
It leaves the long-tail less popular items having less chance to get introduced to the users. This is the reason why ALS is not as well performing as a random recommender on diversity metrics. We understand that there is usually a trade-off between one metric and the other. We should decide which set of metrics to optimize based on business scenarios. **Note**: This notebook requires a PySpark environment to run properly. Please follow the steps in [SETUP.md](https://github.com/Microsoft/Recommenders/blob/master/SETUP.mddependencies-setup) to install the PySpark environment. ###Code # set the environment path to find Recommenders import sys import pyspark from pyspark.ml.recommendation import ALS import pyspark.sql.functions as F from pyspark.sql import SparkSession from pyspark.sql.types import StructType, StructField from pyspark.sql.types import StringType, FloatType, IntegerType, LongType from reco_utils.common.timer import Timer from reco_utils.dataset import movielens from reco_utils.common.notebook_utils import is_jupyter from reco_utils.dataset.spark_splitters import spark_random_split from reco_utils.evaluation.spark_evaluation import SparkRatingEvaluation, SparkRankingEvaluation from reco_utils.common.spark_utils import start_or_get_spark from reco_utils.evaluation.spark_diversity_evaluation import DiversityEvaluation from pyspark.sql.window import Window import numpy as np import pandas as pd print("System version: {}".format(sys.version)) print("Spark version: {}".format(pyspark.__version__)) ###Output System version: 3.6.11 | packaged by conda-forge | (default, Nov 27 2020, 18:57:37) [GCC 9.3.0] Spark version: 2.4.5 ###Markdown Set the default parameters. ###Code # top k items to recommend TOP_K = 10 # Select MovieLens data size: 100k, 1m, 10m, or 20m MOVIELENS_DATA_SIZE = '100k' # user, item column names COL_USER="UserId" COL_ITEM="MovieId" COL_RATING="Rating" ###Output _____no_output_____ ###Markdown Set up Spark contextThe following settings work well for debugging locally on VM - change when running on a cluster. We set up a giant single executor with many threads and specify memory cap. ###Code # the following settings work well for debugging locally on VM - change when running on a cluster # set up a giant single executor with many threads and specify memory cap spark = start_or_get_spark("ALS PySpark", memory="16g") spark.conf.set("spark.sql.crossJoin.enabled", "true") ###Output _____no_output_____ ###Markdown Download the MovieLens dataset ###Code # Note: The DataFrame-based API for ALS currently only supports integers for user and item ids. schema = StructType( ( StructField(COL_USER, IntegerType()), StructField(COL_ITEM, IntegerType()), StructField(COL_RATING, FloatType()), StructField("Timestamp", LongType()), ) ) data = movielens.load_spark_df(spark, size=MOVIELENS_DATA_SIZE, schema=schema) data.show() ###Output 100%|██████████| 4.81k/4.81k [00:00<00:00, 17.5kKB/s] ###Markdown Split the data using the Spark random splitter provided in utilities ###Code train, test = spark_random_split(data, ratio=0.75, seed=123) print ("N train", train.cache().count()) print ("N test", test.cache().count()) ###Output N train 75193 N test 24807 ###Markdown Get all possible user-item pairs Note: We have the assumption that training data contains all users and all catalog items. 
###Code users = train.select(COL_USER).distinct() items = train.select(COL_ITEM).distinct() user_item = users.crossJoin(items) ###Output _____no_output_____ ###Markdown Train the ALS model on the training data, and get the top-k recommendations for our testing dataTo predict movie ratings, we use the rating data in the training set as users' explicit feedback. The hyperparameters used in building the model are referenced from [here](http://mymedialite.net/examples/datasets.html). We do not constrain the latent factors (`nonnegative = False`) in order to allow for both positive and negative preferences towards movies.Timing will vary depending on the machine being used to train. ###Code header = { "userCol": COL_USER, "itemCol": COL_ITEM, "ratingCol": COL_RATING, } als = ALS( rank=10, maxIter=15, implicitPrefs=False, regParam=0.05, coldStartStrategy='drop', nonnegative=False, seed=42, **header ) with Timer() as train_time: model = als.fit(train) print("Took {} seconds for training.".format(train_time.interval)) ###Output Took 3.2652852770006575 seconds for training. ###Markdown In the movie recommendation use case, recommending movies that have been rated by the users does not make sense. Therefore, the rated movies are removed from the recommended items.In order to achieve this, we recommend all movies to all users, and then remove the user-movie pairs that exist in the training dataset. ###Code # Score all user-item pairs dfs_pred = model.transform(user_item) # Remove seen items. dfs_pred_exclude_train = dfs_pred.alias("pred").join( train.alias("train"), (dfs_pred[COL_USER] == train[COL_USER]) & (dfs_pred[COL_ITEM] == train[COL_ITEM]), how='outer' ) top_all = dfs_pred_exclude_train.filter(dfs_pred_exclude_train["train.Rating"].isNull()) \ .select('pred.' + COL_USER, 'pred.' + COL_ITEM, 'pred.' + "prediction") print(top_all.count()) window = Window.partitionBy(COL_USER).orderBy(F.col("prediction").desc()) top_k_reco = top_all.select("*", F.row_number().over(window).alias("rank")).filter(F.col("rank") <= 10).drop("rank") print(top_k_reco.count()) ###Output 1477928 9430 ###Markdown Random RecommenderWe define a random recommender which randomly recommends unseen items to each user. ###Code train_df = train.select(COL_USER, COL_ITEM, COL_RATING) # random recommender window = Window.partitionBy(COL_USER).orderBy(F.rand()) # randomly generated recommendations for each user pred_df = ( train_df # join training data with all possible user-item pairs (seen in training) .join(user_item, on=[COL_USER, COL_ITEM], how="right" ) # get user-item pairs that were not seen in the training data .filter(F.col(COL_RATING).isNull()) # count items for each user (randomly sorting them) .withColumn("score", F.row_number().over(window)) # get the top k items per user .filter(F.col("score") <= TOP_K) .drop(COL_RATING) ) ###Output _____no_output_____ ###Markdown 5. 
ALS vs Random Recommenders Performance Comparison ###Code def get_ranking_results(ranking_eval): metrics = { "Precision@k": ranking_eval.precision_at_k(), "Recall@k": ranking_eval.recall_at_k(), "NDCG@k": ranking_eval.ndcg_at_k(), "Mean average precision": ranking_eval.map_at_k() } return metrics def get_diversity_results(diversity_eval): metrics = { "catalog_coverage":diversity_eval.catalog_coverage(), "distributional_coverage":diversity_eval.distributional_coverage(), "novelty": diversity_eval.novelty().first()[0], "diversity": diversity_eval.diversity().first()[0], "serendipity": diversity_eval.serendipity().first()[0] } return metrics def generate_summary(data, algo, k, ranking_metrics, diversity_metrics): summary = {"Data": data, "Algo": algo, "K": k} if ranking_metrics is None: ranking_metrics = { "Precision@k": np.nan, "Recall@k": np.nan, "nDCG@k": np.nan, "MAP": np.nan, } summary.update(ranking_metrics) summary.update(diversity_metrics) return summary ###Output _____no_output_____ ###Markdown ALS Recommender Performance Results ###Code als_ranking_eval = SparkRankingEvaluation( test, top_all, k = TOP_K, col_user="UserId", col_item="MovieId", col_rating="Rating", col_prediction="prediction", relevancy_method="top_k" ) als_ranking_metrics = get_ranking_results(als_ranking_eval) als_diversity_eval = DiversityEvaluation( train_df = train_df, reco_df = top_k_reco, col_user="UserId", col_item="MovieId" ) als_diversity_metrics = get_diversity_results(als_diversity_eval) als_results = generate_summary(MOVIELENS_DATA_SIZE, "als", TOP_K, als_ranking_metrics, als_diversity_metrics) ###Output _____no_output_____ ###Markdown Random Recommender Performance Results ###Code random_ranking_eval = SparkRankingEvaluation( test, pred_df, col_user=COL_USER, col_item=COL_ITEM, col_rating=COL_RATING, col_prediction="score", k=TOP_K, ) random_ranking_metrics = get_ranking_results(random_ranking_eval) random_diversity_eval = DiversityEvaluation( train_df = train_df, reco_df = pred_df, col_user=COL_USER, col_item=COL_ITEM ) random_diversity_metrics = get_diversity_results(random_diversity_eval) random_results = generate_summary(MOVIELENS_DATA_SIZE, "random", TOP_K, random_ranking_metrics, random_diversity_metrics) ###Output _____no_output_____ ###Markdown Result Comparison ###Code cols = ["Data", "Algo", "K", "Precision@k", "Recall@k", "NDCG@k", "Mean average precision","catalog_coverage", "distributional_coverage","novelty", "diversity", "serendipity" ] df_results = pd.DataFrame(columns=cols) df_results.loc[1] = als_results df_results.loc[2] = random_results df_results ###Output _____no_output_____ ###Markdown ReferenceThe metric definitions/formulations are based on following reference with modification:- G. Shani and A. Gunawardana, Evaluating Recommendation Systems, Recommender Systems Handbook pp. 257-297, 2010.- Y.C. Zhang, D.Ó. Séaghdha, D. Quercia and T. Jambor, Auralist: introducing serendipity into music recommendation, WSDM 2012- P. Castells, S. Vargas, and J. Wang, Novelty and diversity metrics for recommender systems: choice, discovery and relevance, ECIR 2011- Eugene Yan, Serendipity: Accuracy’s unpopular best friend in Recommender Systems, towards data science, April 2020- N. Hurley and M. Zhang, Novelty and diversity in top-n recommendation--analysis and evaluation, ACM Transactions, 2011 ###Code # cleanup spark instance spark.stop() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. 
Apply Diversity Metrics -- Compare ALS and Random Recommenders on MovieLens (PySpark)In this notebook, we demonstrate how to evaluate a recommender using metrics other than commonly used rating/ranking metrics.Such metrics include:- Coverage - We use following two metrics defined by \[Shani and Gunawardana\]: - (1) catalog_coverage, which measures the proportion of items that get recommended from the item catalog; - (2) distributional_coverage, which measures how equally different items are recommended in the recommendations to all users.- Novelty - A more novel item indicates it is less popular, i.e. it gets recommended less frequently.We use the definition of novelty from \[Castells et al.\]- Diversity - The dissimilarity of items being recommended.We use a definition based on _intralist similarity_ by \[Zhang et al.]- Serendipity - The "unusualness" or "surprise" of recommendations to a user.We use a definition based on cosine similarity by \[Zhang et al.]We evaluate the results obtained with two approaches: using the ALS recommender algorithm vs. a baseline of random recommendations. - Matrix factorization by [ALS](https://spark.apache.org/docs/latest/api/python/_modules/pyspark/ml/recommendation.htmlALS) (Alternating Least Squares) is a well known collaborative filtering algorithm. - We also define a process which randomly recommends unseen items to each user. - We show two options to calculate item-item similarity: (1) based on item co-occurrence count; and (2) based on item feature vectors. The comparision results show that the ALS recommender outperforms the random recommender on ranking metrics (Precision@k, Recall@k, NDCG@k, and Mean average precision), while the random recommender outperforms ALS recommender on diversity metrics. This is because ALS is optimized for estimating the item rating as accurate as possible, therefore it performs well on accuracy metrics including rating and ranking metrics. As a side effect, the items being recommended tend to be popular items, which are the items mostly sold or viewed. It leaves the [long-tail items](https://github.com/microsoft/recommenders/blob/main/GLOSSARY.md) having less chance to get introduced to the users. This is the reason why ALS is not performing as well as a random recommender on diversity metrics. From the algorithmic point of view, items in the tail suffer from the cold-start problem, making them hard for recommendation systems to use. However, from the business point of view, oftentimes the items in the tail can be highly profitable, since, depending on supply, business can apply a higher margin to them. Recommendation systems that optimize metrics like novelty and diversity, can help to find users willing to get these long tail items. Usually there is a trade-off between one type of metric vs. another. One should decide which set of metrics to optimize based on business scenarios. 
**Coverage**We define _catalog coverage_ as the proportion of items showing in all users’ recommendations: $$\textrm{CatalogCoverage} = \frac{|N_r|}{|N_t|}$$where $N_r$ denotes the set of items in the recommendations (`reco_df` in the code below) and $N_t$ the set of items in the historical data (`train_df`)._Distributional coverage_ measures how equally different items are recommended to users when a particular recommender system is used.If $p(i|R)$ denotes the probability that item $i$ is observed among all recommendation lists, we define distributional coverage as$$\textrm{DistributionalCoverage} = -\sum_{i \in N_t} p(i|R) \log_2 p(i)$$where $$p(i|R) = \frac{|M_r (i)|}{|\textrm{reco_df}|}$$and $M_r (i)$ denotes the users who are recommended item $i$. **Diversity**Diversity represents the variety present in a list of recommendations._Intra-List Similarity_ aggregates the pairwise similarity of all items in a set. A recommendation list with groups of very similar items will score a high intra-list similarity. Lower intra-list similarity indicates higher diversity.To measure similarity between any two items we use _cosine similarity_:$$\textrm{Cosine Similarity}(i,j)= \frac{|M_t^{l(i,j)}|} {\sqrt{|M_t^{l(i)}|} \sqrt{|M_t^{l(j)}|} }$$where $M_t^{l(i)}$ denotes the set of users who liked item $i$ and $M_t^{l(i,j)}$ the users who liked both $i$ and $j$.Intra-list similarity is then defined as $$\textrm{IL} = \frac{1}{|M|} \sum_{u \in M} \frac{1}{\binom{N_r(u)}{2}} \sum_{i,j \in N_r (u),\, i<j} \textrm{Cosine Similarity}(i,j)$$where $M$ is the set of users and $N_r(u)$ the set of recommendations for user $u$. Finally, diversity is defined as$$\textrm{diversity} = 1 - \textrm{IL}$$ **Novelty**The novelty of an item is inverse to its _popularity_. If $p(i)$ represents the probability that item $i$ is observed (or known, interacted with etc.) by users, then $$p(i) = \frac{|M_t (i)|} {|\textrm{train_df}|}$$where $M_t (i)$ is the set of users who have interacted with item $i$ in the historical data. The novelty of an item is then defined as$$\textrm{novelty}(i) = -\log_2 p(i)$$and the novelty of the recommendations across all users is defined as$$\textrm{novelty} = \sum_{i \in N_r} \frac{|M_r (i)|}{|\textrm{reco_df}|} \textrm{novelty}(i)$$ **Serendipity**Serendipity represents the “unusualness” or “surprise” of recommendations. Unlike novelty, serendipity encompasses the semantic content of items and can be imagined as the distance between recommended items and their expected contents (Zhang et al.) Lower cosine similarity indicates lower expectedness and higher serendipity.We define the expectedness of an unseen item $i$ for user $u$ as the average similarity between every already seen item $j$ in the historical data and $i$:$$\textrm{expectedness}(i|u) = \frac{1}{|N_t (u)|} \sum_{j \in N_t (u)} \textrm{Cosine Similarity}(i,j)$$The serendipity of item $i$ is (1 - expectedness) multiplied by _relevance_, where relevance indicates whether the item turns out to be liked by the user or not. For example, in a binary scenario, if an item in `reco_df` is liked (purchased, clicked) in `test_df`, its relevance equals one, otherwise it equals zero. Aggregating over all users and items, the overall serendipity is defined as$$\textrm{serendipity} = \frac{1}{|M|} \sum_{u \in M_r}\frac{1}{|N_r (u)|} \sum_{i \in N_r (u)} \big(1 - \textrm{expectedness}(i|u) \big) \, \textrm{relevance}(i)$$ **Note**: This notebook requires a PySpark environment to run properly. 
Please follow the steps in [SETUP.md](https://github.com/Microsoft/Recommenders/blob/master/SETUP.mddependencies-setup) to install the PySpark environment. ###Code # set the environment path to find Recommenders %load_ext autoreload %autoreload 2 import sys import pyspark from pyspark.ml.recommendation import ALS import pyspark.sql.functions as F from pyspark.sql.types import FloatType, IntegerType, LongType, StructType, StructField from pyspark.ml.feature import Tokenizer, StopWordsRemover from pyspark.ml.feature import HashingTF, CountVectorizer, VectorAssembler import warnings warnings.simplefilter(action='ignore', category=FutureWarning) from recommenders.utils.timer import Timer from recommenders.datasets import movielens from recommenders.datasets.spark_splitters import spark_random_split from recommenders.evaluation.spark_evaluation import SparkRankingEvaluation, SparkDiversityEvaluation from recommenders.utils.spark_utils import start_or_get_spark from pyspark.sql.window import Window import pyspark.sql.functions as F import numpy as np import pandas as pd print("System version: {}".format(sys.version)) print("Spark version: {}".format(pyspark.__version__)) ###Output System version: 3.8.0 (default, Nov 6 2019, 21:49:08) [GCC 7.3.0] Spark version: 3.2.0 ###Markdown Set the default parameters. ###Code # top k items to recommend TOP_K = 10 # Select MovieLens data size: 100k, 1m, 10m, or 20m MOVIELENS_DATA_SIZE = '100k' # user, item column names COL_USER="UserId" COL_ITEM="MovieId" COL_RATING="Rating" COL_TITLE="Title" COL_GENRE="Genre" ###Output _____no_output_____ ###Markdown 1. Set up Spark contextThe following settings work well for debugging locally on VM - change when running on a cluster. We set up a giant single executor with many threads and specify memory cap. ###Code # the following settings work well for debugging locally on VM - change when running on a cluster # set up a giant single executor with many threads and specify memory cap spark = start_or_get_spark("ALS PySpark", memory="16g") spark.conf.set("spark.sql.analyzer.failAmbiguousSelfJoin", "false") spark.conf.set("spark.sql.crossJoin.enabled", "true") ###Output _____no_output_____ ###Markdown 2. Download the MovieLens dataset ###Code # Note: The DataFrame-based API for ALS currently only supports integers for user and item ids. schema = StructType( ( StructField(COL_USER, IntegerType()), StructField(COL_ITEM, IntegerType()), StructField(COL_RATING, FloatType()), StructField("Timestamp", LongType()), ) ) data = movielens.load_spark_df(spark, size=MOVIELENS_DATA_SIZE, schema=schema, title_col=COL_TITLE, genres_col=COL_GENRE) data.show() ###Output 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.81k/4.81k [00:05<00:00, 862KB/s] ###Markdown Split the data using the Spark random splitter provided in utilities ###Code train_df, test_df = spark_random_split(data.select(COL_USER, COL_ITEM, COL_RATING), ratio=0.75, seed=123) print ("N train_df", train_df.cache().count()) print ("N test_df", test_df.cache().count()) ###Output ###Markdown Get all possible user-item pairs Note: We assume that training data contains all users and all catalog items. ###Code users = train_df.select(COL_USER).distinct() items = train_df.select(COL_ITEM).distinct() user_item = users.crossJoin(items) ###Output _____no_output_____ ###Markdown 3. 
Train the ALS model on the training data, and get the top-k recommendations for our testing dataTo predict movie ratings, we use the rating data in the training set as users' explicit feedback. The hyperparameters used in building the model are referenced from [here](http://mymedialite.net/examples/datasets.html). We do not constrain the latent factors (`nonnegative = False`) in order to allow for both positive and negative preferences towards movies.Timing will vary depending on the machine being used to train. ###Code header = { "userCol": COL_USER, "itemCol": COL_ITEM, "ratingCol": COL_RATING, } als = ALS( rank=10, maxIter=15, implicitPrefs=False, regParam=0.05, coldStartStrategy='drop', nonnegative=False, seed=42, **header ) with Timer() as train_time: model = als.fit(train_df) print("Took {} seconds for training.".format(train_time.interval)) ###Output ###Markdown In the movie recommendation use case, recommending movies that have been rated by the users does not make sense. Therefore, the rated movies are removed from the recommended items.In order to achieve this, we recommend all movies to all users, and then remove the user-movie pairs that exist in the training dataset. ###Code # Score all user-item pairs dfs_pred = model.transform(user_item) # Remove seen items. dfs_pred_exclude_train = dfs_pred.alias("pred").join( train_df.alias("train"), (dfs_pred[COL_USER] == train_df[COL_USER]) & (dfs_pred[COL_ITEM] == train_df[COL_ITEM]), how='outer' ) top_all = dfs_pred_exclude_train.filter(dfs_pred_exclude_train["train.Rating"].isNull()) \ .select('pred.' + COL_USER, 'pred.' + COL_ITEM, 'pred.' + "prediction") print(top_all.count()) window = Window.partitionBy(COL_USER).orderBy(F.col("prediction").desc()) top_k_reco = top_all.select("*", F.row_number().over(window).alias("rank")).filter(F.col("rank") <= TOP_K).drop("rank") print(top_k_reco.count()) ###Output ###Markdown 4. Random RecommenderWe define a recommender which randomly recommends unseen items to each user. ###Code # random recommender window = Window.partitionBy(COL_USER).orderBy(F.rand()) # randomly generated recommendations for each user pred_df = ( train_df # join training data with all possible user-item pairs (seen in training) .join(user_item, on=[COL_USER, COL_ITEM], how="right" ) # get user-item pairs that were not seen in the training data .filter(F.col(COL_RATING).isNull()) # count items for each user (randomly sorting them) .withColumn("score", F.row_number().over(window)) # get the top k items per user .filter(F.col("score") <= TOP_K) .drop(COL_RATING) ) ###Output _____no_output_____ ###Markdown 5. 
ALS vs Random Recommenders Performance Comparison ###Code def get_ranking_results(ranking_eval): metrics = { "Precision@k": ranking_eval.precision_at_k(), "Recall@k": ranking_eval.recall_at_k(), "NDCG@k": ranking_eval.ndcg_at_k(), "Mean average precision": ranking_eval.map_at_k() } return metrics def get_diversity_results(diversity_eval): metrics = { "catalog_coverage":diversity_eval.catalog_coverage(), "distributional_coverage":diversity_eval.distributional_coverage(), "novelty": diversity_eval.novelty(), "diversity": diversity_eval.diversity(), "serendipity": diversity_eval.serendipity() } return metrics def generate_summary(data, algo, k, ranking_metrics, diversity_metrics): summary = {"Data": data, "Algo": algo, "K": k} if ranking_metrics is None: ranking_metrics = { "Precision@k": np.nan, "Recall@k": np.nan, "nDCG@k": np.nan, "MAP": np.nan, } summary.update(ranking_metrics) summary.update(diversity_metrics) return summary ###Output _____no_output_____ ###Markdown ALS Recommender Performance Results ###Code als_ranking_eval = SparkRankingEvaluation( test_df, top_all, k = TOP_K, col_user=COL_USER, col_item=COL_ITEM, col_rating=COL_RATING, col_prediction="prediction", relevancy_method="top_k" ) als_ranking_metrics = get_ranking_results(als_ranking_eval) als_diversity_eval = SparkDiversityEvaluation( train_df = train_df, reco_df = top_k_reco, col_user = COL_USER, col_item = COL_ITEM ) als_diversity_metrics = get_diversity_results(als_diversity_eval) als_results = generate_summary(MOVIELENS_DATA_SIZE, "als", TOP_K, als_ranking_metrics, als_diversity_metrics) ###Output _____no_output_____ ###Markdown Random Recommender Performance Results ###Code random_ranking_eval = SparkRankingEvaluation( test_df, pred_df, col_user=COL_USER, col_item=COL_ITEM, col_rating=COL_RATING, col_prediction="score", k=TOP_K, ) random_ranking_metrics = get_ranking_results(random_ranking_eval) random_diversity_eval = SparkDiversityEvaluation( train_df = train_df, reco_df = pred_df, col_user = COL_USER, col_item = COL_ITEM ) random_diversity_metrics = get_diversity_results(random_diversity_eval) random_results = generate_summary(MOVIELENS_DATA_SIZE, "random", TOP_K, random_ranking_metrics, random_diversity_metrics) ###Output _____no_output_____ ###Markdown Result Comparison ###Code cols = ["Data", "Algo", "K", "Precision@k", "Recall@k", "NDCG@k", "Mean average precision","catalog_coverage", "distributional_coverage","novelty", "diversity", "serendipity" ] df_results = pd.DataFrame(columns=cols) df_results.loc[1] = als_results df_results.loc[2] = random_results df_results ###Output _____no_output_____ ###Markdown ConclusionThe comparision results show that the ALS recommender outperforms the random recommender on ranking metrics (Precision@k, Recall@k, NDCG@k, and Mean average precision), while the random recommender outperforms ALS recommender on diversity metrics. This is because ALS is optimized for estimating the item rating as accurate as possible, therefore it performs well on accuracy metrics including rating and ranking metrics. As a side effect, the items being recommended tend to be popular items, which are the items mostly sold or viewed. It leaves the long-tail less popular items having less chance to get introduced to the users. This is the reason why ALS is not performing as well as a random recommender on diversity metrics. 6. 
Calculate diversity metrics using item feature vector based item-item similarityIn the above section we calculate diversity metrics using item co-occurrence count based item-item similarity. In the scenarios when item features are available, we may want to calculate item-item similarity based on item feature vectors. In this section, we show how to calculate diversity metrics using item feature vector based item-item similarity. ###Code # Get movie features "title" and "genres" movies = ( data.groupBy(COL_ITEM, COL_TITLE, COL_GENRE).count() .na.drop() # remove rows with null values .withColumn(COL_GENRE, F.split(F.col(COL_GENRE), "\|")) # convert to array of genres .withColumn(COL_TITLE, F.regexp_replace(F.col(COL_TITLE), "[\(),:^0-9]", "")) # remove year from title .drop("count") # remove unused columns ) # tokenize "title" column title_tokenizer = Tokenizer(inputCol=COL_TITLE, outputCol="title_words") tokenized_data = title_tokenizer.transform(movies) # remove stop words remover = StopWordsRemover(inputCol="title_words", outputCol="text") clean_data = remover.transform(tokenized_data).drop(COL_TITLE, "title_words") # convert text input into feature vectors # step 1: perform HashingTF on column "text" text_hasher = HashingTF(inputCol="text", outputCol="text_features", numFeatures=1024) hashed_data = text_hasher.transform(clean_data) # step 2: fit a CountVectorizerModel from column "genres". count_vectorizer = CountVectorizer(inputCol=COL_GENRE, outputCol="genres_features") count_vectorizer_model = count_vectorizer.fit(hashed_data) vectorized_data = count_vectorizer_model.transform(hashed_data) # step 3: assemble features into a single vector assembler = VectorAssembler( inputCols=["text_features", "genres_features"], outputCol="features", ) feature_data = assembler.transform(vectorized_data).select(COL_ITEM, "features") feature_data.show(10, False) ###Output [Stage 1441:============================================> (172 + 2) / 200] ###Markdown The *features* column is represented with a SparseVector object. For example, in the feature vector (1043,[128,544,1025],[1.0,1.0,1.0]), 1043 is the vector length, indicating the vector consisting of 1043 item features. The values at index positions 128,544,1025 are 1.0, and the values at other positions are all 0. ###Code als_eval = SparkDiversityEvaluation( train_df = train_df, reco_df = top_k_reco, item_feature_df = feature_data, item_sim_measure="item_feature_vector", col_user = COL_USER, col_item = COL_ITEM ) als_diversity=als_eval.diversity() als_serendipity=als_eval.serendipity() print(als_diversity) print(als_serendipity) random_eval = SparkDiversityEvaluation( train_df = train_df, reco_df = pred_df, item_feature_df = feature_data, item_sim_measure="item_feature_vector", col_user = COL_USER, col_item = COL_ITEM ) random_diversity=random_eval.diversity() random_serendipity=random_eval.serendipity() print(random_diversity) print(random_serendipity) ###Output ###Markdown It's interesting that the value of diversity and serendipity changes when using different item-item similarity calculation approach, for both ALS algorithm and random recommender. The diversity and serendipity of random recommender are still higher than ALS algorithm. ReferencesThe metric definitions / formulations are based on the following references:- P. Castells, S. Vargas, and J. Wang, Novelty and diversity metrics for recommender systems: choice, discovery and relevance, ECIR 2011- G. Shani and A. 
Gunawardana, Evaluating recommendation systems, Recommender Systems Handbook, pp. 257-297, 2010.- E. Yan, Serendipity: Accuracy’s unpopular best friend in Recommender Systems, eugeneyan.com, April 2020- Y.C. Zhang, D.Ó. Séaghdha, D. Quercia and T. Jambor, Auralist: introducing serendipity into music recommendation, WSDM 2012 ###Code # cleanup spark instance spark.stop() ###Output _____no_output_____
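###Markdown As an appendix-style illustration (not part of the original notebook), the catalog coverage and novelty definitions above can be checked by hand on a tiny made-up interaction set using plain Python; no Spark session is needed, so this runs even after `spark.stop()`. ###Code
import math

# toy historical interactions and recommendations as (user, item) pairs, invented for this example
train = [("u1", "A"), ("u1", "B"), ("u2", "A"), ("u2", "C"), ("u3", "B")]
reco = [("u1", "C"), ("u2", "B"), ("u3", "A"), ("u3", "C")]

catalog = {i for _, i in train}                    # N_t = {A, B, C}
reco_items = {i for _, i in reco}                  # N_r = {A, B, C}
catalog_coverage = len(reco_items) / len(catalog)  # 3 / 3 = 1.0

# p(i) = |M_t(i)| / |train_df| and novelty(i) = -log2 p(i), as defined above
def p(item):
    return sum(1 for _, i in train if i == item) / len(train)

novelty = sum(
    (sum(1 for _, i in reco if i == item) / len(reco)) * -math.log2(p(item))
    for item in reco_items
)
print(catalog_coverage, round(novelty, 3))  # 1.0 1.822
###Output
_____no_output_____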
Practical Statistics/Regression/Logistic Regression/Fitting Logistic Regression.ipynb
###Markdown Fitting Logistic RegressionIn this first notebook, you will be fitting a logistic regression model to a dataset where we would like to predict if a transaction is fraud or not.To get started let's read in the libraries and take a quick look at the dataset. ###Code import numpy as np import pandas as pd import statsmodels.api as sm df = pd.read_csv('./fraud_dataset.csv') df.head() ###Output _____no_output_____ ###Markdown `1.` As you can see, there are two columns that need to be changed to dummy variables. Replace each of the current columns to the dummy version. Use the 1 for `weekday` and `True`, and 0 otherwise. Use the first quiz to answer a few questions about the dataset. ###Code df['weekday'] = pd.get_dummies(df['day'])['weekday'] df[['not_fraud','fraud']] = pd.get_dummies(df['fraud']) df = df.drop('not_fraud', axis=1) df.head() df.fraud.mean() df.query('fraud == 1')['duration'].mean(), df.query('fraud == 0')['duration'].mean() df.weekday.sum()/df.shape[0] ###Output _____no_output_____ ###Markdown `2.` Now that you have dummy variables, fit a logistic regression model to predict if a transaction is fraud using both day and duration. Don't forget an intercept! Use the second quiz below to assure you fit the model correctly. ###Code df['intercept'] = 1 model = sm.Logit(df['fraud'], df[['weekday', 'duration', 'intercept']]) result = model.fit() # https://stackoverflow.com/questions/49814258/statsmodel-attributeerror-module-scipy-stats-has-no-attribute-chisqprob from scipy import stats stats.chisqprob = lambda chisq, df: stats.chi2.sf(chisq, df) result.summary() print("weekday: {}, duration: {}".format(np.exp(2.5465), np.exp(-1.4637))) ###Output weekday: 12.762357271496972, duration: 0.2313785882117941
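###Markdown As an illustrative follow-up (not part of the original notebook), the hard-coded exponentiation above can be generalised using the fitted `result` object: `result.params` holds all coefficients, and `result.predict` scores new observations. The example transaction below (weekday=1, duration=100) is made up, and the column order must match the design matrix used to fit the model. ###Code
import numpy as np
import pandas as pd

# odds ratios for every coefficient at once (equivalent to the manual np.exp calls above)
print(np.exp(result.params))

# predicted probability of fraud for a hypothetical weekday transaction with duration 100
new_obs = pd.DataFrame({'weekday': [1], 'duration': [100], 'intercept': [1]})
print(result.predict(new_obs))
###Output
_____no_output_____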
d2l/tensorflow/chapter_convolutional-modern/densenet.ipynb
###Markdown [**Transition Layers**]Since each dense block will increase the number of channels, adding too many of them will lead to an excessively complex model. A *transition layer* is used to control the complexity of the model. It reduces the number of channels by using the $1\times 1$ convolutional layer and halves the height and width of the average pooling layer with a stride of 2, further reducing the complexity of the model. ###Code class TransitionBlock(tf.keras.layers.Layer): def __init__(self, num_channels, **kwargs): super(TransitionBlock, self).__init__(**kwargs) self.batch_norm = tf.keras.layers.BatchNormalization() self.relu = tf.keras.layers.ReLU() self.conv = tf.keras.layers.Conv2D(num_channels, kernel_size=1) self.avg_pool = tf.keras.layers.AvgPool2D(pool_size=2, strides=2) def call(self, x): x = self.batch_norm(x) x = self.relu(x) x = self.conv(x) return self.avg_pool(x) ###Output _____no_output_____ ###Markdown [**Apply a transition layer**] with 10 channels to the output of the dense block in the previous example. This reduces the number of output channels to 10, and halves the height and width. ###Code blk = TransitionBlock(10) blk(Y).shape ###Output _____no_output_____ ###Markdown [**DenseNet Model**]Next, we will construct a DenseNet model. DenseNet first uses the same single convolutional layer and maximum pooling layer as in ResNet. ###Code def block_1(): return tf.keras.Sequential([ tf.keras.layers.Conv2D(64, kernel_size=7, strides=2, padding='same'), tf.keras.layers.BatchNormalization(), tf.keras.layers.ReLU(), tf.keras.layers.MaxPool2D(pool_size=3, strides=2, padding='same')]) ###Output _____no_output_____ ###Markdown Then, similar to the four modules made up of residual blocks that ResNet uses,DenseNet uses four dense blocks.Similar to ResNet, we can set the number of convolutional layers used in each dense block. Here, we set it to 4, consistent with the ResNet-18 model in :numref:`sec_resnet`. Furthermore, we set the number of channels (i.e., growth rate) for the convolutional layers in the dense block to 32, so 128 channels will be added to each dense block.In ResNet, the height and width are reduced between each module by a residual block with a stride of 2. Here, we use the transition layer to halve the height and width and halve the number of channels. ###Code def block_2(): net = block_1() # `num_channels`: the current number of channels num_channels, growth_rate = 64, 32 num_convs_in_dense_blocks = [4, 4, 4, 4] for i, num_convs in enumerate(num_convs_in_dense_blocks): net.add(DenseBlock(num_convs, growth_rate)) # This is the number of output channels in the previous dense block num_channels += num_convs * growth_rate # A transition layer that halves the number of channels is added # between the dense blocks if i != len(num_convs_in_dense_blocks) - 1: num_channels //= 2 net.add(TransitionBlock(num_channels)) return net ###Output _____no_output_____ ###Markdown Similar to ResNet, a global pooling layer and a fully-connected layer are connected at the end to produce the output. ###Code def net(): net = block_2() net.add(tf.keras.layers.BatchNormalization()) net.add(tf.keras.layers.ReLU()) net.add(tf.keras.layers.GlobalAvgPool2D()) net.add(tf.keras.layers.Flatten()) net.add(tf.keras.layers.Dense(10)) return net ###Output _____no_output_____ ###Markdown [**Training**]Since we are using a deeper network here, in this section, we will reduce the input height and width from 224 to 96 to simplify the computation. 
###Code lr, num_epochs, batch_size = 0.1, 10, 256 train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=96) d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu()) ###Output loss 0.136, train acc 0.951, test acc 0.909 5727.2 examples/sec on /GPU:0 ###Markdown Densely Connected Networks (DenseNet)ResNet significantly changed the view of how to parametrize the functions in deep networks. *DenseNet* (dense convolutional network) is to some extent the logical extension of this :cite:`Huang.Liu.Van-Der-Maaten.ea.2017`.To understand how to arrive at it, let us take a small detour to mathematics. From ResNet to DenseNetRecall the Taylor expansion for functions. For the point $x = 0$ it can be written as$$f(x) = f(0) + f'(0) x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \ldots.$$The key point is that it decomposes a function into increasingly higher order terms. In a similar vein, ResNet decomposes functions into$$f(\mathbf{x}) = \mathbf{x} + g(\mathbf{x}).$$That is, ResNet decomposes $f$ into a simple linear term and a more complexnonlinear one.What if we want to capture (not necessarily add) information beyond two terms?One solution was DenseNet :cite:`Huang.Liu.Van-Der-Maaten.ea.2017`.![The main difference between ResNet (left) and DenseNet (right) in cross-layer connections: use of addition and use of concatenation. ](../img/densenet-block.svg):label:`fig_densenet_block`As shown in :numref:`fig_densenet_block`, the key difference between ResNet and DenseNet is that in the latter case outputs are *concatenated* (denoted by $[,]$) rather than added.As a result, we perform a mapping from $\mathbf{x}$ to its values after applying an increasingly complex sequence of functions:$$\mathbf{x} \to \left[\mathbf{x},f_1(\mathbf{x}),f_2([\mathbf{x}, f_1(\mathbf{x})]), f_3([\mathbf{x}, f_1(\mathbf{x}), f_2([\mathbf{x}, f_1(\mathbf{x})])]), \ldots\right].$$In the end, all these functions are combined in MLP to reduce the number of features again. In terms of implementation this is quite simple:rather than adding terms, we concatenate them. The name DenseNet arises from the fact that the dependency graph between variables becomes quite dense. The last layer of such a chain is densely connected to all previous layers. The dense connections are shown in :numref:`fig_densenet`.![Dense connections in DenseNet.](../img/densenet.svg):label:`fig_densenet`The main components that compose a DenseNet are *dense blocks* and *transition layers*. The former define how the inputs and outputs are concatenated, while the latter control the number of channels so that it is not too large. [**Dense Blocks**]DenseNet uses the modified "batch normalization, activation, and convolution"structure of ResNet (see the exercise in :numref:`sec_resnet`).First, we implement this convolution block structure. ###Code import tensorflow as tf from d2l import tensorflow as d2l class ConvBlock(tf.keras.layers.Layer): def __init__(self, num_channels): super(ConvBlock, self).__init__() self.bn = tf.keras.layers.BatchNormalization() self.relu = tf.keras.layers.ReLU() self.conv = tf.keras.layers.Conv2D( filters=num_channels, kernel_size=(3, 3), padding='same') self.listLayers = [self.bn, self.relu, self.conv] def call(self, x): y = x for layer in self.listLayers.layers: y = layer(y) y = tf.keras.layers.concatenate([x,y], axis=-1) return y ###Output _____no_output_____ ###Markdown A *dense block* consists of multiple convolution blocks, each using the same number of output channels. 
In the forward propagation, however, we concatenate the input and output of each convolution block on the channel dimension. ###Code class DenseBlock(tf.keras.layers.Layer): def __init__(self, num_convs, num_channels): super(DenseBlock, self).__init__() self.listLayers = [] for _ in range(num_convs): self.listLayers.append(ConvBlock(num_channels)) def call(self, x): for layer in self.listLayers.layers: x = layer(x) return x ###Output _____no_output_____ ###Markdown In the following example,we [**define a `DenseBlock` instance**] with 2 convolution blocks of 10 output channels.When using an input with 3 channels, we will get an output with $3+2\times 10=23$ channels. The number of convolution block channels controls the growth in the number of output channels relative to the number of input channels. This is also referred to as the *growth rate*. ###Code blk = DenseBlock(2, 10) X = tf.random.uniform((4, 8, 8, 3)) Y = blk(X) Y.shape ###Output _____no_output_____ ###Markdown 稠密连接网络(DenseNet)ResNet极大地改变了如何参数化深层网络中函数的观点。*稠密连接网络*(DenseNet) :cite:`Huang.Liu.Van-Der-Maaten.ea.2017`在某种程度上是ResNet的逻辑扩展。让我们先从数学上了解一下。 从ResNet到DenseNet回想一下任意函数的泰勒展开式(Taylor expansion),它把这个函数分解成越来越高阶的项。在$x$接近0时,$$f(x) = f(0) + f'(0) x + \frac{f''(0)}{2!} x^2 + \frac{f'''(0)}{3!} x^3 + \ldots.$$同样,ResNet将函数展开为$$f(\mathbf{x}) = \mathbf{x} + g(\mathbf{x}).$$也就是说,ResNet将$f$分解为两部分:一个简单的线性项和一个复杂的非线性项。那么再向前拓展一步,如果我们想将$f$拓展成超过两部分的信息呢?一种方案便是DenseNet。![ResNet(左)与 DenseNet(右)在跨层连接上的主要区别:使用相加和使用连结。](../img/densenet-block.svg):label:`fig_densenet_block`如 :numref:`fig_densenet_block`所示,ResNet和DenseNet的关键区别在于,DenseNet输出是*连接*(用图中的$[,]$表示)而不是如ResNet的简单相加。因此,在应用越来越复杂的函数序列后,我们执行从$\mathbf{x}$到其展开式的映射:$$\mathbf{x} \to \left[\mathbf{x},f_1(\mathbf{x}),f_2([\mathbf{x}, f_1(\mathbf{x})]), f_3([\mathbf{x}, f_1(\mathbf{x}), f_2([\mathbf{x}, f_1(\mathbf{x})])]), \ldots\right].$$最后,将这些展开式结合到多层感知机中,再次减少特征的数量。实现起来非常简单:我们不需要添加术语,而是将它们连接起来。DenseNet这个名字由变量之间的“稠密连接”而得来,最后一层与之前的所有层紧密相连。稠密连接如 :numref:`fig_densenet`所示。![稠密连接。](../img/densenet.svg):label:`fig_densenet`稠密网络主要由2部分构成:*稠密块*(dense block)和*过渡层*(transition layer)。前者定义如何连接输入和输出,而后者则控制通道数量,使其不会太复杂。 (**稠密块体**)DenseNet使用了ResNet改良版的“批量规范化、激活和卷积”架构(参见 :numref:`sec_resnet`中的练习)。我们首先实现一下这个架构。 ###Code import tensorflow as tf from d2l import tensorflow as d2l class ConvBlock(tf.keras.layers.Layer): def __init__(self, num_channels): super(ConvBlock, self).__init__() self.bn = tf.keras.layers.BatchNormalization() self.relu = tf.keras.layers.ReLU() self.conv = tf.keras.layers.Conv2D( filters=num_channels, kernel_size=(3, 3), padding='same') self.listLayers = [self.bn, self.relu, self.conv] def call(self, x): y = x for layer in self.listLayers.layers: y = layer(y) y = tf.keras.layers.concatenate([x,y], axis=-1) return y ###Output _____no_output_____ ###Markdown 一个*稠密块*由多个卷积块组成,每个卷积块使用相同数量的输出通道。然而,在前向传播中,我们将每个卷积块的输入和输出在通道维上连结。 ###Code class DenseBlock(tf.keras.layers.Layer): def __init__(self, num_convs, num_channels): super(DenseBlock, self).__init__() self.listLayers = [] for _ in range(num_convs): self.listLayers.append(ConvBlock(num_channels)) def call(self, x): for layer in self.listLayers.layers: x = layer(x) return x ###Output _____no_output_____ ###Markdown 在下面的例子中,我们[**定义一个**]有2个输出通道数为10的(**`DenseBlock`**)。使用通道数为3的输入时,我们会得到通道数为$3+2\times 10=23$的输出。卷积块的通道数控制了输出通道数相对于输入通道数的增长,因此也被称为*增长率*(growth rate)。 ###Code blk = DenseBlock(2, 10) X = tf.random.uniform((4, 8, 8, 3)) Y = blk(X) Y.shape ###Output _____no_output_____ ###Markdown [**过渡层**]由于每个稠密块都会带来通道数的增加,使用过多则会过于复杂化模型。而过渡层可以用来控制模型复杂度。它通过$1\times 
1$卷积层来减小通道数,并使用步幅为2的平均汇聚层减半高和宽,从而进一步降低模型复杂度。 ###Code class TransitionBlock(tf.keras.layers.Layer): def __init__(self, num_channels, **kwargs): super(TransitionBlock, self).__init__(**kwargs) self.batch_norm = tf.keras.layers.BatchNormalization() self.relu = tf.keras.layers.ReLU() self.conv = tf.keras.layers.Conv2D(num_channels, kernel_size=1) self.avg_pool = tf.keras.layers.AvgPool2D(pool_size=2, strides=2) def call(self, x): x = self.batch_norm(x) x = self.relu(x) x = self.conv(x) return self.avg_pool(x) ###Output _____no_output_____ ###Markdown 对上一个例子中稠密块的输出[**使用**]通道数为10的[**过渡层**]。此时输出的通道数减为10,高和宽均减半。 ###Code blk = TransitionBlock(10) blk(Y).shape ###Output _____no_output_____ ###Markdown [**DenseNet模型**]我们来构造DenseNet模型。DenseNet首先使用同ResNet一样的单卷积层和最大汇聚层。 ###Code def block_1(): return tf.keras.Sequential([ tf.keras.layers.Conv2D(64, kernel_size=7, strides=2, padding='same'), tf.keras.layers.BatchNormalization(), tf.keras.layers.ReLU(), tf.keras.layers.MaxPool2D(pool_size=3, strides=2, padding='same')]) ###Output _____no_output_____ ###Markdown 接下来,类似于ResNet使用的4个残差块,DenseNet使用的是4个稠密块。与ResNet类似,我们可以设置每个稠密块使用多少个卷积层。这里我们设成4,从而与 :numref:`sec_resnet`的ResNet-18保持一致。稠密块里的卷积层通道数(即增长率)设为32,所以每个稠密块将增加128个通道。在每个模块之间,ResNet通过步幅为2的残差块减小高和宽,DenseNet则使用过渡层来减半高和宽,并减半通道数。 ###Code def block_2(): net = block_1() # num_channels为当前的通道数 num_channels, growth_rate = 64, 32 num_convs_in_dense_blocks = [4, 4, 4, 4] for i, num_convs in enumerate(num_convs_in_dense_blocks): net.add(DenseBlock(num_convs, growth_rate)) # 上一个稠密块的输出通道数 num_channels += num_convs * growth_rate # 在稠密块之间添加一个转换层,使通道数量减半 if i != len(num_convs_in_dense_blocks) - 1: num_channels //= 2 net.add(TransitionBlock(num_channels)) return net ###Output _____no_output_____ ###Markdown 与ResNet类似,最后接上全局汇聚层和全连接层来输出结果。 ###Code def net(): net = block_2() net.add(tf.keras.layers.BatchNormalization()) net.add(tf.keras.layers.ReLU()) net.add(tf.keras.layers.GlobalAvgPool2D()) net.add(tf.keras.layers.Flatten()) net.add(tf.keras.layers.Dense(10)) return net ###Output _____no_output_____ ###Markdown [**训练模型**]由于这里使用了比较深的网络,本节里我们将输入高和宽从224降到96来简化计算。 ###Code lr, num_epochs, batch_size = 0.1, 10, 256 train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=96) d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu()) ###Output loss 0.136, train acc 0.950, test acc 0.884 6398.5 examples/sec on /GPU:0
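###Markdown As a small sanity check (not part of the original notebook, and relying on the `block_2` definition given earlier in this chapter), one can feed a dummy Fashion-MNIST-sized batch through the convolutional body and print the shape after every layer; the channel dimension after the dense and transition blocks should follow 192 → 96 → 224 → 112 → 240 → 120 → 248, matching the growth rate of 32 and the halving transition layers. ###Code
import tensorflow as tf

X = tf.random.uniform((1, 96, 96, 1))  # dummy input with the resized 96x96 resolution used above
body = block_2()                       # assumes block_2 (and the blocks it uses) are defined as above
for layer in body.layers:
    X = layer(X)
    print(layer.__class__.__name__, 'output shape:', X.shape)
###Output
_____no_output_____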
Model Experimentation/Model Experimentation - Random Forest Regressor.ipynb
###Markdown Model Experimentation1. Random Forest Regressor2. XGBoost Regressor3. LightGBM Regressor Import Libraries ###Code # dataframe packages import pandas as pd import numpy as np from skopt.space import Categorical, Integer, Real from skopt.utils import use_named_args from skopt import gp_minimize # statistical packages import math from scipy.stats import uniform from math import sqrt # modeling packages from sklearn.ensemble import RandomForestRegressor import lightgbm as lgb from lightgbm import LGBMRegressor from xgboost import XGBRegressor from sklearn.svm import SVR # evaluation packages from sklearn.metrics import r2_score,mean_squared_error, mean_squared_log_error from sklearn.model_selection import cross_val_score, RepeatedKFold, train_test_split, RandomizedSearchCV, GridSearchCV from sklearn.model_selection import TimeSeriesSplit # scaling packages from sklearn.preprocessing import StandardScaler, MinMaxScaler # visualisation packages import seaborn as sns import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Load Dataset ###Code directory = 'C:/Users/chery/Documents/NUS Y4 Sem 2 Modules/BT4222/Dataset/Primary Dataset/modelling_dataset.csv' model_df = pd.read_csv(directory) model_df.head() ###Output _____no_output_____ ###Markdown Train Test Split1. Training set 0.62. Validation set 0.23. Test set - 0.2The output variable will be Unit Price ($ PSM) ###Code X = model_df.drop(["Unit Price ($ PSM)"], axis=1) y = model_df['Unit Price ($ PSM)'] print('Shape of X is:', X.shape) print('Shape of Y is:', y.shape) ###Output Shape of X is: (54674, 52) Shape of Y is: (54674,) ###Markdown Train Test Split ###Code X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42, shuffle=True) print('Shape of X_train is:', X_train.shape) print('Shape of y_train is:', y_train.shape) print('Shape of X_test is:', X_test.shape) print('Shape of y_test is:', y_test.shape) ###Output Shape of X_train is: (38271, 52) Shape of y_train is: (38271,) Shape of X_test is: (16403, 52) Shape of y_test is: (16403,) ###Markdown Scaling ###Code all_features = list(X_train.columns) standardScale_vars = ['Area (SQM)', 'Floor Number', 'PPI', 'Average Cases Per Year', 'Nearest Primary School', 'nearest_station_distance'] minMax_vars = ['Remaining Lease'] remaining_features = [x for x in all_features if x not in standardScale_vars and x not in minMax_vars] s_scaler = StandardScaler() mm_scaler = MinMaxScaler() s_scaled = pd.DataFrame(s_scaler.fit_transform(X_train.loc[:, standardScale_vars].copy()), columns=standardScale_vars, index=X_train.index) mm_scaled = pd.DataFrame(mm_scaler.fit_transform(X_train.loc[:, minMax_vars].copy()), columns=minMax_vars, index=X_train.index) X_train = pd.concat([s_scaled, mm_scaled, X_train.loc[:, remaining_features].copy()], axis=1) X_train s_scaled_test = pd.DataFrame(s_scaler.transform(X_test.loc[:, standardScale_vars].copy()), columns=standardScale_vars, index=X_test.index) mm_scaled_test = pd.DataFrame(mm_scaler.fit_transform(X_test.loc[:, minMax_vars].copy()), columns=minMax_vars, index=X_test.index) X_test = pd.concat([s_scaled_test, mm_scaled_test, X_test.loc[:, remaining_features].copy()], axis=1) X_test ###Output _____no_output_____ ###Markdown Model Tuning Split training set into training and evaluation ###Code X_train, X_eval, y_train, y_eval = train_test_split(X_train, y_train, test_size=0.2, random_state=42, shuffle=True) print('Shape of X_train is:', X_train.shape) print('Shape of y_train is:', y_train.shape) print('Shape 
of X_eval is:', X_eval.shape) print('Shape of y_eval is:', y_eval.shape) ###Output Shape of X_train is: (30616, 52) Shape of y_train is: (30616,) Shape of X_eval is: (7655, 52) Shape of y_eval is: (7655,) ###Markdown Random Forest Regressor Select hyperparameters ###Code # Create the parameter grid: gbm_param_grid gbm_param_grid = {'max_depth':[i for i in range(30, 50)], 'min_samples_leaf':[i for i in range(1, 20)], 'min_samples_split':[i for i in range(2, 10)], 'min_impurity_decrease':[i for i in range(10, 20)], 'min_weight_fraction_leaf':[i/10.0 for i in range(0,5)], 'max_leaf_nodes':[i for i in range(10, 20)], } # Instantiate the base regressor: a random forest model = RandomForestRegressor(bootstrap=False, criterion='mse', max_features='sqrt', n_estimators=6500, random_state=42) # Perform randomized search over the parameter grid clf = RandomizedSearchCV(model, gbm_param_grid, random_state=0) # Fit the randomized search to the data search = clf.fit(X_train, y_train) # Print the parameters of the best estimator found by the search from pprint import pprint pprint(search.best_estimator_.get_params()) model = RandomForestRegressor(bootstrap=False, criterion='mse', max_depth=33, max_features='sqrt', min_samples_leaf=2, min_samples_split=2, n_estimators=6500, random_state=42) model.fit(X_train, y_train) predict_price_xgb = model.predict(X_eval) predict_price_xgb ###Output _____no_output_____ ###Markdown Both y_eval (the actual evaluation values) and predict_price_xgb (the predicted values) are reshaped in order to compare the model accuracy using the root mean squared error. ###Code y_eval_xgb = np.array(y_eval) y_eval_xgb = y_eval_xgb.reshape(-1,1) predict_price_xgb = predict_price_xgb.reshape(-1,1) ###Output _____no_output_____ ###Markdown Root Mean Squared Error (RMSE) ###Code eval_mse = mean_squared_error(y_eval_xgb, predict_price_xgb) rmse = sqrt(eval_mse) print('RMSE: %f' % rmse) ###Output RMSE: 850.815577 ###Markdown Let’s compare this to the mean value across the evaluation data: ###Code np.mean(y_eval_xgb) print('Size of error is approximately:', float(rmse/np.mean(y_eval_xgb)*100)) ###Output Size of error is approximately: 7.586371582836665 ###Markdown Mean Absolute Percentage Error ###Code def mean_absolute_percentage_error(y_true, y_pred): y_true, y_pred = np.array(y_true), np.array(y_pred) return np.mean(np.abs((y_true - y_pred) / y_true)) * 100 mape_xgb = mean_absolute_percentage_error(y_eval_xgb, predict_price_xgb) print('MAPE:', mape_xgb) ###Output MAPE: 4.762758307130743 ###Markdown Root Mean Squared Log Error ###Code eval_msle = mean_squared_log_error(y_eval_xgb, predict_price_xgb) rmsle = sqrt(eval_msle) print('RMSLE: %f' % rmsle) ###Output RMSLE: 0.065679 ###Markdown An RMSLE of about 0.066 means a predicted price typically differs from the true price by a factor of roughly exp(0.066) ≈ 1.07. Adjusted R^2 ###Code adj_r = 1 - (1-model.score(X_train, y_train))*(len(y_train)-1)/(len(y_train)-X_train.shape[1]-1) print('Adjusted R^2: %f' % adj_r) ###Output Adjusted R^2: 0.989879 ###Markdown Predict on Test Dataset ###Code predict_price = model.predict(X_test) y_test_xgb = np.array(y_test) y_test_xgb = y_test_xgb.reshape(-1,1) predict_test_price = predict_price.reshape(-1,1) ###Output _____no_output_____ ###Markdown RMSE ###Code test_mse = mean_squared_error(y_test_xgb, predict_test_price) rmse = sqrt(test_mse) print('RMSE: %f' % rmse) np.mean(y_test_xgb) print('Size of error is approximately:', float(rmse/np.mean(y_test_xgb)*100)) ###Output Size of error is approximately: 7.93971087057534 ###Markdown MAPE ###Code mape_test_xgb = mean_absolute_percentage_error(y_test_xgb, predict_test_price) print('MAPE:', 
mape_test_xgb) ###Output MAPE: 4.853895316690389 ###Markdown RMSLE ###Code test_msle = mean_squared_log_error(y_test_xgb, predict_test_price) rmsle = sqrt(test_msle) print('RMSLE: %f' % rmsle) # Feature Importance imp = pd.Series(data= model.feature_importances_, index= X_train.columns).sort_values(ascending=False) plt.figure(figsize=(10,12)) plt.title("Feature importance") ax = sns.barplot(y=imp.index, x=imp.values, palette="Blues_d", orient='h') ###Output _____no_output_____
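###Markdown To avoid repeating the same reshaping and formulas for the evaluation and test sets, the three metrics above can be bundled into one helper. This is an illustrative sketch rather than part of the original notebook; the `evaluate_regression` name is introduced here only for the example. ###Code
# illustrative helper that bundles the metrics computed separately above
import numpy as np
from math import sqrt
from sklearn.metrics import mean_squared_error, mean_squared_log_error

def evaluate_regression(y_true, y_pred):
    y_true = np.asarray(y_true).reshape(-1)
    y_pred = np.asarray(y_pred).reshape(-1)
    return {
        'RMSE': sqrt(mean_squared_error(y_true, y_pred)),
        'MAPE': np.mean(np.abs((y_true - y_pred) / y_true)) * 100,
        'RMSLE': sqrt(mean_squared_log_error(y_true, y_pred)),
    }

# e.g. evaluate_regression(y_eval, model.predict(X_eval)) or evaluate_regression(y_test, model.predict(X_test))
###Output
_____no_output_____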
Titanic_End_to_End/Titatnic.ipynb
###Markdown Exploring and Processing Data ###Code # imports import pandas as pd import numpy as np import os os.path.pardir pwd() ###Output _____no_output_____ ###Markdown Import Data ###Code train_df = pd.read_csv("train.csv") test_df = pd.read_csv("train.csv") # read the data with all default parameters #train_df = pd.read_csv(train_file_path, index_col='PassengerId') #test_df = pd.read_csv(test_file_path, index_col='PassengerId') # get the type type(train_df) ###Output _____no_output_____ ###Markdown Basic Structure ###Code # use .info() to get brief information about the dataframe train_df.info() test_df.info() test_df['Survived'] = -888 # Adding Survived with a default value df = pd.concat((train_df, test_df),axis=0) df.info() # use .head() to get top 5 rows df.head() # use .head(n) to get top-n rows df.head(10) # use .tail() to get last 5 rows df.tail() # column selection using dot df.Name # selection using column name as string df['Name'] # selecting multiple columns using a list of column name strings df[['Name','Age']] # indexing : use loc for label based indexing # all columns df.loc[5:10,] # selecting column range df.loc[5:10, 'Age' : 'Pclass'] # selecting discrete columns df.loc[5:10, ['Survived', 'Fare','Embarked']] # indexing : use iloc for position based indexing df.iloc[5:10, 3:8] # filter rows based on the condition male_passengers = df.loc[df.Sex == 'male',:] print('Number of male passengers : {0}'.format(len(male_passengers))) # use & or | operators to build complex logic male_passengers_first_class = df.loc[((df.Sex == 'male') & (df.Pclass == 1)),:] print('Number of male passengers in first class: {0}'.format(len(male_passengers_first_class))) ###Output Number of male passengers in first class: 179 ###Markdown Summary Statistics ###Code # use .describe() to get statistics for all numeric columns df.describe() train_df.info() # numerical feature # centrality measures print('Mean fare : {0}'.format(df.Fare.mean())) # mean print('Median fare : {0}'.format(df.Fare.median())) # median # dispersion measures print('Min fare : {0}'.format(df.Fare.min())) # minimum print('Max fare : {0}'.format(df.Fare.max())) # maximum print('Fare range : {0}'.format(df.Fare.max() - df.Fare.min())) # range print('25 percentile : {0}'.format(df.Fare.quantile(.25))) # 25 percentile print('50 percentile : {0}'.format(df.Fare.quantile(.5))) # 50 percentile print('75 percentile : {0}'.format(df.Fare.quantile(.75))) # 75 percentile print('Variance fare : {0}'.format(df.Fare.var())) # variance print('Standard deviation fare : {0}'.format(df.Fare.std())) # standard deviation %matplotlib inline # box-whisker plot df.Fare.plot(kind='box') # use .describe(include='all') to get statistics for all columns including non-numeric ones df.describe(include='all') # categorical column : Counts df.Sex.value_counts() # categorical column : Proprotions df.Sex.value_counts(normalize=True) #Relative frequency # apply on other columns df[df.Survived != -888].Survived.value_counts() # count : Passenger class df.Pclass.value_counts() # visualize counts #Categorical vs Numerical df.Pclass.value_counts().plot(kind='bar') # title : to set title, color : to set color, rot : to rotate labels df.Pclass.value_counts().plot(kind='bar',rot = 0, title='Class wise passenger count', color='c'); ###Output _____no_output_____ ###Markdown Distributions ###Code # use hist to create histogram df.Age.plot(kind='hist', title='histogram for Age', color='c'); #Continous and Numerical # use bins to add or remove bins df.Age.plot(kind='hist', 
title='histogram for Age', color='c', bins=20); # use kde for density plot #Kernel Density Estimation df.Age.plot(kind='kde', title='Density plot for Age', color='c'); #We cannot predict for continous value #Probability will be zero in that case #Density is probability per unit x #Density * Range = Probability = 1 # histogram for fare df.Fare.plot(kind='hist', title='histogram for Fare', color='c', bins=20); print('skewness for age : {0:.2f}'.format(df.Age.skew())) print('skewness for fare : {0:.2f}'.format(df.Fare.skew())) # use scatter plot for bi-variate distribution df.plot.scatter(x='Age', y='Fare', color='c', title='scatter plot : Age vs Fare'); # use alpha to set the transparency df.plot.scatter(x='Age', y='Fare', color='c', title='scatter plot : Age vs Fare', alpha=0.1); #Most of the passengesrs are between age 20 to 40 and paying around 3- rs df.plot.scatter(x='Pclass', y='Fare', color='c', title='Scatter plot : Passenger class vs Fare', alpha=0.15); ###Output _____no_output_____ ###Markdown Grouping and Aggregations ###Code # group by df.groupby('Sex').Age.median() # group by df.groupby(['Pclass']).Fare.median() df.groupby(['Pclass']).Age.median() df.groupby(['Pclass'])['Fare','Age'].median() df.groupby(['Pclass']).agg({'Fare' : 'mean', 'Age' : 'median'}) # more complicated aggregations aggregations = { 'Fare': { # work on the "Fare" column 'mean_Fare': 'mean', # get the mean fare 'median_Fare': 'median', # get median fare 'max_Fare': max, 'min_Fare': np.min }, 'Age': { # work on the "Age" column 'median_Age': 'median', # Find the max, call the result "max_date" 'min_Age': min, 'max_Age': max, 'range_Age': lambda x: max(x) - min(x) # Calculate the age range per group } } df.groupby(['Pclass']).agg(aggregations) df.groupby(['Pclass', 'Embarked']).Fare.median() ###Output _____no_output_____ ###Markdown Crosstabs ###Code # crosstab on Sex and Pclass pd.crosstab(df.Sex, df.Pclass) pd.crosstab(df.Sex, df.Pclass).plot(kind='bar'); ###Output _____no_output_____ ###Markdown Pivots ###Code # pivot table df.pivot_table(index='Sex',columns = 'Pclass',values='Age', aggfunc='mean') df.groupby(['Sex','Pclass']).Age.mean() df.groupby(['Sex','Pclass']).Age.mean().unstack() ###Output _____no_output_____ ###Markdown Data Munging : Working with missing values ###Code # use .info() to detect missing values (if any) df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1309 entries, 1 to 1309 Data columns (total 11 columns): Age 1046 non-null float64 Cabin 295 non-null object Embarked 1307 non-null object Fare 1308 non-null float64 Name 1309 non-null object Parch 1309 non-null int64 Pclass 1309 non-null int64 Sex 1309 non-null object SibSp 1309 non-null int64 Survived 1309 non-null int64 Ticket 1309 non-null object dtypes: float64(2), int64(4), object(5) memory usage: 162.7+ KB ###Markdown Feature : Embarked ###Code # extract rows with Embarked as Null df[df.Embarked.isnull()] # how many people embarked at different points df.Embarked.value_counts() # which embarked point has higher survival count pd.crosstab(df[df.Survived != -888].Survived, df[df.Survived != -888].Embarked) # impute the missing values with 'S' # df.loc[df.Embarked.isnull(), 'Embarked'] = 'S' # df.Embarked.fillna('S', inplace=True) # Option 2 : explore the fare of each class for each embarkment point df.groupby(['Pclass', 'Embarked']).Fare.median() # replace the missing values with 'C' df.Embarked.fillna('C', inplace=True) # check if any null value remaining df[df.Embarked.isnull()] # check info again df.info() 
###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1309 entries, 1 to 1309 Data columns (total 11 columns): Age 1046 non-null float64 Cabin 295 non-null object Embarked 1309 non-null object Fare 1308 non-null float64 Name 1309 non-null object Parch 1309 non-null int64 Pclass 1309 non-null int64 Sex 1309 non-null object SibSp 1309 non-null int64 Survived 1309 non-null int64 Ticket 1309 non-null object dtypes: float64(2), int64(4), object(5) memory usage: 162.7+ KB ###Markdown Feature : Fare ###Code df[df.Fare.isnull()] median_fare = df.loc[(df.Pclass == 3) & (df.Embarked == 'S'),'Fare'].median() print(median_fare) df.Fare.fillna(median_fare, inplace=True) # check info again df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1309 entries, 1 to 1309 Data columns (total 11 columns): Age 1046 non-null float64 Cabin 295 non-null object Embarked 1309 non-null object Fare 1309 non-null float64 Name 1309 non-null object Parch 1309 non-null int64 Pclass 1309 non-null int64 Sex 1309 non-null object SibSp 1309 non-null int64 Survived 1309 non-null int64 Ticket 1309 non-null object dtypes: float64(2), int64(4), object(5) memory usage: 162.7+ KB ###Markdown Feature : Age ###Code # set maximum number of rows to be displayed pd.options.display.max_rows = 15 # return null rows df[df.Age.isnull()] ###Output _____no_output_____ ###Markdown option 1 : replace all missing age with mean value ###Code df.Age.plot(kind='hist', bins=20, color='c'); # get mean df.Age.mean() ###Output _____no_output_____ ###Markdown issue : due to few high values of 70's and 80's pushing the overall mean ###Code # replace the missing values # df.Age.fillna(df.Age.mean(), inplace=True) ###Output _____no_output_____ ###Markdown option 2 : replace with median age of gender ###Code # median values df.groupby('Sex').Age.median() # visualize using boxplot df[df.Age.notnull()].boxplot('Age','Sex'); # replace : # age_sex_median = df.groupby('Sex').Age.transform('median') # df.Age.fillna(age_sex_median, inplace=True) ###Output _____no_output_____ ###Markdown option 3 : replace with median age of Pclass ###Code df[df.Age.notnull()].boxplot('Age','Pclass'); # replace : # pclass_age_median = df.groupby('Pclass').Age.transform('median') # df.Age.fillna(pclass_age_median , inplace=True) ###Output _____no_output_____ ###Markdown option 4 : replace with median age of title ###Code df.Name # Function to extract the title from the name def GetTitle(name): first_name_with_title = name.split(',')[1] title = first_name_with_title.split('.')[0] title = title.strip().lower() return title # use map function to apply the function on each Name value row i df.Name.map(lambda x : GetTitle(x)) # alternatively you can use : df.Name.map(GetTitle) df.Name.map(lambda x : GetTitle(x)).unique() # Function to extract the title from the name def GetTitle(name): title_group = {'mr' : 'Mr', 'mrs' : 'Mrs', 'miss' : 'Miss', 'master' : 'Master', 'don' : 'Sir', 'rev' : 'Sir', 'dr' : 'Officer', 'mme' : 'Mrs', 'ms' : 'Mrs', 'major' : 'Officer', 'lady' : 'Lady', 'sir' : 'Sir', 'mlle' : 'Miss', 'col' : 'Officer', 'capt' : 'Officer', 'the countess' : 'Lady', 'jonkheer' : 'Sir', 'dona' : 'Lady' } first_name_with_title = name.split(',')[1] title = first_name_with_title.split('.')[0] title = title.strip().lower() return title_group[title] # create Title feature df['Title'] = df.Name.map(lambda x : GetTitle(x)) # head df.head() # Box plot of Age with title df[df.Age.notnull()].boxplot('Age','Title'); # replace missing values title_age_median = 
df.groupby('Title').Age.transform('median') df.Age.fillna(title_age_median , inplace=True) # check info again df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1309 entries, 1 to 1309 Data columns (total 12 columns): Age 1309 non-null float64 Cabin 295 non-null object Embarked 1309 non-null object Fare 1309 non-null float64 Name 1309 non-null object Parch 1309 non-null int64 Pclass 1309 non-null int64 Sex 1309 non-null object SibSp 1309 non-null int64 Survived 1309 non-null int64 Ticket 1309 non-null object Title 1309 non-null object dtypes: float64(2), int64(4), object(6) memory usage: 172.9+ KB ###Markdown Working with outliers Age ###Code # use histogram to get understand the distribution df.Age.plot(kind='hist', bins=20, color='c'); df.loc[df.Age > 70] ###Output _____no_output_____ ###Markdown Fare ###Code # histogram for fare df.Fare.plot(kind='hist', title='histogram for Fare', bins=20, color='c'); # box plot to indentify outliers df.Fare.plot(kind='box'); # look into the outliers df.loc[df.Fare == df.Fare.max()] # Try some transformations to reduce the skewness LogFare = np.log(df.Fare + 1.0) # Adding 1 to accomodate zero fares : log(0) is not defined # Histogram of LogFare LogFare.plot(kind='hist', color='c', bins=20); # binning pd.qcut(df.Fare, 4) pd.qcut(df.Fare, 4, labels=['very_low','low','high','very_high']) # discretization pd.qcut(df.Fare, 4, labels=['very_low','low','high','very_high']).value_counts().plot(kind='bar', color='c', rot=0); # create fare bin feature df['Fare_Bin'] = pd.qcut(df.Fare, 4, labels=['very_low','low','high','very_high']) ###Output _____no_output_____ ###Markdown Feature Engineering Feature : Age State ( Adult or Child ) ###Code # AgeState based on Age df['AgeState'] = np.where(df['Age'] >= 18, 'Adult','Child') # AgeState Counts df['AgeState'].value_counts() # crosstab pd.crosstab(df[df.Survived != -888].Survived, df[df.Survived != -888].AgeState) ###Output _____no_output_____ ###Markdown Feature : FamilySize ###Code # Family : Adding Parents with Siblings df['FamilySize'] = df.Parch + df.SibSp + 1 # 1 for self # explore the family feature df['FamilySize'].plot(kind='hist', color='c'); # further explore this family with max family members df.loc[df.FamilySize == df.FamilySize.max(),['Name','Survived','FamilySize','Ticket']] pd.crosstab(df[df.Survived != -888].Survived, df[df.Survived != -888].FamilySize) ###Output _____no_output_____ ###Markdown Feature : IsMother ###Code # a lady aged more thana 18 who has Parch >0 and is married (not Miss) df['IsMother'] = np.where(((df.Sex == 'female') & (df.Parch > 0) & (df.Age > 18) & (df.Title != 'Miss')), 1, 0) # Crosstab with IsMother pd.crosstab(df[df.Survived != -888].Survived, df[df.Survived != -888].IsMother) ###Output _____no_output_____ ###Markdown Deck ###Code # explore Cabin values df.Cabin # use unique to get unique values for Cabin feature df.Cabin.unique() # look at the Cabin = T df.loc[df.Cabin == 'T'] # set the value to NaN df.loc[df.Cabin == 'T', 'Cabin'] = np.NaN # look at the unique values of Cabin again df.Cabin.unique() # extract first character of Cabin string to the deck def get_deck(cabin): return np.where(pd.notnull(cabin),str(cabin)[0].upper(),'Z') df['Deck'] = df['Cabin'].map(lambda x : get_deck(x)) # check counts df.Deck.value_counts() # use crosstab to look into survived feature cabin wise pd.crosstab(df[df.Survived != -888].Survived, df[df.Survived != -888].Deck) # info command df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1309 entries, 1 to 
1309 Data columns (total 17 columns): Age 1309 non-null float64 Cabin 294 non-null object Embarked 1309 non-null object Fare 1309 non-null float64 Name 1309 non-null object Parch 1309 non-null int64 Pclass 1309 non-null int64 Sex 1309 non-null object SibSp 1309 non-null int64 Survived 1309 non-null int64 Ticket 1309 non-null object Title 1309 non-null object Fare_Bin 1309 non-null category AgeState 1309 non-null object FamilySize 1309 non-null int64 IsMother 1309 non-null int32 Deck 1309 non-null object dtypes: category(1), float64(2), int32(1), int64(5), object(8) memory usage: 210.2+ KB ###Markdown Categorical Feature Encoding ###Code # sex df['IsMale'] = np.where(df.Sex == 'male', 1, 0) # columns Deck, Pclass, Title, AgeState df = pd.get_dummies(df,columns=['Deck', 'Pclass','Title', 'Fare_Bin', 'Embarked','AgeState']) print(df.info()) ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1309 entries, 1 to 1309 Data columns (total 33 columns): Survived 1309 non-null int64 Age 1309 non-null float64 Fare 1309 non-null float64 FamilySize 1309 non-null int64 IsMother 1309 non-null int32 IsMale 1309 non-null int32 Deck_A 1309 non-null uint8 Deck_B 1309 non-null uint8 Deck_C 1309 non-null uint8 Deck_D 1309 non-null uint8 Deck_E 1309 non-null uint8 Deck_F 1309 non-null uint8 Deck_G 1309 non-null uint8 Deck_Z 1309 non-null uint8 Pclass_1 1309 non-null uint8 Pclass_2 1309 non-null uint8 Pclass_3 1309 non-null uint8 Title_Lady 1309 non-null uint8 Title_Master 1309 non-null uint8 Title_Miss 1309 non-null uint8 Title_Mr 1309 non-null uint8 Title_Mrs 1309 non-null uint8 Title_Officer 1309 non-null uint8 Title_Sir 1309 non-null uint8 Fare_Bin_very_low 1309 non-null uint8 Fare_Bin_low 1309 non-null uint8 Fare_Bin_high 1309 non-null uint8 Fare_Bin_very_high 1309 non-null uint8 Embarked_C 1309 non-null uint8 Embarked_Q 1309 non-null uint8 Embarked_S 1309 non-null uint8 AgeState_Adult 1309 non-null uint8 AgeState_Child 1309 non-null uint8 dtypes: float64(2), int32(2), int64(2), uint8(27) memory usage: 135.9 KB None ###Markdown Drop and Reorder Columns ###Code # drop columns df.drop(['Cabin','Name','Ticket','Parch','SibSp','Sex'], axis=1, inplace=True) # reorder columns columns = [column for column in df.columns if column != 'Survived'] columns = ['Survived'] + columns df = df[columns] # check info again df.info() ###Output <class 'pandas.core.frame.DataFrame'> Int64Index: 1309 entries, 1 to 1309 Data columns (total 33 columns): Survived 1309 non-null int64 Age 1309 non-null float64 Fare 1309 non-null float64 FamilySize 1309 non-null int64 IsMother 1309 non-null int32 IsMale 1309 non-null int32 Deck_A 1309 non-null uint8 Deck_B 1309 non-null uint8 Deck_C 1309 non-null uint8 Deck_D 1309 non-null uint8 Deck_E 1309 non-null uint8 Deck_F 1309 non-null uint8 Deck_G 1309 non-null uint8 Deck_Z 1309 non-null uint8 Pclass_1 1309 non-null uint8 Pclass_2 1309 non-null uint8 Pclass_3 1309 non-null uint8 Title_Lady 1309 non-null uint8 Title_Master 1309 non-null uint8 Title_Miss 1309 non-null uint8 Title_Mr 1309 non-null uint8 Title_Mrs 1309 non-null uint8 Title_Officer 1309 non-null uint8 Title_Sir 1309 non-null uint8 Fare_Bin_very_low 1309 non-null uint8 Fare_Bin_low 1309 non-null uint8 Fare_Bin_high 1309 non-null uint8 Fare_Bin_very_high 1309 non-null uint8 Embarked_C 1309 non-null uint8 Embarked_Q 1309 non-null uint8 Embarked_S 1309 non-null uint8 AgeState_Adult 1309 non-null uint8 AgeState_Child 1309 non-null uint8 dtypes: float64(2), int32(2), int64(2), uint8(27) memory usage: 135.9 KB ###Markdown Save 
Processed Dataset ###Code processed_data_path = os.path.join(os.path.pardir,'data','processed') write_train_path = os.path.join(processed_data_path, 'train.csv') write_test_path = os.path.join(processed_data_path, 'test.csv') # train data df.loc[df.Survived != -888].to_csv(write_train_path) # test data columns = [column for column in df.columns if column != 'Survived'] df.loc[df.Survived == -888, columns].to_csv(write_test_path) ###Output _____no_output_____ ###Markdown Advanced visualization using MatPlotlib ###Code import matplotlib.pyplot as plt %matplotlib inline plt.hist(df.Age) plt.hist(df.Age, bins=20, color='c') plt.show() plt.hist(df.Age, bins=20, color='c') plt.title('Histogram : Age') plt.xlabel('Bins') plt.ylabel('Counts') plt.show() f , ax = plt.subplots() ax.hist(df.Age, bins=20, color='c') ax.set_title('Histogram : Age') ax.set_xlabel('Bins') ax.set_ylabel('Counts') plt.show() # Add subplots f , (ax1, ax2) = plt.subplots(1, 2 , figsize=(14,3)) ax1.hist(df.Fare, bins=20, color='c') ax1.set_title('Histogram : Fare') ax1.set_xlabel('Bins') ax1.set_ylabel('Counts') ax2.hist(df.Age, bins=20, color='tomato') ax2.set_title('Histogram : Age') ax2.set_xlabel('Bins') ax2.set_ylabel('Counts') plt.show() # Adding subplots f , ax_arr = plt.subplots(3 , 2 , figsize=(14,7)) # Plot 1 ax_arr[0,0].hist(df.Fare, bins=20, color='c') ax_arr[0,0].set_title('Histogram : Fare') ax_arr[0,0].set_xlabel('Bins') ax_arr[0,0].set_ylabel('Counts') # Plot 2 ax_arr[0,1].hist(df.Age, bins=20, color='c') ax_arr[0,1].set_title('Histogram : Age') ax_arr[0,1].set_xlabel('Bins') ax_arr[0,1].set_ylabel('Counts') # Plot 3 ax_arr[1,0].boxplot(df.Fare.values) ax_arr[1,0].set_title('Boxplot : Age') ax_arr[1,0].set_xlabel('Fare') ax_arr[1,0].set_ylabel('Fare') # Plot 4 ax_arr[1,1].boxplot(df.Age.values) ax_arr[1,1].set_title('Boxplot : Age') ax_arr[1,1].set_xlabel('Age') ax_arr[1,1].set_ylabel('Age') # Plot 5 ax_arr[2,0].scatter(df.Age, df.Fare, color='c', alpha=0.15) ax_arr[2,0].set_title('Scatter Plot : Age vs Fare') ax_arr[2,0].set_xlabel('Age') ax_arr[2,0].set_ylabel('Fare') ax_arr[2,1].axis('off') plt.tight_layout() plt.show() # family size family_survived = pd.crosstab(df[df.Survived != -888].FamilySize, df[df.Survived != -888].Survived) print(family_survived) # impact of family size on survival rate family_survived = df[df.Survived != -888].groupby(['FamilySize','Survived']).size().unstack() print(family_survived) family_survived.columns = ['Not Survived', 'Survived'] # Mix and Match f, ax = plt.subplots(figsize=(10,3)) ax.set_title('Impact of family size on survival rate') family_survived.plot(kind='bar', stacked=True, color=['tomato','c'], ax=ax, rot=0) plt.legend(bbox_to_anchor=(1.3,1.0)) plt.show() family_survived.sum(axis = 1) scaled_family_survived = family_survived.div(family_survived.sum(axis=1), axis=0) scaled_family_survived.columns = ['Not Survived', 'Survived'] # Mix and Match f, ax = plt.subplots(figsize=(10,3)) ax.set_title('Impact of family size on survival rate') scaled_family_survived.plot(kind='bar', stacked=True, color=['tomato','c'], ax=ax, rot=0) plt.legend(bbox_to_anchor=(1.3,1.0)) plt.show() ###Output _____no_output_____
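###Markdown A note on the data load at the top of this notebook: both `train_df` and `test_df` are read from `train.csv`, while the commented-out lines (and the 1-to-1309 `PassengerId` index that later shows up in `df.info()`) suggest the test rows were meant to come from `test.csv` with `PassengerId` as the index. A minimal sketch of that intended load, assuming both Kaggle Titanic files sit next to the notebook: ###Code
import pandas as pd

# Hypothetical file locations, mirroring the commented-out lines above;
# adjust the paths to wherever the Titanic csv files actually live.
train_df = pd.read_csv("train.csv", index_col="PassengerId")
test_df = pd.read_csv("test.csv", index_col="PassengerId")   # note: test.csv, not train.csv

# Same concatenation as above: tag the test rows, then stack the frames.
test_df["Survived"] = -888
df = pd.concat((train_df, test_df), axis=0)
df.info()
###Output _____no_output_____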
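###Markdown The nested-dictionary renaming passed to `.agg()` above works on older pandas but is rejected by recent releases (it raises a `SpecificationError` about nested renamers). A sketch of the same aggregation written as named aggregation, assuming `df` and the `Pclass`, `Fare`, `Age` columns defined above and pandas 0.25 or newer: ###Code
# Equivalent of the nested-dict aggregation above, spelled as named aggregation.
fare_age_stats = df.groupby("Pclass").agg(
    mean_Fare=("Fare", "mean"),
    median_Fare=("Fare", "median"),
    max_Fare=("Fare", "max"),
    min_Fare=("Fare", "min"),
    median_Age=("Age", "median"),
    min_Age=("Age", "min"),
    max_Age=("Age", "max"),
    range_Age=("Age", lambda x: x.max() - x.min()),  # age range per class
)
fare_age_stats
###Output _____no_output_____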
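###Markdown The `GetTitle` helper above splits each name by hand inside `.map()`. An alternative sketch using a regular expression with `Series.str.extract`, assuming names follow the `Last, Title. First` pattern used in this dataset; `Title_regex` and the shortened `title_group` mapping are illustrative names only, so the `Title` column built above is left untouched: ###Code
# Vectorized alternative to the GetTitle helper:
# capture the token that sits between the comma and the following period.
raw_title = df.Name.str.extract(r",\s*([^.]+)\.", expand=False).str.strip().str.lower()

# Same idea as the title_group dictionary above (abbreviated here);
# anything not in the mapping falls back to 'Other'.
title_group = {"mr": "Mr", "mrs": "Mrs", "miss": "Miss", "master": "Master",
               "mme": "Mrs", "ms": "Mrs", "mlle": "Miss"}
df["Title_regex"] = raw_title.map(title_group).fillna("Other")
df.Title_regex.value_counts()
###Output _____no_output_____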
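###Markdown The save cell above writes to `../data/processed`, which fails if that folder does not exist yet. A small sketch of the same write guarded by `os.makedirs`, reusing the paths and the `-888` marker from above: ###Code
import os

processed_data_path = os.path.join(os.path.pardir, "data", "processed")
# Create ../data/processed if it is not already there, then write as above.
os.makedirs(processed_data_path, exist_ok=True)

write_train_path = os.path.join(processed_data_path, "train.csv")
write_test_path = os.path.join(processed_data_path, "test.csv")

df.loc[df.Survived != -888].to_csv(write_train_path)
columns = [column for column in df.columns if column != "Survived"]
df.loc[df.Survived == -888, columns].to_csv(write_test_path)
###Output _____no_output_____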
lectures/.ipynb_checkpoints/4-checkpoint.ipynb
###Markdown set / frozenset ###Code a = {2, 4} b = {1, 2, 3, 4} s = set() a.issubset(b), a.issuperset(b), b.issuperset(a) a.union(b), b.difference(a) a = frozenset([1, 2]) d = {} d[a] = 'frozenset' # can use frozenset as a dic key d s = {1, 2, 3} s.add(4) s ###Output _____no_output_____ ###Markdown decorator example ###Code import functools def decorator(foo): @functools.wraps(foo) def decorated(*args, **kwargs): print('Decorated') foo(*args, **kwargs) return decorated @decorator def foo(v): print(v) ###Output _____no_output_____ ###Markdown required kwarg ###Code def foo(arg1, arg2, *, required_kwarg): pass ###Output _____no_output_____ ###Markdown calculator ###Code from collections import namedtuple Expression = namedtuple('Expression', ['arg1', 'operator', 'arg2']) COMMANDS = { '+': lambda x, y: x + y, '-': lambda x, y: x - y, '*': lambda x, y: x * y, '/': lambda x, y: x / y, } def handle(): user_input = input( 'Expession format: {arg1} {operator} {arg2}. \n' + 'Available operators: +, -, /, * \n\n') if user_input == 'q': return arg1, operator, arg2 = user_input.split() ex = Expression(arg1, operator, arg2) result = COMMANDS[ex.operator](float(ex.arg1), float(ex.arg2)) print('_'*20) print(result) print('_'*20) handle() handle() ###Output _____no_output_____ ###Markdown callback example ###Code def print_all(l): print(l) def print_elements(l): for i in l: print(i) def foo(x, callback): xs = [i for i in range(x)] callback(xs) return 'foo is done \n' print( foo(10, print_all) ) print( foo(10, print_elements) ) def add_one(*args, **kwargs): print(kwargs) # kwargs is dict return [arg + 1 for arg in args] # args is tutle add_one(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, key=1, value=2) ###Output _____no_output_____ ###Markdown ex. 1 ###Code from random import randint def print_field(field): print('-'*40) for line in field: print(line) print('-'*40) def ai_move(field, symbol): x, y = randint(0, 2), randint(0, 2) if field[x][y] != ' ': ai_move(field, symbol) else: field[x][y] = symbol def get_user_symbol(): symbol = input('Choose symbol {X or 0}: ') if symbol in ['0', 'X']: return symbol else: print('Invalid input. Enter again. \n') symbol = get_user_symbol() return symbol def get_user_move(): x, y = \ input('Enter in format {x (from 1 to 3)} {y (from 1 to 3)}: ').split() if x.isdigit() and y.isdigit() and 0 < int(x) < 4 and 0 < int(y) < 4: return int(x) - 1, int(y) - 1 else: print('Invalid input. Enter again. 
\n') get_user_input() def is_game_over(field, user_symbol, ai_symbol): PLAYERS = { user_symbol: 'Player', ai_symbol: 'Computer' } print(PLAYERS) def get_diags(): diag_one = [field[i][i] for i in range(len(field))] diag_two = [] l = len(field) - 1 i = l k = 0 while i >= 0: diag_two.append(field[i][k]) i -= 1 k += 1 return [diag_one, diag_two] def get_verticals(): return [[field[k][i] for k in range(len(field))] \ for i in range(len(field))] def get_horizontals(): return [[field[i][k] for k in range(len(field))] \ for i in range(len(field))] lists = get_diags() + get_verticals() + get_horizontals() sets = (set(l) for l in lists) for s in sets: if s == {'X'}: print('{} wins'.format(PLAYERS['X'])) return True elif s == {'0'}: print('{} wins'.format(PLAYERS['0'])) return True print('Go on..') return False def game(field): initial_field = [elem[:] for elem in field] user_symbol = get_user_symbol() ai_symbol = '0' if user_symbol == 'X' else 'X' game_over = False while not game_over: print_field(field) x, y = get_user_move() field[y][x] = user_symbol ai_move(field, ai_symbol) game_over = is_game_over(field, user_symbol, ai_symbol) else: print_field(field) print('-'*40) print('RESTARTING..') game(initial_field) FIELD = [[' ' for i in range(3)] for k in range(3)] game(FIELD) ###Output Choose symbol {X or 0}: 0 ---------------------------------------- [' ', ' ', ' '] [' ', ' ', ' '] [' ', ' ', ' '] ---------------------------------------- Enter in format {x (from 1 to 3)} {y (from 1 to 3)}: 1 1 {'0': 'Player', 'X': 'Computer'} Go on.. ---------------------------------------- ['0', ' ', ' '] [' ', ' ', ' '] [' ', ' ', 'X'] ---------------------------------------- Enter in format {x (from 1 to 3)} {y (from 1 to 3)}: 1 2 {'0': 'Player', 'X': 'Computer'} Go on.. ---------------------------------------- ['0', ' ', ' '] ['0', ' ', ' '] ['X', ' ', 'X'] ---------------------------------------- Enter in format {x (from 1 to 3)} {y (from 1 to 3)}: 2 1 {'0': 'Player', 'X': 'Computer'} Go on.. ---------------------------------------- ['0', '0', 'X'] ['0', ' ', ' '] ['X', ' ', 'X'] ---------------------------------------- Enter in format {x (from 1 to 3)} {y (from 1 to 3)}: 2 2 {'0': 'Player', 'X': 'Computer'} Computer wins ---------------------------------------- ['0', '0', 'X'] ['0', '0', ' '] ['X', 'X', 'X'] ---------------------------------------- ---------------------------------------- RESTARTING..
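###Markdown One detail in the tic-tac-toe code above: the invalid-input branch of `get_user_move` calls `get_user_input()`, which is not defined anywhere, and it never returns the retried move. A sketch of one way to keep the same prompt but loop until the input is valid: ###Code
def get_user_move():
    # Keep asking until the input parses and lands on the 3x3 board,
    # instead of recursing into an undefined get_user_input().
    while True:
        raw = input('Enter in format {x (from 1 to 3)} {y (from 1 to 3)}: ').split()
        if (len(raw) == 2 and all(part.isdigit() for part in raw)
                and 0 < int(raw[0]) < 4 and 0 < int(raw[1]) < 4):
            return int(raw[0]) - 1, int(raw[1]) - 1
        print('Invalid input. Enter again. \n')
###Output _____no_output_____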
DSA/math/isPowerOfThree.ipynb
###Markdown Given an integer, write a function to determine if it is a power of three.
Example 1: Input: 27 Output: true
Example 2: Input: 0 Output: false
Example 3: Input: 9 Output: true
Example 4: Input: 45 Output: false
Follow up:
- Could you do it without using any loop / recursion?
- [Math 1-liner, no log, with explanation](https://leetcode.com/problems/power-of-three/discuss/77977/Math-1-liner-no-log-with-explanation)
> The positive divisors of 3^19 are exactly the powers of 3 from 3^0 to 3^19. That's all powers of 3 in the possible range here (signed 32-bit integer). So just check whether the number is positive and whether it divides 3^19.

2 ^ 31 = 2147483648
3 ^ 19 = 1162261467 (the largest power of 3 that fits in a signed 32-bit integer) ###Code
class Solution:
    def isPowerOfThree(self, n: int) -> bool:
        # Chained comparison: (n > 0) and (3 ** 19 % n == 0).
        # 3 ** 19 is the largest power of 3 in the signed 32-bit range,
        # and its only divisors are powers of 3.
        return n > 0 == 3 ** 19 % n

# test
n = 27
Solution().isPowerOfThree(n)
###Output _____no_output_____
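###Markdown A quick sanity check of the one-liner against the four examples above, next to a plain loop-based version for comparison (the follow-up asks for a loop-free answer, so the loop here is only a reference). This assumes the `Solution` class from the cell above: ###Code
# Straightforward loop-based version, used only as a cross-check.
def is_power_of_three_loop(n: int) -> bool:
    if n < 1:
        return False
    while n % 3 == 0:
        n //= 3
    return n == 1

solver = Solution()
for value, expected in [(27, True), (0, False), (9, True), (45, False)]:
    assert solver.isPowerOfThree(value) == expected
    assert is_power_of_three_loop(value) == expected
print("all examples pass")
###Output _____no_output_____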
lei/P18 Improving Training Data for sentiment analysis.ipynb
###Markdown P18 Improving Training Data for sentiment analysis So now it is time to train on a new data set. Our goal is to do Twitter sentiment, so we're hoping for a data set that is a bit shorter per positive and negative statement. It just so happens that I have a data set of 5300+ positive and 5300+ negative movie reviews, which are much shorter. These should give us a bit more accuracy from the larger training set, as well as be more fitting for tweets from Twitter.I have hosted both files here, you can find them by going to the downloads for the short reviews. Save these files as positive.txt and negative.txt.Now, we can build our new data set in a very similar way as before. What needs to change?We need a new methodology for creating our "documents" variable, and then we also need a new way to create the "all_words" variable. No problem, really, here's how I did it:short_pos = open("short_reviews/positive.txt","r").read()short_neg = open("short_reviews/negative.txt","r").read()documents = []for r in short_pos.split('\n'): documents.append( (r, "pos") )for r in short_neg.split('\n'): documents.append( (r, "neg") )all_words = []short_pos_words = word_tokenize(short_pos)short_neg_words = word_tokenize(short_neg)for w in short_pos_words: all_words.append(w.lower())for w in short_neg_words: all_words.append(w.lower())all_words = nltk.FreqDist(all_words)Next, we also need to adjust our feature finding function, mainly tokenizing by word in the document, since we didn't have a nifty .words() feature for our new sample. I also went ahead and increased the most common words:word_features = list(all_words.keys())[:5000]def find_features(document): words = word_tokenize(document) features = {} for w in word_features: features[w] = (w in words) return features featuresets = [(find_features(rev), category) for (rev, category) in documents]random.shuffle(featuresets) ###Code import nltk import random from nltk.corpus import movie_reviews from nltk.classify.scikitlearn import SklearnClassifier import pickle from sklearn.naive_bayes import MultinomialNB, BernoulliNB from sklearn.linear_model import LogisticRegression, SGDClassifier from sklearn.svm import SVC, LinearSVC, NuSVC from nltk.classify import ClassifierI from statistics import mode from nltk.tokenize import word_tokenize class VoteClassifier(ClassifierI): def __init__(self, *classifiers): self._classifiers = classifiers def classify(self, features): votes = [] for c in self._classifiers: v = c.classify(features) votes.append(v) return mode(votes) def confidence(self, features): votes = [] for c in self._classifiers: v = c.classify(features) votes.append(v) choice_votes = votes.count(mode(votes)) conf = choice_votes / len(votes) return conf short_pos = open('short_reviews/positive.txt','r').read() short_neg = open('short_reviews/negative.txt','r').read() documents = [] for r in short_pos.split('\n'): documents.append( (r, "pos") ) for r in short_neg.split('\n'): documents.append( (r, "neg") ) all_words = [] short_pos_words = word_tokenize(short_pos) short_neg_words = word_tokenize(short_neg) for w in short_pos_words: all_words.append(w.lower()) for w in short_neg_words: all_words.append(w.lower()) all_words = nltk.FreqDist(all_words) word_features = list(all_words.keys())[:5000] def find_features(document): words = word_tokenize(document) features = {} for w in word_features: features[w] = (w in words) return features #print((find_features(movie_reviews.words('neg/cv000_29416.txt')))) featuresets = [(find_features(rev), category) for 
(rev, category) in documents] random.shuffle(featuresets) # positive data example: training_set = featuresets[:10000] testing_set = featuresets[10000:] ## ### negative data example: ##training_set = featuresets[100:] ##testing_set = featuresets[:100] classifier = nltk.NaiveBayesClassifier.train(training_set) print("Original Naive Bayes Algo accuracy percent:", (nltk.classify.accuracy(classifier, testing_set))*100) classifier.show_most_informative_features(15) MNB_classifier = SklearnClassifier(MultinomialNB()) MNB_classifier.train(training_set) print("MNB_classifier accuracy percent:", (nltk.classify.accuracy(MNB_classifier, testing_set))*100) BernoulliNB_classifier = SklearnClassifier(BernoulliNB()) BernoulliNB_classifier.train(training_set) print("BernoulliNB_classifier accuracy percent:", (nltk.classify.accuracy(BernoulliNB_classifier, testing_set))*100) LogisticRegression_classifier = SklearnClassifier(LogisticRegression()) LogisticRegression_classifier.train(training_set) print("LogisticRegression_classifier accuracy percent:", (nltk.classify.accuracy(LogisticRegression_classifier, testing_set))*100) SGDClassifier_classifier = SklearnClassifier(SGDClassifier()) SGDClassifier_classifier.train(training_set) print("SGDClassifier_classifier accuracy percent:", (nltk.classify.accuracy(SGDClassifier_classifier, testing_set))*100) ##SVC_classifier = SklearnClassifier(SVC()) ##SVC_classifier.train(training_set) ##print("SVC_classifier accuracy percent:", (nltk.classify.accuracy(SVC_classifier, testing_set))*100) LinearSVC_classifier = SklearnClassifier(LinearSVC()) LinearSVC_classifier.train(training_set) print("LinearSVC_classifier accuracy percent:", (nltk.classify.accuracy(LinearSVC_classifier, testing_set))*100) NuSVC_classifier = SklearnClassifier(NuSVC()) NuSVC_classifier.train(training_set) print("NuSVC_classifier accuracy percent:", (nltk.classify.accuracy(NuSVC_classifier, testing_set))*100) voted_classifier = VoteClassifier( NuSVC_classifier, LinearSVC_classifier, MNB_classifier, BernoulliNB_classifier, LogisticRegression_classifier) print("voted_classifier accuracy percent:", (nltk.classify.accuracy(voted_classifier, testing_set))*100) ###Output _____no_output_____
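###Markdown `pickle` is imported at the top of this notebook but never used. Since retraining all of these classifiers on every run is slow, here is a sketch of how a trained classifier and the `word_features` list could be saved and reloaded; the `.pickle` file names are made up, and `classifier`, `word_features`, `testing_set` and `nltk` are taken from the cells above: ###Code
import pickle

# Hypothetical file names; adjust as needed.
with open("originalnaivebayes5k.pickle", "wb") as f:
    pickle.dump(classifier, f)

with open("word_features5k.pickle", "wb") as f:
    pickle.dump(word_features, f)

# Later, reload instead of retraining:
with open("originalnaivebayes5k.pickle", "rb") as f:
    loaded_classifier = pickle.load(f)

print(nltk.classify.accuracy(loaded_classifier, testing_set) * 100)
###Output _____no_output_____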
tutorials/Qubit/Qubit.ipynb
###Markdown The QubitThis tutorial introduces you to one of the core concepts in quantum computing - the qubit, and its representation in mathematical notation and in Q code.If you aren't familiar with [complex arithmetic](../ComplexArithmetic/ComplexArithmetic.ipynb) and [linear algebra](../LinearAlgebra/LinearAlgebra.ipynb), we recommend that you complete those tutorials first.This tutorial covers the following topics:* The concept of a qubit* Superposition* Vector representation of qubit states* Dirac notation* `Qubit` data type in Q The Concept of a QubitThe basic building block of a classical computer is the bit - a single memory cell that is either in state $0$ or in state $1$. Similarly, the basic building block of a quantum computer is the quantum bit, or **qubit**. Like the classical bit, a qubit can be in state $0$ or in state $1$. Unlike the classical bit, however, the qubit isn't limited to just those two states - it may also be in a combination, or **superposition** of those states.> A common misconception about quantum computing is that a qubit is always in one state or the other, we just don't know which one until we "measure" it. That is not the case. A qubit in a superposition is in a state between the states $0$ and $1$. When a qubit is measured, it is forced entirely into one state or the other - in other words, measuring it actually changes its state. Matrix RepresentationThe state of a qubit is represented by a complex vector of size 2:$$\begin{bmatrix} \alpha \\ \beta \end{bmatrix}$$Here $\alpha$ represents how "close" the qubit is to the state $0$, and $\beta$ represents how "close" the qubit is to the state $1$. This vector is normalized: $|\alpha|^2 + |\beta|^2 = 1$.$\alpha$ and $\beta$ are known as **amplitudes** of states $0$ and $1$, respectively. Basis StatesA qubit in state $0$ would be represented by the following vector:$$\begin{bmatrix} 1 \\ 0 \end{bmatrix}$$Likewise, a qubit in state $1$ would be represented by this vector:$$\begin{bmatrix} 0 \\ 1 \end{bmatrix}$$Note that you can use scalar multiplication and vector addition to express any qubit state as a sum of these two vectors with certain weights (known as **linear combination**):$$\begin{bmatrix} \alpha \\ \beta \end{bmatrix} =\begin{bmatrix} \alpha \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ \beta \end{bmatrix} =\alpha \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \beta \cdot \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$Because of this, these two states are known as **basis states**.These two vectors have two additional properties. First, as mentioned before, both are **normalized**:$$\langle \begin{bmatrix} 1 \\ 0 \end{bmatrix} , \begin{bmatrix} 1 \\ 0 \end{bmatrix} \rangle =\langle \begin{bmatrix} 0 \\ 1 \end{bmatrix} , \begin{bmatrix} 0 \\ 1 \end{bmatrix} \rangle = 1$$Second, they are **orthogonal** to each other:$$\langle \begin{bmatrix} 1 \\ 0 \end{bmatrix} , \begin{bmatrix} 0 \\ 1 \end{bmatrix} \rangle =\langle \begin{bmatrix} 0 \\ 1 \end{bmatrix} , \begin{bmatrix} 1 \\ 0 \end{bmatrix} \rangle = 0$$> As a reminder, $\langle V , W \rangle$ is the [inner product](../LinearAlgebra/LinearAlgebra.ipynbInner-Product) of $V$ and $W$.This means that these vectors form an **orthonormal basis**. 
The basis of $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ is called the **computational basis**, also known as the **canonical basis**.> There exist other orthonormal bases, for example, the **Hadamard basis**, formed by the vectors>> $$\begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} \text{ and } \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix}$$>> You can check that these vectors are normalized, and orthogonal to each other. Any qubit state can be expressed as a linear combination of these vectors:>> $$\begin{bmatrix} \alpha \\ \beta \end{bmatrix} =\frac{\alpha + \beta}{\sqrt{2}} \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} +\frac{\alpha - \beta}{\sqrt{2}} \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix}$$>> The Hadamard basis is widely used in quantum computing, for example, in the [BB84 quantum key distribution protocol](https://en.wikipedia.org/wiki/BB84). Dirac NotationWriting out each vector when doing quantum calculations takes up a lot of space, and this will get even worse once we introduce quantum gates and multi-qubit systems. **Dirac notation** is a shorthand notation that helps solve this issue. In Dirac notation, a vector is denoted by a symbol called a **ket**. For example, a qubit in state $0$ is represented by the ket $|0\rangle$, and a qubit in state $1$ is represented by the ket $|1\rangle$: $|0\rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ $|1\rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$ These two kets represent basis states, so they can be used to represent any other state:$$\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \alpha|0\rangle + \beta|1\rangle$$Any symbol other than $0$ or $1$ within the ket can be used to represent arbitrary vectors, similar to how variables are used in algebra: $$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$$Several ket symbols have a generally accepted use, such as: $|+\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle + |1\rangle\big)$ $|-\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle - |1\rangle\big)$ $|i\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle + i|1\rangle\big)$ $|-i\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle - i|1\rangle\big)$ We will learn more about Dirac notation in the next tutorials, as we introduce quantum gates and multi-qubit systems. Q=== Qubit data typeIn Q, qubits are represented by the `Qubit` data type. On a physical quantum computer, it's impossible to directly access the state of a qubit, whether to read its exact state, or to set it to a desired state, and this data type reflects that. Instead, you can change the state of a qubit using [quantum gates](../SingleQubitGates/SingleQubitGates.ipynb), and extract information about the state of the system using measurements.That being said, when you run Q code on a quantum simulator instead of a physical quantum computer, you can use diagnostic functions that allow you to peek at the state of the quantum system. 
This is very useful both for learning and for debugging small Q programs.The qubits aren't an ordinary data type, so the variables of this type have to be declared and initialized ("allocated") a little differently:```c// This statement allocates a qubit, and binds it to the variable quse q = Qubit();// You can work with the qubit here// ...// The qubit is deallocated once it's not used any longer```> Before Q 0.15 the syntax for qubit allocation was different:```c// This statement allocates a qubit, and binds it to the variable qusing (q = Qubit()) { // You can work with the qubit here // ...}// The qubit is no longer allocated outside of the 'using' block```Freshly allocated qubits start out in state $|0\rangle$, and have to be returned to that state by the time they are released. If you attempt to release a qubit in any state other than $|0\rangle$, your program will throw a `ReleasedQubitsAreNotInZeroStateException`. We will see why it is important later, when we look at multi-qubit systems. Demo: Examining Qubit States in QWe will be using the function [`DumpMachine`](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.diagnostics.dumpmachine) to print the state of the quantum computer.The exact behavior of this function depends on the quantum simulator or processor you are using.On a full state simulator used in this demo, this function prints the information on each basis state, one basis state per row.This includes information about the amplitude of the state, the probability of measuring that state, and the phase of the state (more on that later).Each row has the following format:![DumpMachine header](./img/Dumpmachine-visualization-state-header.png)For example, the state $|0\rangle$ would be represented as follows:![0 state](./img/Dumpmachine-visualization-state-0.png)The state $\frac{1}{\sqrt{2}}|0\rangle - \frac{i}{\sqrt{2}}|1\rangle$ would be represented as so:![-i state](./img/Dumpmachine-visualization-state--i.png)> It is important to note that although we reason about quantum systems in terms of their state, Q does not have any representation of the quantum state in the language. Instead, state is an internal property of the quantum system, modified using gates. For more information, see [Q documentation on quantum states](https://docs.microsoft.com/azure/quantum/concepts-dirac-notationq-gate-sequences-equivalent-to-quantum-states).This demo shows how to allocate a qubit and examine its state in Q. This demo uses quantum gates to manipulate the state of the qubit - we will explain how they work in the next tutorial, so do not worry about them for now. Run the next cell using `Ctrl+Enter` (`⌘+Enter` on Mac), then run the cell after it to see the output. 
###Code
// Run this cell using Ctrl+Enter (⌘+Enter on Mac)
// Then run the next cell to see the output
// should run but gives random syntax error
open Microsoft.Quantum.Diagnostics;

operation QubitsDemo () : Unit {
    // This line allocates a qubit in state |0⟩
    use q = Qubit();
    Message("State |0⟩:");
    // This line prints out the state of the quantum computer
    // Since only one qubit is allocated, only its state is printed
    DumpMachine();

    // This line changes the qubit from state |0⟩ to state |1⟩
    X(q);
    Message("State |1⟩:");
    DumpMachine();

    // This line changes the qubit to state |-⟩ = (1/sqrt(2))(|0⟩ - |1⟩)
    // That is, this puts the qubit into a superposition
    // 1/sqrt(2) is approximately 0.707107
    H(q);
    Message("State |-⟩:");
    DumpMachine();

    // This line changes the qubit to state |-i⟩ = (1/sqrt(2))(|0⟩ - i|1⟩)
    S(q);
    Message("State |-i⟩:");
    DumpMachine();

    // This will put the qubit into an uneven superposition,
    // where the amplitudes of |0⟩ and |1⟩ have different moduli
    Rx(2.0, q);
    Ry(1.0, q);
    Message("Uneven superposition state:");
    DumpMachine();

    // This line returns the qubit to state |0⟩
    Reset(q);
}
%simulate QubitsDemo
###Output UsageError: Line magic function `%simulate` not found.
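###Markdown The markdown above describes each `DumpMachine` row in terms of an amplitude and a probability. A quick way to sanity-check those numbers without a quantum simulator is ordinary linear algebra; the sketch below uses Python/NumPy rather than Q#, and assumes nothing beyond the state definitions given above: it builds the vectors for $|0\rangle$, $|1\rangle$, $|-\rangle$ and $|-i\rangle$, confirms that the basis states are orthonormal, and checks that the squared moduli of the amplitudes sum to 1. ###Code
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ket_minus = (ket0 - ket1) / np.sqrt(2)          # |-⟩  = (|0⟩ - |1⟩)/sqrt(2)
ket_minus_i = (ket0 - 1j * ket1) / np.sqrt(2)   # |-i⟩ = (|0⟩ - i|1⟩)/sqrt(2)

# |0⟩ and |1⟩ form an orthonormal basis: unit norm, zero inner product.
print(np.vdot(ket0, ket0), np.vdot(ket1, ket1), np.vdot(ket0, ket1))

# Measurement probabilities are the squared moduli of the amplitudes,
# e.g. 0.5 / 0.5 for |-⟩ and |-i⟩, as described in the markdown above.
for name, state in [("|0>", ket0), ("|1>", ket1), ("|->", ket_minus), ("|-i>", ket_minus_i)]:
    probs = np.abs(state) ** 2
    print(name, probs, "sum =", probs.sum())
###Output _____no_output_____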
###Markdown The QubitThis tutorial introduces you to one of the core concepts in quantum computing - the qubit, and its representation in mathematical notation and in Q code.If you aren't familiar with [complex arithmetic](../ComplexArithmetic/ComplexArithmetic.ipynb) and [linear algebra](../LinearAlgebra/LinearAlgebra.ipynb), we recommend that you complete those tutorials first.This tutorial covers the following topics:* The concept of a qubit* Superposition* Vector representation of qubit states* Dirac notation* `Qubit` data type in Q The Concept of a QubitThe basic building block of a classical computer is the bit - a single memory cell that is either in state $0$ or in state $1$. Similarly, the basic building block of a quantum computer is the quantum bit, or **qubit**. Like the classical bit, a qubit can be in state $0$ or in state $1$. Unlike the classical bit, however, the qubit isn't limited to just those two states - it may also be in a combination, or **superposition** of those states.> A common misconception about quantum computing is that a qubit is always in one state or the other, we just don't know which one until we "measure" it. That is not the case. A qubit in a superposition is in a state between the states $0$ and $1$. When a qubit is measured, it is forced entirely into one state or the other - in other words, measuring it actually changes its state. Matrix RepresentationThe state of a qubit is represented by a complex vector of size 2:$$\begin{bmatrix} \alpha \\ \beta \end{bmatrix}$$Here $\alpha$ represents how "close" the qubit is to the state $0$, and $\beta$ represents how "close" the qubit is to the state $1$. This vector is normalized: $|\alpha|^2 + |\beta|^2 = 1$.$\alpha$ and $\beta$ are known as **amplitudes** of states $0$ and $1$, respectively. Basis StatesA qubit in state $0$ would be represented by the following vector:$$\begin{bmatrix} 1 \\ 0 \end{bmatrix}$$Likewise, a qubit in state $1$ would be represented by this vector:$$\begin{bmatrix} 0 \\ 1 \end{bmatrix}$$Note that you can use scalar multiplication and vector addition to express any qubit state as a sum of these two vectors with certain weights (known as **linear combination**):$$\begin{bmatrix} \alpha \\ \beta \end{bmatrix} =\begin{bmatrix} \alpha \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ \beta \end{bmatrix} =\alpha \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \beta \cdot \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$Because of this, these two states are known as **basis states**.These two vectors have two additional properties. First, as mentioned before, both are **normalized**:$$\langle \begin{bmatrix} 1 \\ 0 \end{bmatrix} , \begin{bmatrix} 1 \\ 0 \end{bmatrix} \rangle =\langle \begin{bmatrix} 0 \\ 1 \end{bmatrix} , \begin{bmatrix} 0 \\ 1 \end{bmatrix} \rangle = 1$$Second, they are **orthogonal** to each other:$$\langle \begin{bmatrix} 1 \\ 0 \end{bmatrix} , \begin{bmatrix} 0 \\ 1 \end{bmatrix} \rangle =\langle \begin{bmatrix} 0 \\ 1 \end{bmatrix} , \begin{bmatrix} 1 \\ 0 \end{bmatrix} \rangle = 0$$> As a reminder, $\langle V , W \rangle$ is the [inner product](../LinearAlgebra/LinearAlgebra.ipynbInner-Product) of $V$ and $W$.This means that these vectors form an **orthonormal basis**. 
The basis of $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ is called the **computational basis**, also known as the **canonical basis**.> There exist other orthonormal bases, for example, the **Hadamard basis**, formed by the vectors>> $$\begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} \text{ and } \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix}$$>> You can check that these vectors are normalized, and orthogonal to each other. Any qubit state can be expressed as a linear combination of these vectors:>> $$\begin{bmatrix} \alpha \\ \beta \end{bmatrix} =\frac{\alpha + \beta}{\sqrt{2}} \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} +\frac{\alpha - \beta}{\sqrt{2}} \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix}$$>> The Hadamard basis is widely used in quantum computing, for example, in the [BB84 quantum key distribution protocol](https://en.wikipedia.org/wiki/BB84). Dirac NotationWriting out each vector when doing quantum calculations takes up a lot of space, and this will get even worse once we introduce quantum gates and multi-qubit systems. **Dirac notation** is a shorthand notation that helps solve this issue. In Dirac notation, a vector is denoted by a symbol called a **ket**. For example, a qubit in state $0$ is represented by the ket $|0\rangle$, and a qubit in state $1$ is represented by the ket $|1\rangle$: $|0\rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ $|1\rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$ These two kets represent basis states, so they can be used to represent any other state:$$\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \alpha|0\rangle + \beta|1\rangle$$Any symbol other than $0$ or $1$ within the ket can be used to represent arbitrary vectors, similar to how variables are used in algebra: $$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$$Several ket symbols have a generally accepted use, such as: $|+\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle + |1\rangle\big)$ $|-\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle - |1\rangle\big)$ $|i\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle + i|1\rangle\big)$ $|-i\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle - i|1\rangle\big)$ We will learn more about Dirac notation in the next tutorials, as we introduce quantum gates and multi-qubit systems. Q=== Qubit data typeIn Q, qubits are represented by the `Qubit` data type. On a physical quantum computer, it's impossible to directly access the state of a qubit, whether to read its exact state, or to set it to a desired state, and this data type reflects that. Instead, you can change the state of a qubit using [quantum gates](../SingleQubitGates/SingleQubitGates.ipynb), and extract information about the state of the system using measurements.That being said, when you run Q code on a quantum simulator instead of a physical quantum computer, you can use diagnostic functions that allow you to peek at the state of the quantum system. This is very useful both for learning and for debugging small Q programs.The qubits aren't an ordinary data type, so the variables of this type have to be declared and initialized ("allocated") a little differently:```c// This statement allocates a qubit, and binds it to the variable qusing (q = Qubit()) { // You can work with the qubit here // ...}// The qubit is no longer allocated outside of the 'using' block```Freshly allocated qubits start out in state $|0\rangle$, and have to be returned to that state at the end of the block. 
If you attempt to release a qubit in any state other than $|0\rangle$, your program will throw a `ReleasedQubitsAreNotInZeroStateException`. We will see why it is important later, when we look at multi-qubit systems. Demo: Examining Qubit States in QWe will be using the function [`DumpMachine`](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.diagnostics.dumpmachine) to print the state of the quantum computer.The exact behavior of this function depends on the quantum simulator or processor you are using.On a full state simulator used in this demo, this function prints the information on each basis state, one basis state per row.This includes information about the amplitude of the state, the probability of measuring that state, and the phase of the state (more on that later).Each row has the following format:```state amplitude visual probability probability direction phase```For example, the state $|0\rangle$ would be represented as follows:```∣0❭: 1.000000 + 0.000000 i == ******************** [ 1.000000 ] --- [ 0.00000 rad ]∣1❭: 0.000000 + 0.000000 i == [ 0.000000 ]```The state $\frac{1}{\sqrt{2}}|0\rangle - \frac{i}{\sqrt{2}}|1\rangle$ would be represented as so:```∣0❭: 0.707107 + 0.000000 i == *********** [ 0.500000 ] --- [ 0.00000 rad ]∣1❭: 0.000000 + -0.707107 i == *********** [ 0.500000 ] ↓ [ -1.57080 rad ]```> It is important to note that although we reason about quantum systems in terms of their state, Q does not have any representation of the quantum state in the language. Instead, state is an internal property of the quantum system, modified using gates. For more information, see [Q documentation on quantum states](https://docs.microsoft.com/quantum/concepts/dirac-notationq-gate-sequences-equivalent-to-quantum-states).This demo shows how to allocate a qubit and examine its state in Q. This demo uses quantum gates to manipulate the state of the qubit - we will explain how they work in the next tutorial, so do not worry about them for now. Run the next cell using `Ctrl+Enter` (`⌘+Enter` on Mac), then run the cell after it to see the output. 
###Code
// Run this cell using Ctrl+Enter (⌘+Enter on Mac)
// Then run the next cell to see the output

open Microsoft.Quantum.Diagnostics;

operation Qubits_Demo () : Unit {
    let divider = "--------------------------------------------------------------------------------------------------";

    // This line allocates a qubit in state |0⟩
    using (q = Qubit()) {
        Message("State |0⟩:");
        // This line prints out the state of the quantum computer
        // Since only one qubit is allocated, only its state is printed
        DumpMachine();
        Message(divider);

        // This line changes the qubit from state |0⟩ to state |1⟩
        X(q);
        Message("State |1⟩:");
        DumpMachine();
        Message(divider);

        // This line changes the qubit to state |-⟩ = (1/sqrt(2))(|0⟩ - |1⟩)
        // That is, this puts the qubit into a superposition
        // 1/sqrt(2) is approximately 0.707107
        H(q);
        Message("State |-⟩:");
        DumpMachine();
        Message(divider);

        // This line changes the qubit to state |-i⟩ = (1/sqrt(2))(|0⟩ - i|1⟩)
        S(q);
        Message("State |-i⟩:");
        DumpMachine();
        Message(divider);

        // This will put the qubit into an uneven superposition,
        // where the amplitudes of |0⟩ and |1⟩ have different moduli
        Rx(2.0, q);
        Ry(1.0, q);
        Message("Uneven superposition state:");
        DumpMachine();

        // This line returns the qubit to state |0⟩
        Reset(q);
    }
}

%simulate Qubits_Demo

###Output
_____no_output_____

###Markdown
The Qubit

This tutorial introduces you to one of the core concepts in quantum computing - the qubit, and its representation in mathematical notation and in Q# code.

If you aren't familiar with [complex arithmetic](../ComplexArithmetic/ComplexArithmetic.ipynb) and [linear algebra](../LinearAlgebra/LinearAlgebra.ipynb), we recommend that you complete those tutorials first.

This tutorial covers the following topics:

* The concept of a qubit
* Superposition
* Vector representation of qubit states
* Dirac notation
* `Qubit` data type in Q#

The Concept of a Qubit

The basic building block of a classical computer is the bit - a single memory cell that is either in state $0$ or in state $1$. Similarly, the basic building block of a quantum computer is the quantum bit, or **qubit**. Like the classical bit, a qubit can be in state $0$ or in state $1$. Unlike the classical bit, however, the qubit isn't limited to just those two states - it may also be in a combination, or **superposition**, of those states.

> A common misconception about quantum computing is that a qubit is always in one state or the other, we just don't know which one until we "measure" it. That is not the case. A qubit in a superposition is in a state between the states $0$ and $1$. When a qubit is measured, it is forced entirely into one state or the other - in other words, measuring it actually changes its state.

Matrix Representation

The state of a qubit is represented by a complex vector of size 2:

$$\begin{bmatrix} \alpha \\ \beta \end{bmatrix}$$

Here $\alpha$ represents how "close" the qubit is to the state $0$, and $\beta$ represents how "close" the qubit is to the state $1$. This vector is normalized: $|\alpha|^2 + |\beta|^2 = 1$.

$\alpha$ and $\beta$ are known as **amplitudes** of states $0$ and $1$, respectively.
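To make the normalization condition concrete, here is a minimal sketch in plain Python/NumPy (not Q#, and not part of the original tutorial); the specific amplitudes used are an arbitrary illustrative choice.

```python
import numpy as np

# A minimal NumPy sketch (not Q#): represent a single-qubit state as a
# length-2 complex vector and check that it is normalized.
# The particular amplitudes below are just an illustrative example.
alpha, beta = 0.6, 0.8j          # amplitudes of state 0 and state 1
psi = np.array([alpha, beta])    # state vector [alpha, beta]

# Measurement probabilities are the squared moduli of the amplitudes
probs = np.abs(psi) ** 2
print("P(0) =", probs[0], " P(1) =", probs[1])

# Normalization condition: |alpha|^2 + |beta|^2 == 1
assert np.isclose(probs.sum(), 1.0)
```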
Basis States

A qubit in state $0$ would be represented by the following vector:

$$\begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

Likewise, a qubit in state $1$ would be represented by this vector:

$$\begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

Note that you can use scalar multiplication and vector addition to express any qubit state as a sum of these two vectors with certain weights (known as **linear combination**):

$$\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} \alpha \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ \beta \end{bmatrix} = \alpha \cdot \begin{bmatrix} 1 \\ 0 \end{bmatrix} + \beta \cdot \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

Because of this, these two states are known as **basis states**.

These two vectors have two additional properties. First, as mentioned before, both are **normalized**:

$$\langle \begin{bmatrix} 1 \\ 0 \end{bmatrix} , \begin{bmatrix} 1 \\ 0 \end{bmatrix} \rangle = \langle \begin{bmatrix} 0 \\ 1 \end{bmatrix} , \begin{bmatrix} 0 \\ 1 \end{bmatrix} \rangle = 1$$

Second, they are **orthogonal** to each other:

$$\langle \begin{bmatrix} 1 \\ 0 \end{bmatrix} , \begin{bmatrix} 0 \\ 1 \end{bmatrix} \rangle = \langle \begin{bmatrix} 0 \\ 1 \end{bmatrix} , \begin{bmatrix} 1 \\ 0 \end{bmatrix} \rangle = 0$$

> As a reminder, $\langle V , W \rangle$ is the [inner product](../LinearAlgebra/LinearAlgebra.ipynb#Inner-Product) of $V$ and $W$.

This means that these vectors form an **orthonormal basis**. The basis of $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ is called the **computational basis**, also known as the **canonical basis**.

> There exist other orthonormal bases, for example, the **Hadamard basis**, formed by the vectors
>
> $$\begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} \text{ and } \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix}$$
>
> You can check that these vectors are normalized, and orthogonal to each other. Any qubit state can be expressed as a linear combination of these vectors:
>
> $$\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \frac{\alpha + \beta}{\sqrt{2}} \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} + \frac{\alpha - \beta}{\sqrt{2}} \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix}$$
>
> The Hadamard basis is widely used in quantum computing, for example, in the [BB84 quantum key distribution protocol](https://en.wikipedia.org/wiki/BB84).

Dirac Notation

Writing out each vector when doing quantum calculations takes up a lot of space, and this will get even worse once we introduce quantum gates and multi-qubit systems. **Dirac notation** is a shorthand notation that helps solve this issue. In Dirac notation, a vector is denoted by a symbol called a **ket**.
For example, a qubit in state $0$ is represented by the ket $|0\rangle$, and a qubit in state $1$ is represented by the ket $|1\rangle$:

$|0\rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ $|1\rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$

These two kets represent basis states, so they can be used to represent any other state:

$$\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \alpha|0\rangle + \beta|1\rangle$$

Any symbol other than $0$ or $1$ within the ket can be used to represent arbitrary vectors, similar to how variables are used in algebra:

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$$

Several ket symbols have a generally accepted use, such as:

$|+\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle + |1\rangle\big)$ $|-\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle - |1\rangle\big)$ $|i\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle + i|1\rangle\big)$ $|-i\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle - i|1\rangle\big)$

We will learn more about Dirac notation in the next tutorials, as we introduce quantum gates and multi-qubit systems.

Q#
===

Qubit data type

In Q#, qubits are represented by the `Qubit` data type. On a physical quantum computer, it's impossible to directly access the state of a qubit, whether to read its exact state, or to set it to a desired state, and this data type reflects that. Instead, you can change the state of a qubit using [quantum gates](../SingleQubitGates/SingleQubitGates.ipynb), and extract information about the state of the system using measurements.

That being said, when you run Q# code on a quantum simulator instead of a physical quantum computer, you can use diagnostic functions that allow you to peek at the state of the quantum system. This is very useful both for learning and for debugging small Q# programs.

The qubits aren't an ordinary data type, so the variables of this type have to be declared and initialized ("allocated") a little differently:

```c#
// This statement allocates a qubit, and binds it to the variable q
use q = Qubit();
// You can work with the qubit here
// ...
// The qubit is deallocated once it's not used any longer
```

> Before Q# 0.15 the syntax for qubit allocation was different:

```c#
// This statement allocates a qubit, and binds it to the variable q
using (q = Qubit()) {
    // You can work with the qubit here
    // ...
}
// The qubit is no longer allocated outside of the 'using' block
```

Freshly allocated qubits start out in state $|0\rangle$, and have to be returned to that state by the time they are released. If you attempt to release a qubit in any state other than $|0\rangle$, your program will throw a `ReleasedQubitsAreNotInZeroStateException`. We will see why it is important later, when we look at multi-qubit systems.
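Before moving on to the demo, here is a short sketch in plain Python/NumPy (not Q#, and not part of the original tutorial) that numerically confirms the named kets listed above are normalized and pairwise orthogonal.

```python
import numpy as np

# A small NumPy check (illustrative only, not Q#) that the named kets
# |+>, |->, |i>, |-i> from the list above are all normalized, and that
# |+> and |-> (likewise |i> and |-i>) are orthogonal.
s = 1 / np.sqrt(2)
plus   = np.array([s,  s])
minus  = np.array([s, -s])
ket_i  = np.array([s,  1j * s])
ket_mi = np.array([s, -1j * s])

for name, v in [("+", plus), ("-", minus), ("i", ket_i), ("-i", ket_mi)]:
    # the norm should be 1 for every state
    print(f"|{name}>  norm = {np.linalg.norm(v):.6f}")

# inner products (np.vdot conjugates its first argument)
print("<+|->  =", np.vdot(plus, minus))
print("<i|-i> =", np.vdot(ket_i, ket_mi))
```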
Demo: Examining Qubit States in Q#

We will be using the function [`DumpMachine`](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.diagnostics.dumpmachine) to print the state of the quantum computer. The exact behavior of this function depends on the quantum simulator or processor you are using. On a full state simulator used in this demo, this function prints the information on each basis state, one basis state per row. This includes information about the amplitude of the state, the probability of measuring that state, and the phase of the state (more on that later).

Each row has the following format:

![DumpMachine header](./img/Dumpmachine-visualization-state-header.png)

For example, the state $|0\rangle$ would be represented as follows:

![0 state](./img/Dumpmachine-visualization-state-0.png)

The state $\frac{1}{\sqrt{2}}|0\rangle - \frac{i}{\sqrt{2}}|1\rangle$ would be represented as follows:

![-i state](./img/Dumpmachine-visualization-state--i.png)

> It is important to note that although we reason about quantum systems in terms of their state, Q# does not have any representation of the quantum state in the language. Instead, state is an internal property of the quantum system, modified using gates. For more information, see [Q# documentation on quantum states](https://docs.microsoft.com/azure/quantum/concepts-dirac-notation#q-gate-sequences-equivalent-to-quantum-states).

This demo shows how to allocate a qubit and examine its state in Q#. This demo uses quantum gates to manipulate the state of the qubit - we will explain how they work in the next tutorial, so do not worry about them for now. Run the next cell using `Ctrl+Enter` (`⌘+Enter` on Mac), then run the cell after it to see the output.

###Code
// Run this cell using Ctrl+Enter (⌘+Enter on Mac)
// Then run the next cell to see the output

open Microsoft.Quantum.Diagnostics;

operation QubitsDemo () : Unit {
    // This line allocates a qubit in state |0⟩
    use q = Qubit();
    Message("State |0⟩:");
    // This line prints out the state of the quantum computer
    // Since only one qubit is allocated, only its state is printed
    DumpMachine();

    // This line changes the qubit from state |0⟩ to state |1⟩
    X(q);
    Message("State |1⟩:");
    DumpMachine();

    // This line changes the qubit to state |-⟩ = (1/sqrt(2))(|0⟩ - |1⟩)
    // That is, this puts the qubit into a superposition
    // 1/sqrt(2) is approximately 0.707107
    H(q);
    Message("State |-⟩:");
    DumpMachine();

    // This line changes the qubit to state |-i⟩ = (1/sqrt(2))(|0⟩ - i|1⟩)
    S(q);
    Message("State |-i⟩:");
    DumpMachine();

    // This will put the qubit into an uneven superposition,
    // where the amplitudes of |0⟩ and |1⟩ have different moduli
    Rx(2.0, q);
    Ry(1.0, q);
    Message("Uneven superposition state:");
    DumpMachine();

    // This line returns the qubit to state |0⟩
    Reset(q);
}

%simulate QubitsDemo

###Output
UsageError: Line magic function `%simulate` not found.
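###Markdown
For readers following along without the IQ# kernel (note that the `%simulate` magic fails in the captured output above), here is a rough NumPy re-creation of the first part of the demo. The X, H, and S matrices are the standard single-qubit gate matrices; this is only an illustrative sketch, not the Q# simulator, and it omits the Rx/Ry step.

```python
import numpy as np

# A NumPy sketch (not Q#) mirroring the demo's gate sequence:
# |0> --X--> |1> --H--> |-> --S--> |-i>,
# printing amplitudes and probabilities much like DumpMachine does.
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

def dump(label, state):
    # print each basis state's amplitude and measurement probability
    print(label)
    for basis, amp in zip(("|0>", "|1>"), state):
        print(f"  {basis}: {amp.real:+.6f} {amp.imag:+.6f} i   prob = {abs(amp) ** 2:.6f}")

state = np.array([1, 0], dtype=complex)   # freshly "allocated" qubit: |0>
dump("State |0>:", state)

state = X @ state                          # |0> -> |1>
dump("State |1>:", state)

state = H @ state                          # |1> -> |->
dump("State |->:", state)

state = S @ state                          # |-> -> |-i>
dump("State |-i>:", state)
```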
src/notebooks/exp1.ipynb
###Markdown Data ###Code dss_nn = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.n_perm/inference/pred_so.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dss_wn = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.n_perm/inference/pred_so.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dss_nw = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.w_perm/inference/pred_so.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dss_ww = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.w_perm/inference/pred_so.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_nn = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.n_perm/inference/pred_to.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_wn = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.n_perm/inference/pred_to.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_nw = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.w_perm/inference/pred_to.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_ww = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.w_perm/inference/pred_to.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_nn_tr = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.n_perm/inference/pred_to_trainset.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_wn_tr = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.n_perm/inference/pred_to_trainset.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_nw_tr = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.w_perm/inference/pred_to_trainset.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_ww_tr = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.w_perm/inference/pred_to_trainset.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day for ds in [dss_nn, dss_nn, dss_nn, dss_nn, dst_nn, dst_nn, dst_nn, dst_nn]: ds.mod.attrs['units'] = r'$mm day^{-1}$' ds.obs.attrs['units'] = r'$mm day^{-1}$' ds.mod.attrs['long_name'] = 'ET' ds.obs.attrs['long_name'] = 'ET' ###Output _____no_output_____ ###Markdown Aggregate to monthly ###Code dst_nn_m = dst_nn.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() dst_wn_m = dst_wn.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() dst_nw_m = dst_nw.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() dst_ww_m = dst_ww.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() dst_nn_tr_m = dst_nn_tr.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() dst_wn_tr_m = dst_wn_tr.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() dst_nw_tr_m = dst_nw_tr.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() dst_ww_tr_m = dst_ww_tr.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() ###Output /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) 
/opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) ###Markdown MSC and anomalies ###Code def add_msc_and_ano(ds, time_agg='month'): for s in ['mod', 'obs']: msc = ds[s].groupby('time.' + time_agg).mean('time', keep_attrs=True).compute() ano = (ds[s].groupby('time.' + time_agg) - msc).compute() if time_agg == 'month': msc[time_agg] = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sept', 'Oct', 'Nov', 'Dec'] iav = ds[s].groupby('time.year').mean('time', keep_attrs=True) iav -= iav.mean('year') ds[s + '_msc'] = msc ds[s + '_ano'] = ano ds[s + '_iav'] = iav.compute() return ds i = 0 for ds in [dst_nn, dst_wn, dst_nw, dst_ww, dst_nn_tr, dst_wn_tr, dst_nw_tr, dst_ww_tr]: i += 1 print(i) ds = add_msc_and_ano(ds) for ds in [dst_nn_m, dst_wn_m, dst_nw_m, dst_ww_m, dst_nn_tr_m, dst_wn_tr_m, dst_nw_tr_m, dst_ww_tr_m]: i += 1 print(i) ds = add_msc_and_ano(ds) target_dir = '/scratch/dl_chapter14/experiments/et/derived' if not os.path.isdir(target_dir): %mkdir {target_dir} for ds, ds_name in zip( [dst_nn, dst_wn, dst_nw, dst_ww, dst_nn_m, dst_wn_m, dst_nw_m, dst_ww_m, dst_nn_tr, dst_wn_tr, dst_nw_tr, dst_ww_tr, dst_nn_tr_m, dst_wn_tr_m, dst_nw_tr_m, dst_ww_tr_m], ['dst_nn', 'dst_wn', 'dst_nw', 'dst_ww', 'dst_nn_m', 'dst_wn_m', 'dst_nw_m', 'dst_ww_m', 'dst_nn_tr', 'dst_wn_tr', 'dst_nw_tr', 'dst_ww_tr', 'dst_nn_tr_m', 'dst_wn_tr_m', 'dst_nw_tr_m', 'dst_ww_tr_m']): print(ds_name) with ProgressBar(): ds.to_netcdf(f'/scratch/dl_chapter14/experiments/et/derived/{ds_name}.nc') ###Output dst_nn [########################################] | 100% Completed | 15min 37.5s dst_wn [########################################] | 100% Completed | 14min 56.4s dst_nw [########################################] | 100% Completed | 14min 52.7s dst_ww [########################################] | 100% Completed | 15min 8.8s dst_nn_m dst_wn_m dst_nw_m dst_ww_m dst_nn_tr [########################################] | 100% Completed | 21min 2.4s dst_wn_tr [########################################] | 100% Completed | 21min 2.6s dst_nw_tr [########################################] | 100% Completed | 15min 45.2s dst_ww_tr [########################################] | 100% Completed | 16min 5.4s dst_nn_tr_m dst_wn_tr_m dst_nw_tr_m dst_ww_tr_m ###Markdown Spatial analysis ###Code dst_nn.mod dst_nn = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nn.nc') dst_wn = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_wn.nc') dst_nw = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nw.nc') dst_ww = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_ww.nc') dst_nn_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nn_m.nc') dst_wn_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_wn_m.nc') dst_nw_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nw_m.nc') dst_ww_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_ww_m.nc') dst_nn_tr = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nn_tr.nc') dst_wn_tr = 
xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_wn_tr.nc') dst_nw_tr = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nw_tr.nc') dst_ww_tr = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_ww_tr.nc') dst_nn_tr_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nn_tr_m.nc') dst_wn_tr_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_wn_tr_m.nc') dst_nw_tr_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nw_tr_m.nc') dst_ww_tr_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_ww_tr_m.nc') metrics = [] i = 0 for s, s_name in zip(['', '_msc', '_ano', '_iav'], ['raw', 'msc', 'ano', 'iav']): for ds, n, ts in zip( [dst_wn, dst_nn, dst_ww, dst_nw, dst_wn_m, dst_nn_m, dst_ww_m, dst_nw_m], ['wn', 'nn', 'ww', 'nw'] * 2, np.repeat(['daily', 'monthly'], 4)): i += 1 print(i) ds = ds.chunk({'time': -1, 'month': -1, 'year': -1}) if s_name == 'msc': timedim = 'month' elif s_name == 'iav': timedim = 'year' else: timedim = 'time' m = get_metrics(ds['mod' + s], ds['obs' + s], ['mef', 'rmse'], dim=timedim, verbose=False).compute() m = m.expand_dims({'set': [s_name], 'model': [n], 'timeres': [ts], 'cvset': ['eval']}, axis=(0, 1, 2, 3)) metrics.append(m) i = 0 for s, s_name in zip(['', '_msc', '_ano', '_iav'], ['raw', 'msc', 'ano', 'iav']): for ds, n, ts in zip( [dst_wn_tr, dst_nn_tr, dst_ww_tr, dst_nw_tr, dst_wn_tr_m, dst_nn_tr_m, dst_ww_tr_m, dst_nw_tr_m], ['wn', 'nn', 'ww', 'nw'] * 2, np.repeat(['daily', 'monthly'], 4)): i += 1 print(i) ds = ds.chunk({'time': -1, 'month': -1, 'year': -1}) if s_name == 'msc': timedim = 'month' elif s_name == 'iav': timedim = 'year' else: timedim = 'time' m = get_metrics(ds['mod' + s], ds['obs' + s], ['mef', 'rmse'], dim=timedim, verbose=False).compute() m = m.expand_dims({'set': [s_name], 'model': [n], 'timeres': [ts], 'cvset': ['train']}, axis=(0, 1, 2, 3)) metrics.append(m) xr.merge(metrics).to_netcdf('/scratch/dl_chapter14/experiments/et/derived/spatial_metrics.nc') #metrics = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/spatial_metrics.nc') metrics = xr.open_dataset('/workspace/bkraft/dlchapter_backup/dl_chapter14/experiments/et/derived/spatial_metrics.nc') metrics['pft'] = data.pft.astype('int') df = metrics.to_dataframe().dropna().reset_index(level=['model', 'set', 'timeres', 'cvset']) df def q25(x): return x.quantile(0.25) def q50(x): return x.quantile(0.50) def q75(x): return x.quantile(0.75) f = {'mef': ['mean',q25,q50,q75], 'rmse': ['mean',q25,q50,q75]} df[ (df.cvset == 'eval') & (df.set == 'msc') & (df.timeres == 'daily') ].groupby('model').agg(f).round(2) def q25(x): return x.quantile(0.25) def q50(x): return x.quantile(0.50) def q75(x): return x.quantile(0.75) f = {'mef': ['mean',q25,q50,q75], 'rmse': ['mean',q25,q50,q75]} df[ (df.cvset == 'eval') & (df.set == 'raw') & (df.timeres == 'daily') & (df.pft != 2) ].groupby('model').agg(f).round(2) def q25(x): return x.quantile(0.25) def q50(x): return x.quantile(0.50) def q75(x): return x.quantile(0.75) f = {'mef': ['mean',q25,q50,q75], 'rmse': ['mean',q25,q50,q75]} df[ (df.cvset == 'eval') & (df.set == 'raw') & (df.timeres == 'daily') & (df.pft == 2) ].groupby('model').agg(f).round(2) sb.palplot(sb.color_palette("Paired")) plt.scatter(1, 2, color=sb.color_palette("Paired")[3]) order = ['raw', 'msc', 'ano', 'iav'] hue_order = ['wn', 'nn', 'ww', 'nw'] np.arange(8) % 4 mpl.rcParams['hatch.linewidth'] = 0.3 fig, ax = new_subplots(2, 2, 0.7, sharey='row', sharex=True, 
gridspec_kw={'wspace': 0.02, 'hspace': 0.04}) plot_kwargs = dict( order=order, hue_order=hue_order, palette='Paired' ) p1 = sb.boxplot(x="set", y="mef", hue="model", data=df.loc[(df['timeres']=='daily') & (df['cvset']=='train'), :], showfliers=False, ax=ax[0, 0], **plot_kwargs) p2 = sb.boxplot(x="set", y="mef", hue="model", data=df.loc[(df['timeres']=='daily') & (df['cvset']=='eval'), :], showfliers=False, ax=ax[0, 1], **plot_kwargs) p3 = sb.boxplot(x="set", y="rmse", hue="model", data=df.loc[(df['timeres']=='daily') & (df['cvset']=='train'), :], showfliers=False, ax=ax[1, 0], **plot_kwargs) p4 = sb.boxplot(x="set", y="rmse", hue="model", data=df.loc[(df['timeres']=='daily') & (df['cvset']=='eval'), :], showfliers=False, ax=ax[1, 1], **plot_kwargs) ax[0, 0].legend().set_visible(False) ax[0, 1].legend().set_visible(False) ax[1, 0].legend().set_visible(False) ax[1, 1].legend().set_visible(False) ax[1, 0].set_xticklabels(['daily', 'daily\nseas. cycle', 'daily\nanomalies', 'interannual\nanomalies']); ax[1, 1].set_xticklabels(['daily', 'daily\nseas. cycle', 'daily\nanomalies', 'interannual\nanomalies']); ax[1, 0].set_xlabel('') ax[1, 1].set_xlabel('') ax[1, 1].set_xlabel('') ax[0, 0].set_ylabel('NSE ($-$)') ax[1, 0].set_ylabel('RMSE ($mm \ day^{-1}$)') ax[0, 1].set_ylim(-0.6, 1.001); ax[1, 1].set_ylim(0, 1.0); for ax_ in ax.flat: ax_.label_outer() ax_.set_axisbelow(True) ax_.grid(axis='y', color='0.5', linewidth=0.2) ax[0, 0].set_title('Training') ax[0, 1].set_title('Test') ax[0, 0].tick_params(axis='x', which='both',length=0) ax[0, 1].tick_params(axis='both', which='both',length=0) ax[1, 1].tick_params(axis='y', which='both',length=0) ax[0, 0].yaxis.set_major_formatter(FormatStrFormatter('%.2f')) ax[1, 0].yaxis.set_major_formatter(FormatStrFormatter('%.2f')) fig.align_ylabels(ax[:, 0]) colors = [ sb.color_palette("Paired")[0], sb.color_palette("Paired")[0], sb.color_palette("Paired")[3], sb.color_palette("Paired")[3] ] * 4 for ax_ in ax.flat: ax_.spines['top'].set_visible(False) ax_.spines['right'].set_visible(False) for p, (patch, color) in enumerate(zip(ax_.artists, colors)): patch.set_facecolor(color) patch.set_edgecolor('k') patch.set_alpha(0.8) patch.set_linewidth(0.) for q in range(p*5, p*5+5): # print(len(ax_.lines), q) line = ax_.lines[q] line.set_color(color) # whiskers if (q % 5 == 0) or (q % 5 == 1): line.set_linewidth(0.9) # caps if (q % 5 == 2) or (q % 5 == 3): line.set_linewidth(0.9) # median if (q % 5 == 4): line.set_linewidth(0) x, y = line.get_data() xn = (x-(x.sum()/2.))*0.95+(x.sum()/2.) 
color = [c - o for c, o in zip(color, [0.12, 0.12, 0.12, 0])] ax_.plot(xn, y, color=color, linewidth=1.2, solid_capstyle="butt", zorder=4, alpha=1) if p % 2 == 1: patch.set_hatch('////////////') legend = ax[1, 0].legend(loc='upper center', bbox_to_anchor=(1.0, -0.2), ncol=4, title='', frameon=False) for t, l in zip(legend.texts, [r'$\mathrm{LSTM_{SM}}$', r'$\mathrm{LSTM_{\neg SM}}$', r'$\mathrm{FC_{SM}}$', r'$\mathrm{FC_{\neg SM}}$']): t.set_text(l) for i, legpatch in enumerate(legend.get_patches()): col = legpatch.get_facecolor() col_alpha = list(col) col_alpha[-1] = 0.8 legpatch.set_edgecolor('k') legpatch.set_facecolor(col_alpha) legpatch.set_linewidth(0) if i % 2 == 1: legpatch.set_hatch('////////////') # fig.savefig('/workspace/bkraft/dl_chapter14/src/notebooks/exp1_figures/boxplot.pgf') savefig('/workspace/bkraft/dl_chapter14/src/notebooks/exp1_figures/Fig4') fig, axes = subplots_robinson(4, 2, figsize=(fig_size[0], fig_size[0]*1.2), gridspec_kw={'wspace': 0.01, 'hspace': 0.01}) for i, met in enumerate(['mef', 'rmse']): for j, (mod, mod_name) in enumerate(zip(['wn', 'nn', 'ww', 'nw'], [r'$\mathrm{LSTM_{SM}}$', r'$\mathrm{LSTM_{\neg SM}}$', r'$\mathrm{FC_{SM}}$', r'$\mathrm{FC_{\neg SM}}$'])): ax = axes[j, i] dt = metrics[met].sel(model=mod, timeres='daily', set='raw', cvset='eval') label = 'NSE ($-$)' if met=='mef' else 'RMSE ($mm \ day^{-1}$)' plot_map( dt, label=' ', rasterized=True, vmin=0 if met=='mef' else 0.1, vmax=1 if met=='mef' else 1, cmap='inferno' if met=='mef' else 'plasma_r', ax=ax, histogram_placement=[0.05, 0.27, 0.2, 0.25], hist_kw={'bins': 20, 'edgecolor': 'none'}, cbar_kwargs={'extend': 'min'}, landcolor='0.0') ax.set_title('') if i == 0: ax.text(-0.02, 0.45, mod_name, horizontalalignment='right', verticalalignment='center', transform=ax.transAxes, rotation=90, size=9) if j == 0: ax.set_title(label, size=9) ax.outline_patch.set_linewidth(0.5) savefig('/workspace/bkraft/dl_chapter14/src/notebooks/exp1_figures/Fig2', dpi=300) plt.pcolormesh() !cat '/scratch/dl_chapter14/experiments/et/w_sm.w_perm/hptune/summary/best_params.json' for m in ('wn', 'nn', 'ww', 'nw'): print('model: ', m) pd_pft = df.loc[(df['timeres']=='daily') & (df['cvset']=='eval') & (df['model']==m), :].groupby(('pft', 'set')) pd_pft = pd.concat((pd_pft.mean(), pd_pft.count()['model']), axis=1).reset_index(level=['pft','set']) met = pd_pft.loc[(pd_pft['pft']==2) & (pd_pft['set']=='raw'),:].mean() print(f'NSE: {met["mef"]:0.2f}, {met["rmse"]:0.2f}') met['rmse'].values for m in ('wn', 'nn', 'ww', 'nw'): print('model: ', m) pd_pft = df.loc[(df['timeres']=='daily') & (df['cvset']=='eval') & (df['model']==m), :].groupby(('pft', 'set')) pd_pft = pd.concat((pd_pft.mean(), pd_pft.count()['model']), axis=1).reset_index(level=['pft','set']) met = pd_pft.loc[(pd_pft['pft']!=2) & (pd_pft['set']=='raw'),:].groupby(('set')).mean() print(f'NSE: {met["mef"].values[0]:0.2f}, {met["rmse"].values[0]:0.2f}') pd_pft = df.loc[(df['timeres']=='daily') & (df['cvset']=='eval') & (df['model']=='nn'), :].groupby(('pft', 'set')) pd_pft = pd.concat((pd_pft.mean(), pd_pft.count()['model']), axis=1).reset_index(level=['pft','set']) pd_pft for s in ['raw', 'msc', 'ano', 'iav']: pd_pft.loc[pd_pft['set']==s, :].plot.scatter(x='rmse', y='model') fig, axes = subplots_robinson(4, 4, figsize=(26, 16), gridspec_kw={'hspace': 0.0, 'wspace': 0.0}) # use mef, bias, metrics in columns for i, (ds, title) in enumerate(zip([met_wn, met_nn, met_ww , met_nw], ['with SM\nno perm', 'no SM\nno perm', 'with SM\nwith perm', 'no SM\nwith perm'])): ax = 
axes[:, i] plot_map(ds.corr, vmin=0.5, vmax=1, ax=ax[0]) plot_map(ds.rmse, vmin=0, vmax=0.00001, ax=ax[1]) plot_map(ds.mef, vmin=0, vmax=1, ax=ax[2]) plot_map(ds.bias, vmin=-0.000003, vmax=0.000003, ax=ax[3]) ax[0].set_title(title) plot_map(metrics.isel(set=1, timeres=1, model=2).mef, robust=True) fig, axes = subplots_robinson(4, 4, figsize=(26, 16), gridspec_kw={'hspace': 0.0, 'wspace': 0.0}) # use mef, bias, metrics in columns for i, (ds, title) in enumerate(zip([met_wn, met_nn, met_ww , met_nw], ['with SM\nno perm', 'no SM\nno perm', 'with SM\nwith perm', 'no SM\nwith perm'])): ax = axes[:, i] plot_map(ds.corr, vmin=0.5, vmax=1, ax=ax[0]) plot_map(ds.rmse, vmin=0, vmax=0.00001, ax=ax[1]) plot_map(ds.mef, vmin=0, vmax=1, ax=ax[2]) plot_map(ds.bias, vmin=-0.000003, vmax=0.000003, ax=ax[3]) ax[0].set_title(title) fig, axes = subplots_robinson(4, 4, figsize=(26, 16), gridspec_kw={'hspace': 0.0, 'wspace': 0.0}) for i, (ds, title) in enumerate(zip([met_wn, met_nn, met_ww , met_nw], ['with SM\nno perm', 'no SM\nno perm', 'with SM\nwith perm', 'no SM\nwith perm'])): ax = axes[:, i] plot_map(ds.corr, vmin=0.5, vmax=1, ax=ax[0]) plot_map(ds.rmse, vmin=0, vmax=0.00001, ax=ax[1]) plot_map(ds.mef, vmin=0, vmax=1, ax=ax[2]) plot_map(ds.bias, vmin=-0.000003, vmax=0.000003, ax=ax[3]) ax[0].set_title(title) fig, axes = subplots_robinson(figsize=(26, 16), gridspec_kw={'hspace': 0.0, 'wspace': 0.0}) plot_map((met_nn.corr-met_nw.corr)-(met_ww.corr-met_nw.corr), ax=axes, vmin=0, robust=True) fig, axes = subplots_robinson(figsize=(26, 16), gridspec_kw={'hspace': 0.0, 'wspace': 0.0}) plot_map((met_nn.corr-met_nw.corr)-(met_ww.corr-met_nw.corr), ax=axes, vmin=0, robust=True) dst_nn_msc = dst_nn.groupby('time.dayofyear').mean('time') dst_nn with ProgressBar(): msc = dst_nn.groupby('time.dayofyear').mean('time') msc.to_zarr('/scratch/dl_chapter14/experiments/et/n_sm.n_perm/inference/pred_to_msc.zarr/') with ProgressBar(): msc = dst_ww.groupby('time.dayofyear').mean('time') msc.to_zarr('/scratch/dl_chapter14/experiments/et/w_sm.w_perm/inference/pred_to_msc.zarr/') with ProgressBar(): msc = dst_nw.groupby('time.dayofyear').mean('time') msc.to_zarr('/scratch/dl_chapter14/experiments/et/n_sm.w_perm/inference/pred_to_msc.zarr/') with ProgressBar(): msc = dst_wn.groupby('time.dayofyear').mean('time') msc.to_zarr('/scratch/dl_chapter14/experiments/et/w_sm.n_perm/inference/pred_to_msc.zarr/') with ProgressBar(): msc = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.n_perm/inference/pred_to_msc.zarr/') (dst_nn - msc).to_zarr('/scratch/dl_chapter14/experiments/et/n_sm.n_perm/inference/pred_to_ano.zarr/') with ProgressBar(): msc = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.w_perm/inference/pred_to_msc.zarr/') (dst_ww - msc).to_zarr('/scratch/dl_chapter14/experiments/et/w_sm.w_perm/inference/pred_to_ano.zarr/') with ProgressBar(): msc = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.w_perm/inference/pred_to_msc.zarr/') (dst_nw - msc).to_zarr('/scratch/dl_chapter14/experiments/et/n_sm.w_perm/inference/pred_to_ano.zarr/') with ProgressBar(): msc = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.n_perm/inference/pred_to_msc.zarr/') (dst_wn - msc).to_zarr('/scratch/dl_chapter14/experiments/et/w_sm.n_perm/inference/pred_to_ano.zarr/') (dst_nn - msc) msc msc = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.n_perm/inference/pred_to_msc.zarr/') msc.sel(lat=45, lon=8, method='nearest').mod.plot() msc.sel(lat=45, lon=8, method='nearest').obs.plot() xr.apply_ufunc( calc_msc_ano, dst_nn, 
dask='parallelized', output_dtypes=[float] ).compute() with ProgressBar(): dst_nn_msc = dst_nn.groupby('time.dayofyear').mean('time').compute() def plot_time_series( x, y, xlabel='mod', ylabel='obs', timeagg=None, title='', ax=None, figsize=(15, 5), subplot_kw={}, **kwargs): if ax is None: plt.figure(figsize=figsize) ax = plt.subplot(**subplot_kw) if timeagg is not None: x = x.resample(time=timeagg).mean() y = y.resample(time=timeagg).mean() time = x.time.values x_m = x.mean(dim=['lat', 'lon']) y_m = y.mean(dim=['lat', 'lon']) x_qs = xr_quantile(x, [0.25, 0.75], dim=['lat', 'lon']) y_qs = xr_quantile(y, [0.25, 0.75], dim=['lat', 'lon']) ax.fill_between(time, x_qs.isel(quantile=0), x_qs.isel(quantile=1), alpha=0.2, facecolor='tab:blue', label=r'$modeled_{q0.2-0.8}$') ax.fill_between(time, y_qs.isel(quantile=0), y_qs.isel(quantile=1), alpha=0.2, facecolor='k', label=r'$observed_{q0.2-0.8}$') ax.plot(time, x_m, alpha=1.0, color='tab:blue', linewidth=2., label=r'$modeled_{median}$') ax.plot(time, y_m, color='k', linewidth=1.2, linestyle='--', label=r'$observed_{median}$') # ax.text(0.05, 0.95, f'r={r:.3f}', horizontalalignment='left', verticalalignment='top', transform=ax.transAxes) # ax.set_ylabel(var) ax.legend() ax.patch.set_facecolor('white') plot_time_series(dss.mod, dss.obs) plot_time_series(dss.mod, dss.obs) ts = dst_nw.sel(lat=46, lon=8, method='nearest') ts plt.figure(figsize=(35, 7)) ts.obs.isel(time=slice(0, 1000)).plot(alpha=0.5, color='k') ts.mod.isel(time=slice(0, 1000)).plot(color='orangered', alpha=0.5) %time dst_nn_msc = dst_nn.groupby('time.dayofyear').mean().compute() ts_msc.mod.plot() dst_nn_msc dst_nn_msc.mod.values[1, 1, 1] plot_map(dst_nn_msc.mod.isel(dayofyear=0)) plot_map(dst_nn_msc.obs.isel(dayofyear=0)) def _single_xr_quantile(x, q, dim): if isinstance(dim, str): dim = [dim] ndims = len(dim) axes = tuple(np.arange(ndims)-ndims) m = xr.apply_ufunc( np.nanquantile, x, input_core_dims=[dim], dask='parallelized', output_dtypes=[float], keep_attrs=True, kwargs={'q': q, 'axis': axes}) m.name = 'quantile' return m def xr_quantile(x, q, dim): if not hasattr([1, 2], '__iter__'): q = [q] quantiles = [] for i, q_ in enumerate(q): r = _single_xr_quantile(x, q_, dim).compute() quantiles.append(r) quantiles = xr.concat(quantiles, 'quantile') quantiles['quantile'] = q quantiles.attrs.update({**x.attrs}) return quantiles qs = xr_quantile(dss.obs, [0.25, 0.5, 0.75], dim=['lat', 'lon']) qs qs.isel(quantile=0).plot() qs.isel(quantile=1).plot() qs.isel(quantile=2).plot() med. 
xr.merge(qs) med = xr_quantile(dss.obs, [0.25, 0.75], dim=['lat', 'lon']).compute() plt.plot(med) med = xr_median(dst.obs, dim=['time']).compute() plot_map(med) plt.figure(figsize=(30, 7)) plt.plot(dss.obs.mean(('lat', 'lon')), 'k--', label='obs global mean') plt.plot(dss.mod.mean(('lat', 'lon')), alpha=0.5, label='mod global mean') plt.legend() import numpy as np import matplotlib.pyplot as plt def f0(x): return x**(-0.5) def f1(x): return x**(-0.2) + 0.01 * x def f_gl(x): gl = np.zeros_like(x) for t, v in enumerate(x): gl[t] = 100 * (v / np.min(x[:t+1]) - 1) return gl def f_min(x): m = np.zeros_like(x) for t, v in enumerate(x): m[t] = np.min(x[:t+1]) return m t = np.linspace(0, 60, 400) e_tr = f0(t) e_va = f1(t) gl = f_gl(e_va) gl_tr = f_gl(e_tr) plt.plot(e_va) plt.plot(e_tr) plt.plot(gl) # plt.plot(gl_tr) def f_gl(x): per_impr = np.zeros_like(x) for t in range(1, len(x)): per_impr[t] = 100 * (1 - x[t] / x[t-1]) return per_impr gl = f_gl(e_va) plt.plot(e_va[1:]) plt.plot(gl[1:]) import pandas as pd df = pd.read_csv('/scratch/dl_chapter14/experiments/hydro/default/tune/hydro/Emulator_117_dropout_in=0.6,dropout_linear=0.1,dropout_lstm=0.6,batch_size=32,dynamic_path=_scratch_dl_chapter14_input_dynamic_gsw_2019-11-14_08-49-30_h07l5k6/progress.csv') df.keys() df plt.plot(df['wloss_eval']) plt.plot(df['loss_eval']) plt.plot(df['perc_improved']) plt.plot(df['patience_counter']) l = {'a': 1, 'b': 2} options_str = ", ".join(l.keys()) print(f'[{options_str}]') print(f'{getattr.__name__}') from datetime import datetime tic = datetime.now() tic.strftime("%m/%d/%Y, %H:%M:%S") toc = datetime.now() elapsed = toc- tic elapsed.seconds mins = int(elapsed.seconds/60) mins secs = int(elapsed.seconds - 60 * mins) secs import xarray as xr import numpy as np import zarr ds_ssd = zarr.open_group('/scratch/dl_chapter14/experiments/hydro/default/pred/predictions.zarr/') ds_ram = zarr.open_group('/run/user/196') ds_ram.tree() def read_zarr(ds, n): for i in range(n): lat = np.random.choice(360) lon = np.random.choice(720) s = ds['mrro'][:, lat, lon] + 1 %time read_zarr(ds_ssd, 1000) %time read_zarr(ds_ssd, 1000) ls -l /run/user/1968 /scratch/ import torch import numpy as np class RNN(torch.nn.Module): def __init__(self): super(RNN, self).__init__() self.rnn = torch.nn.RNN(1, 5) def forward(self, x): out = self.rnn(x) return out import xarray as xr ds = xr.open_zarr('/scratch/dl_chapter14/experiments/hydro/default/pred/predictions.zarr/') ds pred = xr.Dataset({ 'obs': xr.DataArray(ds.values, coords=[ds.lat, ds.lon, ds.time]), 'mod': xr.DataArray(ds.values, coords=[ds.lat, ds.lon, ds.time]) }) pred.attrs = ds.attrs xr.Dataset({ 'mod': xr.DataArray(ds.mrro.values, coords=[dslat, ds.lon, ds.time), 'obs': xr.ones_like(ds.coords) * ds.mrro.values }) ds ds = xr.open_zarr('/scratch/dl_chapter14/target/dynamic/koirala2017.zarr/') ds plot_map(ds.wtd.isel(time=0), robust=True) p = xr.open_zarr('/scratch/hydrodl/data/bucket.zarr/prec/') p_missing = p.data.isnull().sum('time').compute() plot_map(p_missing) xr.open_zarr('/scratch/dl_chapter14/target/dynamic/koirala2017.zarr/') xr.open_dataset('/workspace/BGI/people/skoirala/spmip/matsiro-gw.run1_20180805/matsiro-gw_experiment_1_1982.nc') ds_frac = xr.open_dataset('/workspace/BGI/people/skoirala/spmip/matsiro-gw.run1_20180805/matsiro-gw_experiment_1_1982.nc')['mrlslfrac'] ds = xr.open_dataset('/workspace/BGI/people/skoirala/spmip/matsiro-gw.run1_20180805/matsiro-gw_experiment_1_1982.nc')['rzwc'] ds ds_frac_mean = ds_frac.mean(('time', 'level')) ds_mean = ds.mean(('time', 
'level')) plt.figure(figsize=(17, 12)) plt.imshow(ds_frac_mean.values, vmin=0.2, vmax=0.6) plt.figure(figsize=(17, 12)) plt.imshow(ds_mean.values) plot_map(ds_mean) "et", "tws", "mrro", "wtd", "mrlslfrac" t = xr.open_dataset('/workspace/BGI/people/skoirala/spmip/full_matsiro-gw_exp3.run1_20180805_latlonReverse/full_matsiro-gw_exp3_experiment_3_1950.nc') t plot_map(t.mrlslfrac.isel(levelc=1, time=0), robust=True) ls -l /workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/ e ###Output _____no_output_____ ###Markdown input vars| var | dims | path || --- | --- | --- || Rainf | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/Rainf/' || Snowf | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/Snowf/' || SWdown | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/SWdown/' || LWdown | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/LWdown/' || Tair | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/Tair/' || Wind | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/Wind/' || Qair | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/Qair/' || PSurf | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/PSurf/' || lai | lat, lon, time | '/workspace/BGI/work_3/dl_chapter14/input_data/org_data/matsiro-gw_lai.nc' || ccover | lat, lon, time | '/workspace/BGI/work_3/dl_chapter14/input_data/org_data/matsiro-gw_lai.nc' || PFT | lat, lon | '/workspace/BGI/work_3/dl_chapter14/input_data/org_data/matsiro-gw_PFT.nc' || soil_properties | lat, lon | '/workspace/BGI/work_3/dl_chapter14/input_data/org_data/matsiro-gw_soil_propreties.nc' | Target vars| var | dims | path || --- | --- | --- || et (also input) | lat, lon, time | '/workspace/BGI/people/skoirala/spmip/full_matsiro-gw_exp3.run1_20180805_latlonReverse/full_matsiro-gw_exp3_experiment_3_YYYY.nc' || mrlslfrac (mean of first 4 levels) | lat, lon, time | '/workspace/BGI/people/skoirala/spmip/full_matsiro-gw_exp3.run1_20180805_latlonReverse/full_matsiro-gw_exp3_experiment_3_YYYY.nc' || tws | lat, lon, time | '/workspace/BGI/people/skoirala/spmip/full_matsiro-gw_exp3.run1_20180805_latlonReverse/full_matsiro-gw_exp3_experiment_3_YYYY.nc' || mrro | lat, lon, time | '/workspace/BGI/people/skoirala/spmip/full_matsiro-gw_exp3.run1_20180805_latlonReverse/full_matsiro-gw_exp3_experiment_3_YYYY.nc' | ###Code import xarray as xr d = xr.open_dataset('/workspace/BGI/work_3/dl_chapter14/input_data/org_data/matsiro-gw_soil_propreties.nc') d plot_map(d.PFT) from matplotlib import animation import cartopy import cartopy.crs as ccrs from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER from matplotlib.colors import LogNorm from matplotlib import animation, rc from IPython.display import HTML, Image rc('animation', html='html5') %matplotlib inline ds = xr.open_zarr('/scratch/hydrodl/data/bucket.zarr/et/').data ds.isel(time=[0, 4, 8]).values.shape ax = plt.subplot(projection=ccrs.Robinson()) ax.set_global() ax.coastlines(color='white') ax.gridlines() ax.imshow(ds.isel(time=[0, 4, 8]).values.transpose(1, 2, 0)[-1::-1,:,:], transform=ccrs.PlateCarree()) # (ds.isel(time=[0, 4, 8])).plot.imshow(robust=True, ax=ax, transform=ccrs.PlateCarree()) import 
cartopy.crs as ccrs import xarray import matplotlib.pyplot as plt #ds = xr.load_dataset(...) ax = plt.subplot(projection=ccrs.Robinson()) (ds.isel(time=[0, 4, 8])).plot.imshow(ax=ax, transform=ccrs.PlateCarree()) ax = plt.subplot(projection=ccrs.Robinson()) ax.imshow(ds.isel(time=[0, 4, 8]).transpose('lat', 'lon', 'time').values[-1::-1, :, :], transform=ccrs.PlateCarree()) ax = plt.subplot(projection=ccrs.Robinson()) plt.plot(ds.isel(time=[0, 4, 8]).transpose('lat', 'lon', 'time').values[-1::-1, :, :], transform=ccrs.PlateCarree(), ax=ax) import cartopy.crs as ccrs import xarray import matplotlib.pyplot as plt #ds = xr.load_dataset(...) ax = plt.subplot(projection=ccrs.Robinson()) (ds.isel(time=0)).plot.imshow(robust=True, ax=ax, transform=ccrs.PlateCarree(), add_colorbar=False) xr.__version__ mpl.__version__ ds et fig_width_pt = 443.57848 # Get this from LaTeX using \showthe\columnwidth inches_per_pt = 1.0/72. # Convert pt to inches golden_mean = (np.sqrt(5)-1.0)/2.0 # Aesthetic ratio fig_width = fig_width_pt*inches_per_pt # width in inches fig_height =fig_width*golden_mean # height in inches fig_size = [fig_width,fig_height] pgf_with_latex = { # setup matplotlib to use latex for output "pgf.texsystem": "xelatex", # change this if using xetex or lautex "text.usetex": True, # use LaTeX to write all text "font.family": "serif", "font.serif": [], # blank entries should cause plots to inherit fonts from the document "font.sans-serif": [], "font.monospace": [], "axes.labelsize": 9, # LaTeX default is 10pt font. "font.size": 7, "legend.fontsize": 9, # Make the legend/label fonts a little smaller "xtick.labelsize": 7, "ytick.labelsize": 7, "figure.figsize": fig_size, # default fig size of 0.9 textwidth "pgf.preamble": [ r"\usepackage[utf8x]{inputenc}", # use utf8 fonts becasue your computer can handle it :) r"\usepackage[T1]{fontenc}", # plots will be generated using this preamble ] } mpl.rcParams.update(pgf_with_latex) pgf_with_latex = { # setup matplotlib to use latex for output "pgf.texsystem": "xelatex", # change this if using xetex or lautex "text.usetex": True, # use LaTeX to write all text "font.family": "serif", "font.serif": [], # blank entries should cause plots to inherit fonts from the document "font.sans-serif": [], "font.monospace": [], "axes.labelsize": 9, # LaTeX default is 10pt font. 
"font.size": 7, "legend.fontsize": 9, # Make the legend/label fonts a little smaller "xtick.labelsize": 7, "ytick.labelsize": 7, "figure.figsize": fig_size, # default fig size of 0.9 textwidth "pgf.preamble": [ r"\usepackage[utf8x]{inputenc}", # use utf8 fonts becasue your computer can handle it :) r"\usepackage[T1]{fontenc}", # plots will be generated using this preamble ] } mpl.rcParams.update(pgf_with_latex) def savefig(filename, **kwargs): #plt.savefig('{}.pgf'.format(filename), pad_inches = 0, bbox_inches='tight') plt.savefig('{}.pdf'.format(filename), pad_inches = 0, bbox_inches='tight', **kwargs) fig, ax = plt.subplots() ax.scatter([1, 1], [2, 2]) ax.set_title('The title') ax.set_xlabel('The x axis') ax.set_ylabel('The y axis') savefig('/workspace/bkraft/dl_chapter14/src/notebooks/test') fig_size fig, axes = subplots_robinson(4, 2, figsize=(6.160812222222222, 6.160812222222222*1.2), gridspec_kw={'wspace': 0, 'hspace': 0.01}) for i, met in enumerate(['mef', 'rmse']): for j, (mod, mod_name) in enumerate(zip(['wn', 'nn', 'ww', 'nw'], [r'$\mathrm{LSTM_{SM}}$', r'$\mathrm{LSTM_{\neg SM}}$', r'$\mathrm{FC_{SM}}$', r'$\mathrm{FC_{\neg SM}}$'])): ax = axes[j, i] dt = metrics[met].sel(model=mod, timeres='daily', set='raw', cvset='eval') label = f"{'NSE' if met=='mef' else 'RMSE'} ({'-' if met=='mef' else r'mm d-1'})" plot_map( dt, label=' ', vmin=0 if met=='mef' else 0.1, vmax=1 if met=='mef' else 1, cmap='plasma' if met=='mef' else 'plasma_r', ax=ax, histogram_placement=[0.05, 0.27, 0.2, 0.25], hist_kw={'bins': 20, 'edgecolor': 'none'}, cbar_kwargs={'extend': 'min'}, rasterize=True) ax.set_title('') if i == 0: ax.text(-0.02, 0.45, mod_name, horizontalalignment='right', verticalalignment='center', transform=ax.transAxes, rotation=90, size=9) if j == 0: ax.set_title(label, size=9) savefig('/workspace/bkraft/dl_chapter14/src/notebooks/test') #fig.savefig('/workspace/bkraft/dl_chapter14/src/notebooks/exp1_figures/map_mef_bias_raw_daily.png', dpi=300, bbox_inches='tight') from inspect import getmembers, isclass import matplotlib import matplotlib.pyplot as plt import numpy as np def rasterize_and_save(fname, rasterize_list=None, fig=None, dpi=None, savefig_kw={}): """Save a figure with raster and vector components This function lets you specify which objects to rasterize at the export stage, rather than within each plotting call. Rasterizing certain components of a complex figure can significantly reduce file size. 
Inputs ------ fname : str Output filename with extension rasterize_list : list (or object) List of objects to rasterize (or a single object to rasterize) fig : matplotlib figure object Defaults to current figure dpi : int Resolution (dots per inch) for rasterizing savefig_kw : dict Extra keywords to pass to matplotlib.pyplot.savefig If rasterize_list is not specified, then all contour, pcolor, and collects objects (e.g., ``scatter, fill_between`` etc) will be rasterized Note: does not work correctly with round=True in Basemap Example ------- Rasterize the contour, pcolor, and scatter plots, but not the line >>> import matplotlib.pyplot as plt >>> from numpy.random import random >>> X, Y, Z = random((9, 9)), random((9, 9)), random((9, 9)) >>> fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(ncols=2, nrows=2) >>> cax1 = ax1.contourf(Z) >>> cax2 = ax2.scatter(X, Y, s=Z) >>> cax3 = ax3.pcolormesh(Z) >>> cax4 = ax4.plot(Z[:, 0]) >>> rasterize_list = [cax1, cax2, cax3] >>> rasterize_and_save('out.svg', rasterize_list, fig=fig, dpi=300) """ # Behave like pyplot and act on current figure if no figure is specified fig = plt.gcf() if fig is None else fig # Need to set_rasterization_zorder in order for rasterizing to work zorder = -5 # Somewhat arbitrary, just ensuring less than 0 if rasterize_list is None: # Have a guess at stuff that should be rasterised types_to_raster = ['QuadMesh', 'Contour', 'collections'] rasterize_list = [] print(""" No rasterize_list specified, so the following objects will be rasterized: """) # Get all axes, and then get objects within axes for ax in fig.get_axes(): for item in ax.get_children(): if any(x in str(item) for x in types_to_raster): rasterize_list.append(item) print('\n'.join([str(x) for x in rasterize_list])) else: # Allow rasterize_list to be input as an object to rasterize if type(rasterize_list) != list: rasterize_list = [rasterize_list] for item in rasterize_list: # Whether or not plot is a contour plot is important is_contour = (isinstance(item, matplotlib.contour.QuadContourSet) or isinstance(item, matplotlib.tri.TriContourSet)) # Whether or not collection of lines # This is commented as we seldom want to rasterize lines # is_lines = isinstance(item, matplotlib.collections.LineCollection) # Whether or not current item is list of patches all_patch_types = tuple( x[1] for x in getmembers(matplotlib.patches, isclass)) try: is_patch_list = isinstance(item[0], all_patch_types) except TypeError: is_patch_list = False # Convert to rasterized mode and then change zorder properties if is_contour: curr_ax = item.ax.axes curr_ax.set_rasterization_zorder(zorder) # For contour plots, need to set each part of the contour # collection individually for contour_level in item.collections: contour_level.set_zorder(zorder - 1) contour_level.set_rasterized(True) elif is_patch_list: # For list of patches, need to set zorder for each patch for patch in item: curr_ax = patch.axes curr_ax.set_rasterization_zorder(zorder) patch.set_zorder(zorder - 1) patch.set_rasterized(True) else: # For all other objects, we can just do it all at once curr_ax = item.axes curr_ax.set_rasterization_zorder(zorder) item.set_rasterized(True) item.set_zorder(zorder - 1) # dpi is a savefig keyword argument, but treat it as special since it is # important to this function if dpi is not None: savefig_kw['dpi'] = dpi # Save resulting figure fig.savefig(fname, **savefig_kw) fig, ax = subplots_robinson() plot_map(metrics['mef'].sel(model='nn', timeres='daily', set='raw', cvset='eval'), robust=True, 
rasterized=True, ax=ax) fig.savefig('/workspace/bkraft/plots/test.pdf') img.axes.artists[0] fig, ax = new_subplots(2, 2, 0.7, sharey='row', sharex=True, gridspec_kw={'wspace': 0.01, 'hspace': 0.1}) sb.set_style("whitegrid") sb.boxplot(x="set", y="mef", hue="model", linewidth=0.5, data=df.loc[(df['timeres']=='daily') & (df['cvset']=='train'), :], showfliers=False, ax=ax[0, 0], order=order, hue_order=hue_order, palette='Paired') sb.boxplot(x="set", y="mef", hue="model", linewidth=0.5, data=df.loc[(df['timeres']=='daily') & (df['cvset']=='eval'), :], showfliers=False, ax=ax[0, 1], order=order, hue_order=hue_order, palette='Paired') sb.boxplot(x="set", y="rmse", hue="model", linewidth=0.5, data=df.loc[(df['timeres']=='daily') & (df['cvset']=='train'), :], showfliers=False, ax=ax[1, 0], order=order, hue_order=hue_order, palette='Paired') sb.boxplot(x="set", y="rmse", hue="model", linewidth=0.5, data=df.loc[(df['timeres']=='daily') & (df['cvset']=='eval'), :], showfliers=False, ax=ax[1, 1], order=order, hue_order=hue_order, palette='Paired') ax[0, 0].legend().set_visible(False) ax[0, 1].legend().set_visible(False) ax[1, 0].legend().set_visible(False) ax[1, 1].legend().set_visible(False) legend = ax[1, 0].legend(loc='upper center', bbox_to_anchor=(1.0, -0.15), ncol=4, title='', frameon=False) for t, l in zip(legend.texts, [r'$\mathrm{LSTM_{SM}}$', r'$\mathrm{LSTM_{\neg SM}}$', r'$\mathrm{FC_{SM}}$', r'$\mathrm{FC_{\neg SM}}$']): t.set_text(l) ax[1, 0].set_xticklabels(['daily', 'daily\nseas. cycle', 'daily\nanomalies', 'interannual\nanomalies']); ax[1, 1].set_xticklabels(['daily', 'daily\nseas. cycle', 'daily\nanomalies', 'interannual\nanomalies']); ax[1, 0].set_xlabel('', fontsize=12) ax[1, 1].set_xlabel('', fontsize=12) ax[0, 0].set_ylabel('NSE (-)') ax[1, 0].set_ylabel('RMSE (mm d-1)') ax[0, 1].set_ylim(-0.6, 1.001); ax[1, 1].set_ylim(0, 1.0); for ax_ in ax.flat: ax_.label_outer() # ax_.grid(axis='y', linestyle='--', color='0.2') ax[0, 0].set_title('training', size=9) ax[0, 1].set_title('test', size=9) fig.align_ylabels(ax[:, 0]) # fig.savefig('/workspace/bkraft/dl_chapter14/src/notebooks/exp1_figures/boxplot.pgf') # savefig('/workspace/bkraft/dl_chapter14/src/notebooks/exp1_figures/boxplot') ###Output _____no_output_____ ###Markdown Data ###Code dss_nn = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.n_perm/inference/pred_so.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dss_wn = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.n_perm/inference/pred_so.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dss_nw = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.w_perm/inference/pred_so.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dss_ww = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.w_perm/inference/pred_so.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_nn = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.n_perm/inference/pred_to.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_wn = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.n_perm/inference/pred_to.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_nw = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.w_perm/inference/pred_to.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_ww = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.w_perm/inference/pred_to.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_nn_tr = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.n_perm/inference/pred_to_trainset.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_wn_tr = 
xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.n_perm/inference/pred_to_trainset.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_nw_tr = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.w_perm/inference/pred_to_trainset.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day dst_ww_tr = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.w_perm/inference/pred_to_trainset.zarr/') * 86400 # 1 kg/m2/s = 86400 mm/day for ds in [dss_nn, dss_nn, dss_nn, dss_nn, dst_nn, dst_nn, dst_nn, dst_nn]: ds.mod.attrs['units'] = r'$mm day^{-1}$' ds.obs.attrs['units'] = r'$mm day^{-1}$' ds.mod.attrs['long_name'] = 'ET' ds.obs.attrs['long_name'] = 'ET' ###Output _____no_output_____ ###Markdown Aggregate to monthly ###Code dst_nn_m = dst_nn.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() dst_wn_m = dst_wn.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() dst_nw_m = dst_nw.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() dst_ww_m = dst_ww.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() dst_nn_tr_m = dst_nn_tr.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() dst_wn_tr_m = dst_wn_tr.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() dst_nw_tr_m = dst_nw_tr.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() dst_ww_tr_m = dst_ww_tr.resample(time='1MS', keep_attrs=True).mean('time', keep_attrs=True).compute() ###Output /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) /opt/conda/lib/python3.6/site-packages/dask/array/numpy_compat.py:40: RuntimeWarning: invalid value encountered in true_divide x = np.divide(x1, x2, out) ###Markdown MSC and anomalies ###Code def add_msc_and_ano(ds, time_agg='month'): for s in ['mod', 'obs']: msc = ds[s].groupby('time.' + time_agg).mean('time', keep_attrs=True).compute() ano = (ds[s].groupby('time.' 
+ time_agg) - msc).compute() if time_agg == 'month': msc[time_agg] = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sept', 'Oct', 'Nov', 'Dec'] iav = ds[s].groupby('time.year').mean('time', keep_attrs=True) iav -= iav.mean('year') ds[s + '_msc'] = msc ds[s + '_ano'] = ano ds[s + '_iav'] = iav.compute() return ds i = 0 for ds in [dst_nn, dst_wn, dst_nw, dst_ww, dst_nn_tr, dst_wn_tr, dst_nw_tr, dst_ww_tr]: i += 1 print(i) ds = add_msc_and_ano(ds) for ds in [dst_nn_m, dst_wn_m, dst_nw_m, dst_ww_m, dst_nn_tr_m, dst_wn_tr_m, dst_nw_tr_m, dst_ww_tr_m]: i += 1 print(i) ds = add_msc_and_ano(ds) target_dir = '/scratch/dl_chapter14/experiments/et/derived' if not os.path.isdir(target_dir): %mkdir {target_dir} for ds, ds_name in zip( [dst_nn, dst_wn, dst_nw, dst_ww, dst_nn_m, dst_wn_m, dst_nw_m, dst_ww_m, dst_nn_tr, dst_wn_tr, dst_nw_tr, dst_ww_tr, dst_nn_tr_m, dst_wn_tr_m, dst_nw_tr_m, dst_ww_tr_m], ['dst_nn', 'dst_wn', 'dst_nw', 'dst_ww', 'dst_nn_m', 'dst_wn_m', 'dst_nw_m', 'dst_ww_m', 'dst_nn_tr', 'dst_wn_tr', 'dst_nw_tr', 'dst_ww_tr', 'dst_nn_tr_m', 'dst_wn_tr_m', 'dst_nw_tr_m', 'dst_ww_tr_m']): print(ds_name) with ProgressBar(): ds.to_netcdf(f'/scratch/dl_chapter14/experiments/et/derived/{ds_name}.nc') ###Output dst_nn [########################################] | 100% Completed | 15min 37.5s dst_wn [########################################] | 100% Completed | 14min 56.4s dst_nw [########################################] | 100% Completed | 14min 52.7s dst_ww [########################################] | 100% Completed | 15min 8.8s dst_nn_m dst_wn_m dst_nw_m dst_ww_m dst_nn_tr [########################################] | 100% Completed | 21min 2.4s dst_wn_tr [########################################] | 100% Completed | 21min 2.6s dst_nw_tr [########################################] | 100% Completed | 15min 45.2s dst_ww_tr [########################################] | 100% Completed | 16min 5.4s dst_nn_tr_m dst_wn_tr_m dst_nw_tr_m dst_ww_tr_m ###Markdown Spatial analysis ###Code dst_nn.mod dst_nn = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nn.nc') dst_wn = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_wn.nc') dst_nw = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nw.nc') dst_ww = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_ww.nc') dst_nn_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nn_m.nc') dst_wn_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_wn_m.nc') dst_nw_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nw_m.nc') dst_ww_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_ww_m.nc') dst_nn_tr = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nn_tr.nc') dst_wn_tr = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_wn_tr.nc') dst_nw_tr = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nw_tr.nc') dst_ww_tr = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_ww_tr.nc') dst_nn_tr_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nn_tr_m.nc') dst_wn_tr_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_wn_tr_m.nc') dst_nw_tr_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_nw_tr_m.nc') dst_ww_tr_m = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/dst_ww_tr_m.nc') metrics = [] i = 0 for s, s_name in zip(['', '_msc', '_ano', '_iav'], ['raw', 'msc', 'ano', 'iav']): for ds, n, ts in zip( 
[dst_wn, dst_nn, dst_ww, dst_nw, dst_wn_m, dst_nn_m, dst_ww_m, dst_nw_m], ['wn', 'nn', 'ww', 'nw'] * 2, np.repeat(['daily', 'monthly'], 4)): i += 1 print(i) ds = ds.chunk({'time': -1, 'month': -1, 'year': -1}) if s_name == 'msc': timedim = 'month' elif s_name == 'iav': timedim = 'year' else: timedim = 'time' m = get_metrics(ds['mod' + s], ds['obs' + s], ['mef', 'rmse'], dim=timedim, verbose=False).compute() m = m.expand_dims({'set': [s_name], 'model': [n], 'timeres': [ts], 'cvset': ['eval']}, axis=(0, 1, 2, 3)) metrics.append(m) i = 0 for s, s_name in zip(['', '_msc', '_ano', '_iav'], ['raw', 'msc', 'ano', 'iav']): for ds, n, ts in zip( [dst_wn_tr, dst_nn_tr, dst_ww_tr, dst_nw_tr, dst_wn_tr_m, dst_nn_tr_m, dst_ww_tr_m, dst_nw_tr_m], ['wn', 'nn', 'ww', 'nw'] * 2, np.repeat(['daily', 'monthly'], 4)): i += 1 print(i) ds = ds.chunk({'time': -1, 'month': -1, 'year': -1}) if s_name == 'msc': timedim = 'month' elif s_name == 'iav': timedim = 'year' else: timedim = 'time' m = get_metrics(ds['mod' + s], ds['obs' + s], ['mef', 'rmse'], dim=timedim, verbose=False).compute() m = m.expand_dims({'set': [s_name], 'model': [n], 'timeres': [ts], 'cvset': ['train']}, axis=(0, 1, 2, 3)) metrics.append(m) xr.merge(metrics).to_netcdf('/scratch/dl_chapter14/experiments/et/derived/spatial_metrics.nc') metrics = xr.open_dataset('/scratch/dl_chapter14/experiments/et/derived/spatial_metrics.nc') metrics['pft'] = data.pft.astype('int') df = metrics.to_dataframe().dropna().reset_index(level=['model', 'set', 'timeres', 'cvset']) df def q25(x): return x.quantile(0.25) def q50(x): return x.quantile(0.50) def q75(x): return x.quantile(0.75) f = {'mef': ['mean',q25,q50,q75], 'rmse': ['mean',q25,q50,q75]} df[ (df.cvset == 'eval') & (df.set == 'msc') & (df.timeres == 'daily') ].groupby('model').agg(f).round(2) def q25(x): return x.quantile(0.25) def q50(x): return x.quantile(0.50) def q75(x): return x.quantile(0.75) f = {'mef': ['mean',q25,q50,q75], 'rmse': ['mean',q25,q50,q75]} df[ (df.cvset == 'eval') & (df.set == 'raw') & (df.timeres == 'daily') & (df.pft != 2) ].groupby('model').agg(f).round(2) def q25(x): return x.quantile(0.25) def q50(x): return x.quantile(0.50) def q75(x): return x.quantile(0.75) f = {'mef': ['mean',q25,q50,q75], 'rmse': ['mean',q25,q50,q75]} df[ (df.cvset == 'eval') & (df.set == 'raw') & (df.timeres == 'daily') & (df.pft == 2) ].groupby('model').agg(f).round(2) sb.palplot(sb.color_palette("Paired")) plt.scatter(1, 2, color=sb.color_palette("Paired")[3]) order = ['raw', 'msc', 'ano', 'iav'] hue_order = ['wn', 'nn', 'ww', 'nw'] fig, ax = new_subplots(2, 2, 0.7, sharey='row', sharex=True, gridspec_kw={'wspace': 0.02, 'hspace': 0.04}) plot_kwargs = dict( order=order, hue_order=hue_order, palette='Paired' ) p1 = sb.boxplot(x="set", y="mef", hue="model", data=df.loc[(df['timeres']=='daily') & (df['cvset']=='train'), :], showfliers=False, ax=ax[0, 0], **plot_kwargs) p2 = sb.boxplot(x="set", y="mef", hue="model", data=df.loc[(df['timeres']=='daily') & (df['cvset']=='eval'), :], showfliers=False, ax=ax[0, 1], **plot_kwargs) p3 = sb.boxplot(x="set", y="rmse", hue="model", data=df.loc[(df['timeres']=='daily') & (df['cvset']=='train'), :], showfliers=False, ax=ax[1, 0], **plot_kwargs) p4 = sb.boxplot(x="set", y="rmse", hue="model", data=df.loc[(df['timeres']=='daily') & (df['cvset']=='eval'), :], showfliers=False, ax=ax[1, 1], **plot_kwargs) ax[0, 0].legend().set_visible(False) ax[0, 1].legend().set_visible(False) ax[1, 0].legend().set_visible(False) ax[1, 1].legend().set_visible(False) ax[1, 
0].set_xticklabels(['daily', 'daily\nseas. cycle', 'daily\nanomalies', 'interannual\nanomalies']); ax[1, 1].set_xticklabels(['daily', 'daily\nseas. cycle', 'daily\nanomalies', 'interannual\nanomalies']); ax[1, 0].set_xlabel('') ax[1, 1].set_xlabel('') ax[1, 1].set_xlabel('') ax[0, 0].set_ylabel('NSE ($-$)') ax[1, 0].set_ylabel('RMSE ($mm \ day^{-1}$)') ax[0, 1].set_ylim(-0.6, 1.001); ax[1, 1].set_ylim(0, 1.0); for ax_ in ax.flat: ax_.label_outer() ax_.set_axisbelow(True) ax_.grid(axis='y', color='0.5', linewidth=0.2) ax[0, 0].set_title('training') ax[0, 1].set_title('test') ax[0, 0].tick_params(axis='x', which='both',length=0) ax[0, 1].tick_params(axis='both', which='both',length=0) ax[1, 1].tick_params(axis='y', which='both',length=0) ax[0, 0].yaxis.set_major_formatter(FormatStrFormatter('%.2f')) ax[1, 0].yaxis.set_major_formatter(FormatStrFormatter('%.2f')) fig.align_ylabels(ax[:, 0]) colors = [ sb.color_palette("Paired")[0], sb.color_palette("Paired")[1], sb.color_palette("Paired")[2], sb.color_palette("Paired")[3] ] * 4 for ax_ in ax.flat: ax_.spines['top'].set_visible(False) ax_.spines['right'].set_visible(False) for p, (patch, color) in enumerate(zip(ax_.artists, colors)): patch.set_facecolor(color) patch.set_alpha(0.8) patch.set_linewidth(0.) for q in range(p*5, p*5+5): # print(len(ax_.lines), q) line = ax_.lines[q] line.set_color(color) # whiskers if (q % 5 == 0) or (q % 5 == 1): line.set_linewidth(0.9) # caps if (q % 5 == 2) or (q % 5 == 3): line.set_linewidth(0.9) # median if (q % 5 == 4): line.set_linewidth(0) x, y = line.get_data() xn = (x-(x.sum()/2.))*0.95+(x.sum()/2.) color = [c - o for c, o in zip(color, [0.12, 0.12, 0.12, 0])] ax_.plot(xn, y, color=color, linewidth=1.2, solid_capstyle="butt", zorder=4, alpha=1) legend = ax[1, 0].legend(loc='upper center', bbox_to_anchor=(1.0, -0.2), ncol=4, title='', frameon=False) for t, l in zip(legend.texts, [r'$\mathrm{LSTM_{SM}}$', r'$\mathrm{LSTM_{\neg SM}}$', r'$\mathrm{FC_{SM}}$', r'$\mathrm{FC_{\neg SM}}$']): t.set_text(l) for legpatch in legend.get_patches(): col = legpatch.get_facecolor() col_alpha = list(col) col_alpha[-1] = 0.8 legpatch.set_edgecolor(col) legpatch.set_facecolor(col_alpha) legpatch.set_linewidth(0) # fig.savefig('/workspace/bkraft/dl_chapter14/src/notebooks/exp1_figures/boxplot.pgf') savefig('/workspace/bkraft/dl_chapter14/src/notebooks/exp1_figures/Fig4') fig, axes = subplots_robinson(4, 2, figsize=(fig_size[0], fig_size[0]*1.2), gridspec_kw={'wspace': 0.01, 'hspace': 0.01}) for i, met in enumerate(['mef', 'rmse']): for j, (mod, mod_name) in enumerate(zip(['wn', 'nn', 'ww', 'nw'], [r'$\mathrm{LSTM_{SM}}$', r'$\mathrm{LSTM_{\neg SM}}$', r'$\mathrm{FC_{SM}}$', r'$\mathrm{FC_{\neg SM}}$'])): ax = axes[j, i] dt = metrics[met].sel(model=mod, timeres='daily', set='raw', cvset='eval') label = 'NSE ($-$)' if met=='mef' else 'RMSE ($mm \ day^{-1}$)' plot_map( dt, label=' ', rasterized=True, vmin=0 if met=='mef' else 0.1, vmax=1 if met=='mef' else 1, cmap='plasma' if met=='mef' else 'plasma_r', ax=ax, histogram_placement=[0.05, 0.27, 0.2, 0.25], hist_kw={'bins': 20, 'edgecolor': 'none'}, cbar_kwargs={'extend': 'min'}) ax.set_title('') if i == 0: ax.text(-0.02, 0.45, mod_name, horizontalalignment='right', verticalalignment='center', transform=ax.transAxes, rotation=90, size=9) if j == 0: ax.set_title(label, size=9) ax.outline_patch.set_linewidth(0.5) savefig('/workspace/bkraft/dl_chapter14/src/notebooks/exp1_figures/Fig2', dpi=300) plt.pcolormesh() !cat 
'/scratch/dl_chapter14/experiments/et/w_sm.w_perm/hptune/summary/best_params.json' for m in ('wn', 'nn', 'ww', 'nw'): print('model: ', m) pd_pft = df.loc[(df['timeres']=='daily') & (df['cvset']=='eval') & (df['model']==m), :].groupby(('pft', 'set')) pd_pft = pd.concat((pd_pft.mean(), pd_pft.count()['model']), axis=1).reset_index(level=['pft','set']) met = pd_pft.loc[(pd_pft['pft']==2) & (pd_pft['set']=='raw'),:].mean() print(f'NSE: {met["mef"]:0.2f}, {met["rmse"]:0.2f}') met['rmse'].values for m in ('wn', 'nn', 'ww', 'nw'): print('model: ', m) pd_pft = df.loc[(df['timeres']=='daily') & (df['cvset']=='eval') & (df['model']==m), :].groupby(('pft', 'set')) pd_pft = pd.concat((pd_pft.mean(), pd_pft.count()['model']), axis=1).reset_index(level=['pft','set']) met = pd_pft.loc[(pd_pft['pft']!=2) & (pd_pft['set']=='raw'),:].groupby(('set')).mean() print(f'NSE: {met["mef"].values[0]:0.2f}, {met["rmse"].values[0]:0.2f}') pd_pft = df.loc[(df['timeres']=='daily') & (df['cvset']=='eval') & (df['model']=='nn'), :].groupby(('pft', 'set')) pd_pft = pd.concat((pd_pft.mean(), pd_pft.count()['model']), axis=1).reset_index(level=['pft','set']) pd_pft for s in ['raw', 'msc', 'ano', 'iav']: pd_pft.loc[pd_pft['set']==s, :].plot.scatter(x='rmse', y='model') fig, axes = subplots_robinson(4, 4, figsize=(26, 16), gridspec_kw={'hspace': 0.0, 'wspace': 0.0}) # use mef, bias, metrics in columns for i, (ds, title) in enumerate(zip([met_wn, met_nn, met_ww , met_nw], ['with SM\nno perm', 'no SM\nno perm', 'with SM\nwith perm', 'no SM\nwith perm'])): ax = axes[:, i] plot_map(ds.corr, vmin=0.5, vmax=1, ax=ax[0]) plot_map(ds.rmse, vmin=0, vmax=0.00001, ax=ax[1]) plot_map(ds.mef, vmin=0, vmax=1, ax=ax[2]) plot_map(ds.bias, vmin=-0.000003, vmax=0.000003, ax=ax[3]) ax[0].set_title(title) plot_map(metrics.isel(set=1, timeres=1, model=2).mef, robust=True) fig, axes = subplots_robinson(4, 4, figsize=(26, 16), gridspec_kw={'hspace': 0.0, 'wspace': 0.0}) # use mef, bias, metrics in columns for i, (ds, title) in enumerate(zip([met_wn, met_nn, met_ww , met_nw], ['with SM\nno perm', 'no SM\nno perm', 'with SM\nwith perm', 'no SM\nwith perm'])): ax = axes[:, i] plot_map(ds.corr, vmin=0.5, vmax=1, ax=ax[0]) plot_map(ds.rmse, vmin=0, vmax=0.00001, ax=ax[1]) plot_map(ds.mef, vmin=0, vmax=1, ax=ax[2]) plot_map(ds.bias, vmin=-0.000003, vmax=0.000003, ax=ax[3]) ax[0].set_title(title) fig, axes = subplots_robinson(4, 4, figsize=(26, 16), gridspec_kw={'hspace': 0.0, 'wspace': 0.0}) for i, (ds, title) in enumerate(zip([met_wn, met_nn, met_ww , met_nw], ['with SM\nno perm', 'no SM\nno perm', 'with SM\nwith perm', 'no SM\nwith perm'])): ax = axes[:, i] plot_map(ds.corr, vmin=0.5, vmax=1, ax=ax[0]) plot_map(ds.rmse, vmin=0, vmax=0.00001, ax=ax[1]) plot_map(ds.mef, vmin=0, vmax=1, ax=ax[2]) plot_map(ds.bias, vmin=-0.000003, vmax=0.000003, ax=ax[3]) ax[0].set_title(title) fig, axes = subplots_robinson(figsize=(26, 16), gridspec_kw={'hspace': 0.0, 'wspace': 0.0}) plot_map((met_nn.corr-met_nw.corr)-(met_ww.corr-met_nw.corr), ax=axes, vmin=0, robust=True) fig, axes = subplots_robinson(figsize=(26, 16), gridspec_kw={'hspace': 0.0, 'wspace': 0.0}) plot_map((met_nn.corr-met_nw.corr)-(met_ww.corr-met_nw.corr), ax=axes, vmin=0, robust=True) dst_nn_msc = dst_nn.groupby('time.dayofyear').mean('time') dst_nn with ProgressBar(): msc = dst_nn.groupby('time.dayofyear').mean('time') msc.to_zarr('/scratch/dl_chapter14/experiments/et/n_sm.n_perm/inference/pred_to_msc.zarr/') with ProgressBar(): msc = dst_ww.groupby('time.dayofyear').mean('time') 
msc.to_zarr('/scratch/dl_chapter14/experiments/et/w_sm.w_perm/inference/pred_to_msc.zarr/') with ProgressBar(): msc = dst_nw.groupby('time.dayofyear').mean('time') msc.to_zarr('/scratch/dl_chapter14/experiments/et/n_sm.w_perm/inference/pred_to_msc.zarr/') with ProgressBar(): msc = dst_wn.groupby('time.dayofyear').mean('time') msc.to_zarr('/scratch/dl_chapter14/experiments/et/w_sm.n_perm/inference/pred_to_msc.zarr/') with ProgressBar(): msc = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.n_perm/inference/pred_to_msc.zarr/') (dst_nn - msc).to_zarr('/scratch/dl_chapter14/experiments/et/n_sm.n_perm/inference/pred_to_ano.zarr/') with ProgressBar(): msc = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.w_perm/inference/pred_to_msc.zarr/') (dst_ww - msc).to_zarr('/scratch/dl_chapter14/experiments/et/w_sm.w_perm/inference/pred_to_ano.zarr/') with ProgressBar(): msc = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.w_perm/inference/pred_to_msc.zarr/') (dst_nw - msc).to_zarr('/scratch/dl_chapter14/experiments/et/n_sm.w_perm/inference/pred_to_ano.zarr/') with ProgressBar(): msc = xr.open_zarr('/scratch/dl_chapter14/experiments/et/w_sm.n_perm/inference/pred_to_msc.zarr/') (dst_wn - msc).to_zarr('/scratch/dl_chapter14/experiments/et/w_sm.n_perm/inference/pred_to_ano.zarr/') (dst_nn - msc) msc msc = xr.open_zarr('/scratch/dl_chapter14/experiments/et/n_sm.n_perm/inference/pred_to_msc.zarr/') msc.sel(lat=45, lon=8, method='nearest').mod.plot() msc.sel(lat=45, lon=8, method='nearest').obs.plot() xr.apply_ufunc( calc_msc_ano, dst_nn, dask='parallelized', output_dtypes=[float] ).compute() with ProgressBar(): dst_nn_msc = dst_nn.groupby('time.dayofyear').mean('time').compute() def plot_time_series( x, y, xlabel='mod', ylabel='obs', timeagg=None, title='', ax=None, figsize=(15, 5), subplot_kw={}, **kwargs): if ax is None: plt.figure(figsize=figsize) ax = plt.subplot(**subplot_kw) if timeagg is not None: x = x.resample(time=timeagg).mean() y = y.resample(time=timeagg).mean() time = x.time.values x_m = x.mean(dim=['lat', 'lon']) y_m = y.mean(dim=['lat', 'lon']) x_qs = xr_quantile(x, [0.25, 0.75], dim=['lat', 'lon']) y_qs = xr_quantile(y, [0.25, 0.75], dim=['lat', 'lon']) ax.fill_between(time, x_qs.isel(quantile=0), x_qs.isel(quantile=1), alpha=0.2, facecolor='tab:blue', label=r'$modeled_{q0.2-0.8}$') ax.fill_between(time, y_qs.isel(quantile=0), y_qs.isel(quantile=1), alpha=0.2, facecolor='k', label=r'$observed_{q0.2-0.8}$') ax.plot(time, x_m, alpha=1.0, color='tab:blue', linewidth=2., label=r'$modeled_{median}$') ax.plot(time, y_m, color='k', linewidth=1.2, linestyle='--', label=r'$observed_{median}$') # ax.text(0.05, 0.95, f'r={r:.3f}', horizontalalignment='left', verticalalignment='top', transform=ax.transAxes) # ax.set_ylabel(var) ax.legend() ax.patch.set_facecolor('white') plot_time_series(dss.mod, dss.obs) plot_time_series(dss.mod, dss.obs) ts = dst_nw.sel(lat=46, lon=8, method='nearest') ts plt.figure(figsize=(35, 7)) ts.obs.isel(time=slice(0, 1000)).plot(alpha=0.5, color='k') ts.mod.isel(time=slice(0, 1000)).plot(color='orangered', alpha=0.5) %time dst_nn_msc = dst_nn.groupby('time.dayofyear').mean().compute() ts_msc.mod.plot() dst_nn_msc dst_nn_msc.mod.values[1, 1, 1] plot_map(dst_nn_msc.mod.isel(dayofyear=0)) plot_map(dst_nn_msc.obs.isel(dayofyear=0)) def _single_xr_quantile(x, q, dim): if isinstance(dim, str): dim = [dim] ndims = len(dim) axes = tuple(np.arange(ndims)-ndims) m = xr.apply_ufunc( np.nanquantile, x, input_core_dims=[dim], dask='parallelized', 
output_dtypes=[float], keep_attrs=True, kwargs={'q': q, 'axis': axes}) m.name = 'quantile' return m def xr_quantile(x, q, dim): if not hasattr([1, 2], '__iter__'): q = [q] quantiles = [] for i, q_ in enumerate(q): r = _single_xr_quantile(x, q_, dim).compute() quantiles.append(r) quantiles = xr.concat(quantiles, 'quantile') quantiles['quantile'] = q quantiles.attrs.update({**x.attrs}) return quantiles qs = xr_quantile(dss.obs, [0.25, 0.5, 0.75], dim=['lat', 'lon']) qs qs.isel(quantile=0).plot() qs.isel(quantile=1).plot() qs.isel(quantile=2).plot() med. xr.merge(qs) med = xr_quantile(dss.obs, [0.25, 0.75], dim=['lat', 'lon']).compute() plt.plot(med) med = xr_median(dst.obs, dim=['time']).compute() plot_map(med) plt.figure(figsize=(30, 7)) plt.plot(dss.obs.mean(('lat', 'lon')), 'k--', label='obs global mean') plt.plot(dss.mod.mean(('lat', 'lon')), alpha=0.5, label='mod global mean') plt.legend() import numpy as np import matplotlib.pyplot as plt def f0(x): return x**(-0.5) def f1(x): return x**(-0.2) + 0.01 * x def f_gl(x): gl = np.zeros_like(x) for t, v in enumerate(x): gl[t] = 100 * (v / np.min(x[:t+1]) - 1) return gl def f_min(x): m = np.zeros_like(x) for t, v in enumerate(x): m[t] = np.min(x[:t+1]) return m t = np.linspace(0, 60, 400) e_tr = f0(t) e_va = f1(t) gl = f_gl(e_va) gl_tr = f_gl(e_tr) plt.plot(e_va) plt.plot(e_tr) plt.plot(gl) # plt.plot(gl_tr) def f_gl(x): per_impr = np.zeros_like(x) for t in range(1, len(x)): per_impr[t] = 100 * (1 - x[t] / x[t-1]) return per_impr gl = f_gl(e_va) plt.plot(e_va[1:]) plt.plot(gl[1:]) import pandas as pd df = pd.read_csv('/scratch/dl_chapter14/experiments/hydro/default/tune/hydro/Emulator_117_dropout_in=0.6,dropout_linear=0.1,dropout_lstm=0.6,batch_size=32,dynamic_path=_scratch_dl_chapter14_input_dynamic_gsw_2019-11-14_08-49-30_h07l5k6/progress.csv') df.keys() df plt.plot(df['wloss_eval']) plt.plot(df['loss_eval']) plt.plot(df['perc_improved']) plt.plot(df['patience_counter']) l = {'a': 1, 'b': 2} options_str = ", ".join(l.keys()) print(f'[{options_str}]') print(f'{getattr.__name__}') from datetime import datetime tic = datetime.now() tic.strftime("%m/%d/%Y, %H:%M:%S") toc = datetime.now() elapsed = toc- tic elapsed.seconds mins = int(elapsed.seconds/60) mins secs = int(elapsed.seconds - 60 * mins) secs import xarray as xr import numpy as np import zarr ds_ssd = zarr.open_group('/scratch/dl_chapter14/experiments/hydro/default/pred/predictions.zarr/') ds_ram = zarr.open_group('/run/user/196') ds_ram.tree() def read_zarr(ds, n): for i in range(n): lat = np.random.choice(360) lon = np.random.choice(720) s = ds['mrro'][:, lat, lon] + 1 %time read_zarr(ds_ssd, 1000) %time read_zarr(ds_ssd, 1000) ls -l /run/user/1968 /scratch/ import torch import numpy as np class RNN(torch.nn.Module): def __init__(self): super(RNN, self).__init__() self.rnn = torch.nn.RNN(1, 5) def forward(self, x): out = self.rnn(x) return out import xarray as xr ds = xr.open_zarr('/scratch/dl_chapter14/experiments/hydro/default/pred/predictions.zarr/') ds pred = xr.Dataset({ 'obs': xr.DataArray(ds.values, coords=[ds.lat, ds.lon, ds.time]), 'mod': xr.DataArray(ds.values, coords=[ds.lat, ds.lon, ds.time]) }) pred.attrs = ds.attrs xr.Dataset({ 'mod': xr.DataArray(ds.mrro.values, coords=[dslat, ds.lon, ds.time), 'obs': xr.ones_like(ds.coords) * ds.mrro.values }) ds ds = xr.open_zarr('/scratch/dl_chapter14/target/dynamic/koirala2017.zarr/') ds plot_map(ds.wtd.isel(time=0), robust=True) p = xr.open_zarr('/scratch/hydrodl/data/bucket.zarr/prec/') p_missing = 
p.data.isnull().sum('time').compute() plot_map(p_missing) xr.open_zarr('/scratch/dl_chapter14/target/dynamic/koirala2017.zarr/') xr.open_dataset('/workspace/BGI/people/skoirala/spmip/matsiro-gw.run1_20180805/matsiro-gw_experiment_1_1982.nc') ds_frac = xr.open_dataset('/workspace/BGI/people/skoirala/spmip/matsiro-gw.run1_20180805/matsiro-gw_experiment_1_1982.nc')['mrlslfrac'] ds = xr.open_dataset('/workspace/BGI/people/skoirala/spmip/matsiro-gw.run1_20180805/matsiro-gw_experiment_1_1982.nc')['rzwc'] ds ds_frac_mean = ds_frac.mean(('time', 'level')) ds_mean = ds.mean(('time', 'level')) plt.figure(figsize=(17, 12)) plt.imshow(ds_frac_mean.values, vmin=0.2, vmax=0.6) plt.figure(figsize=(17, 12)) plt.imshow(ds_mean.values) plot_map(ds_mean) "et", "tws", "mrro", "wtd", "mrlslfrac" t = xr.open_dataset('/workspace/BGI/people/skoirala/spmip/full_matsiro-gw_exp3.run1_20180805_latlonReverse/full_matsiro-gw_exp3_experiment_3_1950.nc') t plot_map(t.mrlslfrac.isel(levelc=1, time=0), robust=True) ls -l /workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/ e ###Output _____no_output_____ ###Markdown input vars| var | dims | path || --- | --- | --- || Rainf | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/Rainf/' || Snowf | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/Snowf/' || SWdown | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/SWdown/' || LWdown | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/LWdown/' || Tair | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/Tair/' || Wind | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/Wind/' || Qair | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/Qair/' || PSurf | lat, lon, time | '/workspace/BGI/data/DataStructureMDI/DATA/grid/Global/0d50_daily/GSWP3/EXP1/Data/PSurf/' || lai | lat, lon, time | '/workspace/BGI/work_3/dl_chapter14/input_data/org_data/matsiro-gw_lai.nc' || ccover | lat, lon, time | '/workspace/BGI/work_3/dl_chapter14/input_data/org_data/matsiro-gw_lai.nc' || PFT | lat, lon | '/workspace/BGI/work_3/dl_chapter14/input_data/org_data/matsiro-gw_PFT.nc' || soil_properties | lat, lon | '/workspace/BGI/work_3/dl_chapter14/input_data/org_data/matsiro-gw_soil_propreties.nc' | Target vars| var | dims | path || --- | --- | --- || et (also input) | lat, lon, time | '/workspace/BGI/people/skoirala/spmip/full_matsiro-gw_exp3.run1_20180805_latlonReverse/full_matsiro-gw_exp3_experiment_3_YYYY.nc' || mrlslfrac (mean of first 4 levels) | lat, lon, time | '/workspace/BGI/people/skoirala/spmip/full_matsiro-gw_exp3.run1_20180805_latlonReverse/full_matsiro-gw_exp3_experiment_3_YYYY.nc' || tws | lat, lon, time | '/workspace/BGI/people/skoirala/spmip/full_matsiro-gw_exp3.run1_20180805_latlonReverse/full_matsiro-gw_exp3_experiment_3_YYYY.nc' || mrro | lat, lon, time | '/workspace/BGI/people/skoirala/spmip/full_matsiro-gw_exp3.run1_20180805_latlonReverse/full_matsiro-gw_exp3_experiment_3_YYYY.nc' | ###Code import xarray as xr d = xr.open_dataset('/workspace/BGI/work_3/dl_chapter14/input_data/org_data/matsiro-gw_soil_propreties.nc') d plot_map(d.PFT) from matplotlib import animation import cartopy import cartopy.crs as ccrs from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, 
LATITUDE_FORMATTER from matplotlib.colors import LogNorm from matplotlib import animation, rc from IPython.display import HTML, Image rc('animation', html='html5') %matplotlib inline ds = xr.open_zarr('/scratch/hydrodl/data/bucket.zarr/et/').data ds.isel(time=[0, 4, 8]).values.shape ax = plt.subplot(projection=ccrs.Robinson()) ax.set_global() ax.coastlines(color='white') ax.gridlines() ax.imshow(ds.isel(time=[0, 4, 8]).values.transpose(1, 2, 0)[-1::-1,:,:], transform=ccrs.PlateCarree()) # (ds.isel(time=[0, 4, 8])).plot.imshow(robust=True, ax=ax, transform=ccrs.PlateCarree()) import cartopy.crs as ccrs import xarray import matplotlib.pyplot as plt #ds = xr.load_dataset(...) ax = plt.subplot(projection=ccrs.Robinson()) (ds.isel(time=[0, 4, 8])).plot.imshow(ax=ax, transform=ccrs.PlateCarree()) ax = plt.subplot(projection=ccrs.Robinson()) ax.imshow(ds.isel(time=[0, 4, 8]).transpose('lat', 'lon', 'time').values[-1::-1, :, :], transform=ccrs.PlateCarree()) ax = plt.subplot(projection=ccrs.Robinson()) plt.plot(ds.isel(time=[0, 4, 8]).transpose('lat', 'lon', 'time').values[-1::-1, :, :], transform=ccrs.PlateCarree(), ax=ax) import cartopy.crs as ccrs import xarray import matplotlib.pyplot as plt #ds = xr.load_dataset(...) ax = plt.subplot(projection=ccrs.Robinson()) (ds.isel(time=0)).plot.imshow(robust=True, ax=ax, transform=ccrs.PlateCarree(), add_colorbar=False) xr.__version__ mpl.__version__ ds et fig_width_pt = 443.57848 # Get this from LaTeX using \showthe\columnwidth inches_per_pt = 1.0/72. # Convert pt to inches golden_mean = (np.sqrt(5)-1.0)/2.0 # Aesthetic ratio fig_width = fig_width_pt*inches_per_pt # width in inches fig_height =fig_width*golden_mean # height in inches fig_size = [fig_width,fig_height] pgf_with_latex = { # setup matplotlib to use latex for output "pgf.texsystem": "xelatex", # change this if using xetex or lautex "text.usetex": True, # use LaTeX to write all text "font.family": "serif", "font.serif": [], # blank entries should cause plots to inherit fonts from the document "font.sans-serif": [], "font.monospace": [], "axes.labelsize": 9, # LaTeX default is 10pt font. "font.size": 7, "legend.fontsize": 9, # Make the legend/label fonts a little smaller "xtick.labelsize": 7, "ytick.labelsize": 7, "figure.figsize": fig_size, # default fig size of 0.9 textwidth "pgf.preamble": [ r"\usepackage[utf8x]{inputenc}", # use utf8 fonts becasue your computer can handle it :) r"\usepackage[T1]{fontenc}", # plots will be generated using this preamble ] } mpl.rcParams.update(pgf_with_latex) pgf_with_latex = { # setup matplotlib to use latex for output "pgf.texsystem": "xelatex", # change this if using xetex or lautex "text.usetex": True, # use LaTeX to write all text "font.family": "serif", "font.serif": [], # blank entries should cause plots to inherit fonts from the document "font.sans-serif": [], "font.monospace": [], "axes.labelsize": 9, # LaTeX default is 10pt font. 
"font.size": 7, "legend.fontsize": 9, # Make the legend/label fonts a little smaller "xtick.labelsize": 7, "ytick.labelsize": 7, "figure.figsize": fig_size, # default fig size of 0.9 textwidth "pgf.preamble": [ r"\usepackage[utf8x]{inputenc}", # use utf8 fonts becasue your computer can handle it :) r"\usepackage[T1]{fontenc}", # plots will be generated using this preamble ] } mpl.rcParams.update(pgf_with_latex) def savefig(filename, **kwargs): #plt.savefig('{}.pgf'.format(filename), pad_inches = 0, bbox_inches='tight') plt.savefig('{}.pdf'.format(filename), pad_inches = 0, bbox_inches='tight', **kwargs) fig, ax = plt.subplots() ax.scatter([1, 1], [2, 2]) ax.set_title('The title') ax.set_xlabel('The x axis') ax.set_ylabel('The y axis') savefig('/workspace/bkraft/dl_chapter14/src/notebooks/test') fig_size fig, axes = subplots_robinson(4, 2, figsize=(6.160812222222222, 6.160812222222222*1.2), gridspec_kw={'wspace': 0, 'hspace': 0.01}) for i, met in enumerate(['mef', 'rmse']): for j, (mod, mod_name) in enumerate(zip(['wn', 'nn', 'ww', 'nw'], [r'$\mathrm{LSTM_{SM}}$', r'$\mathrm{LSTM_{\neg SM}}$', r'$\mathrm{FC_{SM}}$', r'$\mathrm{FC_{\neg SM}}$'])): ax = axes[j, i] dt = metrics[met].sel(model=mod, timeres='daily', set='raw', cvset='eval') label = f"{'NSE' if met=='mef' else 'RMSE'} ({'-' if met=='mef' else r'mm d-1'})" plot_map( dt, label=' ', vmin=0 if met=='mef' else 0.1, vmax=1 if met=='mef' else 1, cmap='plasma' if met=='mef' else 'plasma_r', ax=ax, histogram_placement=[0.05, 0.27, 0.2, 0.25], hist_kw={'bins': 20, 'edgecolor': 'none'}, cbar_kwargs={'extend': 'min'}, rasterize=True) ax.set_title('') if i == 0: ax.text(-0.02, 0.45, mod_name, horizontalalignment='right', verticalalignment='center', transform=ax.transAxes, rotation=90, size=9) if j == 0: ax.set_title(label, size=9) savefig('/workspace/bkraft/dl_chapter14/src/notebooks/test') #fig.savefig('/workspace/bkraft/dl_chapter14/src/notebooks/exp1_figures/map_mef_bias_raw_daily.png', dpi=300, bbox_inches='tight') from inspect import getmembers, isclass import matplotlib import matplotlib.pyplot as plt import numpy as np def rasterize_and_save(fname, rasterize_list=None, fig=None, dpi=None, savefig_kw={}): """Save a figure with raster and vector components This function lets you specify which objects to rasterize at the export stage, rather than within each plotting call. Rasterizing certain components of a complex figure can significantly reduce file size. 
Inputs ------ fname : str Output filename with extension rasterize_list : list (or object) List of objects to rasterize (or a single object to rasterize) fig : matplotlib figure object Defaults to current figure dpi : int Resolution (dots per inch) for rasterizing savefig_kw : dict Extra keywords to pass to matplotlib.pyplot.savefig If rasterize_list is not specified, then all contour, pcolor, and collects objects (e.g., ``scatter, fill_between`` etc) will be rasterized Note: does not work correctly with round=True in Basemap Example ------- Rasterize the contour, pcolor, and scatter plots, but not the line >>> import matplotlib.pyplot as plt >>> from numpy.random import random >>> X, Y, Z = random((9, 9)), random((9, 9)), random((9, 9)) >>> fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(ncols=2, nrows=2) >>> cax1 = ax1.contourf(Z) >>> cax2 = ax2.scatter(X, Y, s=Z) >>> cax3 = ax3.pcolormesh(Z) >>> cax4 = ax4.plot(Z[:, 0]) >>> rasterize_list = [cax1, cax2, cax3] >>> rasterize_and_save('out.svg', rasterize_list, fig=fig, dpi=300) """ # Behave like pyplot and act on current figure if no figure is specified fig = plt.gcf() if fig is None else fig # Need to set_rasterization_zorder in order for rasterizing to work zorder = -5 # Somewhat arbitrary, just ensuring less than 0 if rasterize_list is None: # Have a guess at stuff that should be rasterised types_to_raster = ['QuadMesh', 'Contour', 'collections'] rasterize_list = [] print(""" No rasterize_list specified, so the following objects will be rasterized: """) # Get all axes, and then get objects within axes for ax in fig.get_axes(): for item in ax.get_children(): if any(x in str(item) for x in types_to_raster): rasterize_list.append(item) print('\n'.join([str(x) for x in rasterize_list])) else: # Allow rasterize_list to be input as an object to rasterize if type(rasterize_list) != list: rasterize_list = [rasterize_list] for item in rasterize_list: # Whether or not plot is a contour plot is important is_contour = (isinstance(item, matplotlib.contour.QuadContourSet) or isinstance(item, matplotlib.tri.TriContourSet)) # Whether or not collection of lines # This is commented as we seldom want to rasterize lines # is_lines = isinstance(item, matplotlib.collections.LineCollection) # Whether or not current item is list of patches all_patch_types = tuple( x[1] for x in getmembers(matplotlib.patches, isclass)) try: is_patch_list = isinstance(item[0], all_patch_types) except TypeError: is_patch_list = False # Convert to rasterized mode and then change zorder properties if is_contour: curr_ax = item.ax.axes curr_ax.set_rasterization_zorder(zorder) # For contour plots, need to set each part of the contour # collection individually for contour_level in item.collections: contour_level.set_zorder(zorder - 1) contour_level.set_rasterized(True) elif is_patch_list: # For list of patches, need to set zorder for each patch for patch in item: curr_ax = patch.axes curr_ax.set_rasterization_zorder(zorder) patch.set_zorder(zorder - 1) patch.set_rasterized(True) else: # For all other objects, we can just do it all at once curr_ax = item.axes curr_ax.set_rasterization_zorder(zorder) item.set_rasterized(True) item.set_zorder(zorder - 1) # dpi is a savefig keyword argument, but treat it as special since it is # important to this function if dpi is not None: savefig_kw['dpi'] = dpi # Save resulting figure fig.savefig(fname, **savefig_kw) fig, ax = subplots_robinson() plot_map(metrics['mef'].sel(model='nn', timeres='daily', set='raw', cvset='eval'), robust=True, 
rasterized=True, ax=ax) fig.savefig('/workspace/bkraft/plots/test.pdf') img.axes.artists[0] fig, ax = new_subplots(2, 2, 0.7, sharey='row', sharex=True, gridspec_kw={'wspace': 0.01, 'hspace': 0.1}) sb.set_style("whitegrid") sb.boxplot(x="set", y="mef", hue="model", linewidth=0.5, data=df.loc[(df['timeres']=='daily') & (df['cvset']=='train'), :], showfliers=False, ax=ax[0, 0], order=order, hue_order=hue_order, palette='Paired') sb.boxplot(x="set", y="mef", hue="model", linewidth=0.5, data=df.loc[(df['timeres']=='daily') & (df['cvset']=='eval'), :], showfliers=False, ax=ax[0, 1], order=order, hue_order=hue_order, palette='Paired') sb.boxplot(x="set", y="rmse", hue="model", linewidth=0.5, data=df.loc[(df['timeres']=='daily') & (df['cvset']=='train'), :], showfliers=False, ax=ax[1, 0], order=order, hue_order=hue_order, palette='Paired') sb.boxplot(x="set", y="rmse", hue="model", linewidth=0.5, data=df.loc[(df['timeres']=='daily') & (df['cvset']=='eval'), :], showfliers=False, ax=ax[1, 1], order=order, hue_order=hue_order, palette='Paired') ax[0, 0].legend().set_visible(False) ax[0, 1].legend().set_visible(False) ax[1, 0].legend().set_visible(False) ax[1, 1].legend().set_visible(False) legend = ax[1, 0].legend(loc='upper center', bbox_to_anchor=(1.0, -0.15), ncol=4, title='', frameon=False) for t, l in zip(legend.texts, [r'$\mathrm{LSTM_{SM}}$', r'$\mathrm{LSTM_{\neg SM}}$', r'$\mathrm{FC_{SM}}$', r'$\mathrm{FC_{\neg SM}}$']): t.set_text(l) ax[1, 0].set_xticklabels(['daily', 'daily\nseas. cycle', 'daily\nanomalies', 'interannual\nanomalies']); ax[1, 1].set_xticklabels(['daily', 'daily\nseas. cycle', 'daily\nanomalies', 'interannual\nanomalies']); ax[1, 0].set_xlabel('', fontsize=12) ax[1, 1].set_xlabel('', fontsize=12) ax[0, 0].set_ylabel('NSE (-)') ax[1, 0].set_ylabel('RMSE (mm d-1)') ax[0, 1].set_ylim(-0.6, 1.001); ax[1, 1].set_ylim(0, 1.0); for ax_ in ax.flat: ax_.label_outer() # ax_.grid(axis='y', linestyle='--', color='0.2') ax[0, 0].set_title('training', size=9) ax[0, 1].set_title('test', size=9) fig.align_ylabels(ax[:, 0]) # fig.savefig('/workspace/bkraft/dl_chapter14/src/notebooks/exp1_figures/boxplot.pgf') # savefig('/workspace/bkraft/dl_chapter14/src/notebooks/exp1_figures/boxplot') ###Output _____no_output_____
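###Markdown
The NSE (`mef`) and RMSE values boxplotted and mapped above come from a `get_metrics` helper that is not defined in the cells shown here. Purely as an illustration of what a per-grid-cell NSE and RMSE along the time dimension amount to, the following is a minimal sketch using plain xarray reductions; the function name `simple_metrics`, its signature, and the assumption that `mod` and `obs` share the reduction dimension are assumptions for this sketch, not the project's actual implementation.
###Code
import xarray as xr


def simple_metrics(mod, obs, dim='time'):
    """Illustrative per-pixel NSE ('mef') and RMSE along `dim` (sketch only)."""
    err = mod - obs
    # RMSE: root of the mean squared error along the reduction dimension
    rmse = (err ** 2).mean(dim) ** 0.5
    # NSE / model efficiency: 1 - SSE / sum of squared deviations of the observations
    # (grid cells with constant observations yield NaN/inf here and would need masking)
    sse = (err ** 2).sum(dim)
    sst = ((obs - obs.mean(dim)) ** 2).sum(dim)
    mef = 1.0 - sse / sst
    return xr.Dataset({'mef': mef, 'rmse': rmse})


# Hypothetical usage, mirroring the get_metrics calls above:
# m = simple_metrics(dst_nn['mod'], dst_nn['obs'], dim='time')
# plot_map(m['mef'])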
3_5_classifying_newswires_128Units.ipynb
###Markdown
###Code
import keras
keras.__version__
###Output
_____no_output_____
###Markdown
The Reuters dataset
We will be working with the _Reuters dataset_, a set of short newswires and their topics, published by Reuters in 1986. It's a very simple, widely used toy dataset for text classification. There are 46 different topics; some topics are more represented than others, but each topic has at least 10 examples in the training set. Like IMDB and MNIST, the Reuters dataset comes packaged as part of Keras. Let's take a look right away:
###Code
from keras.datasets import reuters

(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/reuters.npz
2113536/2110848 [==============================] - 0s 0us/step
###Markdown
Preparing the data
We can vectorize the data with the exact same code as in our previous example:
###Code
import numpy as np

def vectorize_sequences(sequences, dimension=10000):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
Building our network
This topic classification problem looks very similar to our previous movie review classification problem: in both cases, we are trying to classify short snippets of text. There is however a new constraint here: the number of output classes has gone from 2 to 46, i.e. the dimensionality of the output space is much larger. In a stack of `Dense` layers like what we were using, each layer can only access information present in the output of the previous layer. If one layer drops some information relevant to the classification problem, this information can never be recovered by later layers: each layer can potentially become an "information bottleneck". In our previous example, we were using 16-dimensional intermediate layers, but a 16-dimensional space may be too limited to learn to separate 46 different classes: such small layers may act as information bottlenecks, permanently dropping relevant information. For this reason we will use larger layers.
Let's go with 64 units: ###Code from keras import models from keras import layers model = models.Sequential() model.add(layers.Dense(128, activation='relu', input_shape=(10000,))) model.add(layers.Dense(128, activation='relu')) model.add(layers.Dense(128, activation='relu')) model.add(layers.Dense(46, activation='softmax')) model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['accuracy']) ###Output _____no_output_____ ###Markdown Validating our approachLet's set apart 1,000 samples in our training data to use as a validation set: ###Code y_train = np.array(train_labels) y_test = np.array(test_labels) x_val = x_train[:1000] partial_x_train = x_train[1000:] y_val = y_train[:1000] partial_y_train = y_train[1000:] ###Output _____no_output_____ ###Markdown Now let's train our network for 20 epochs: ###Code history = model.fit(partial_x_train, partial_y_train, epochs=20, batch_size=512, validation_data=(x_val, y_val)) ###Output Epoch 1/20 16/16 [==============================] - 1s 78ms/step - loss: 2.2592 - accuracy: 0.5298 - val_loss: 1.4837 - val_accuracy: 0.6720 Epoch 2/20 16/16 [==============================] - 1s 69ms/step - loss: 1.1801 - accuracy: 0.7364 - val_loss: 1.1731 - val_accuracy: 0.7530 Epoch 3/20 16/16 [==============================] - 1s 67ms/step - loss: 0.8238 - accuracy: 0.8196 - val_loss: 1.0787 - val_accuracy: 0.7730 Epoch 4/20 16/16 [==============================] - 1s 67ms/step - loss: 0.6082 - accuracy: 0.8664 - val_loss: 0.9800 - val_accuracy: 0.8070 Epoch 5/20 16/16 [==============================] - 1s 67ms/step - loss: 0.4419 - accuracy: 0.9059 - val_loss: 0.9105 - val_accuracy: 0.8110 Epoch 6/20 16/16 [==============================] - 1s 67ms/step - loss: 0.3216 - accuracy: 0.9291 - val_loss: 0.9087 - val_accuracy: 0.8170 Epoch 7/20 16/16 [==============================] - 1s 69ms/step - loss: 0.2684 - accuracy: 0.9370 - val_loss: 0.9637 - val_accuracy: 0.8050 Epoch 8/20 16/16 [==============================] - 1s 67ms/step - loss: 0.2010 - accuracy: 0.9501 - val_loss: 1.0705 - val_accuracy: 0.7910 Epoch 9/20 16/16 [==============================] - 1s 67ms/step - loss: 0.1847 - accuracy: 0.9524 - val_loss: 0.9898 - val_accuracy: 0.8080 Epoch 10/20 16/16 [==============================] - 1s 67ms/step - loss: 0.1669 - accuracy: 0.9548 - val_loss: 1.1100 - val_accuracy: 0.7870 Epoch 11/20 16/16 [==============================] - 1s 67ms/step - loss: 0.1508 - accuracy: 0.9533 - val_loss: 1.0103 - val_accuracy: 0.8090 Epoch 12/20 16/16 [==============================] - 1s 67ms/step - loss: 0.1370 - accuracy: 0.9543 - val_loss: 1.0698 - val_accuracy: 0.8030 Epoch 13/20 16/16 [==============================] - 1s 66ms/step - loss: 0.1414 - accuracy: 0.9559 - val_loss: 1.0541 - val_accuracy: 0.8040 Epoch 14/20 16/16 [==============================] - 1s 68ms/step - loss: 0.1234 - accuracy: 0.9569 - val_loss: 1.2184 - val_accuracy: 0.7830 Epoch 15/20 16/16 [==============================] - 1s 68ms/step - loss: 0.1218 - accuracy: 0.9564 - val_loss: 1.1184 - val_accuracy: 0.7990 Epoch 16/20 16/16 [==============================] - 1s 68ms/step - loss: 0.1158 - accuracy: 0.9577 - val_loss: 1.1319 - val_accuracy: 0.8040 Epoch 17/20 16/16 [==============================] - 1s 68ms/step - loss: 0.1095 - accuracy: 0.9579 - val_loss: 1.1767 - val_accuracy: 0.8000 Epoch 18/20 16/16 [==============================] - 1s 67ms/step - loss: 0.1051 - accuracy: 0.9579 - val_loss: 1.2685 - val_accuracy: 0.7990 Epoch 19/20 16/16 
[==============================] - 1s 68ms/step - loss: 0.1086 - accuracy: 0.9560 - val_loss: 1.1939 - val_accuracy: 0.7980 Epoch 20/20 16/16 [==============================] - 1s 67ms/step - loss: 0.1020 - accuracy: 0.9570 - val_loss: 1.1264 - val_accuracy: 0.8030 ###Markdown Let's display its loss and accuracy curves: ###Code import matplotlib.pyplot as plt loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(loss) + 1) plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() history.history.keys() plt.clf() # clear figure acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend() plt.show() ###Output _____no_output_____ ###Markdown It seems that the network starts overfitting after 5 epochs. Let's train a new network from scratch for 5 epochs, then let's evaluate it on the test set: ###Code model = models.Sequential() model.add(layers.Dense(128, activation='relu', input_shape=(10000,))) model.add(layers.Dense(128, activation='relu')) model.add(layers.Dense(128, activation='relu')) model.add(layers.Dense(46, activation='softmax')) model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(partial_x_train, partial_y_train, epochs=5, batch_size=512, validation_data=(x_val, y_val)) results = model.evaluate(x_test, y_test) results ###Output _____no_output_____ ###Markdown Generating predictions on new dataWe can verify that the `predict` method of our model instance returns a probability distribution over all 46 topics. Let's generate topic predictions for all of the test data: ###Code predictions = model.predict(x_test) ###Output _____no_output_____ ###Markdown Each entry in `predictions` is a vector of length 46: ###Code predictions[0] ###Output _____no_output_____
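###Markdown A quick check of that claim (a small added sketch that reuses `predictions` and `y_test` from the cells above): each row of `predictions` should sum to roughly 1, since the last layer is a softmax, and the index of the largest entry is the predicted topic. ###Code
import numpy as np

# The softmax output for one newswire sums to ~1.
print(np.sum(predictions[0]))

# The predicted topic index for that newswire is the argmax of the distribution.
print(np.argmax(predictions[0]))

# A rough test-set accuracy computed directly from the predictions; it should be
# in line with the result of model.evaluate() above.
predicted_classes = np.argmax(predictions, axis=1)
print(np.mean(predicted_classes == y_test))
###Output _____no_output_____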
notebooks/genmodel/BirthProcess.ipynb
###Markdown Birth Process ###Code import numpy as np import numpy.random as rd import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline sns.set('poster', 'whitegrid', 'dark', rc={"lines.linewidth": 2, 'grid.linestyle': '-'}) import warnings warnings.filterwarnings('ignore') ###Output _____no_output_____ ###Markdown Run the simulation once ###Code rd.seed(20200801) beta = 0.0006 N = 10000 a = np.zeros(N+1, dtype=int) a[0] = 1 for n in range(N): a[n+1] = a[n] + (1 if rd.random() < beta*a[n] else 0) fig = plt.figure(figsize=(10,6)) ax = fig.add_subplot(111) ax.set_xlim(0, 10000) ax.plot(a, ',k') ###Output _____no_output_____ ###Markdown Run the simulation 10 times ###Code def birth(beta): N = 10000 a = np.zeros(N+1, dtype=int) a[0] = 1 for n in range(N): a[n+1] = a[n] + (1 if rd.random() < beta*a[n] else 0) return a a = np.array([birth(beta) for k in range(10)]) fig = plt.figure(figsize=(10,6)) ax = fig.add_subplot(111) for k in range(10): ax.plot(a[k,:]) ###Output _____no_output_____ ###Markdown Plot the mean ###Code fig = plt.figure(figsize=(10,6)) ax = fig.add_subplot(111) ax.plot(np.mean(a, axis=0)) ###Output _____no_output_____ ###Markdown Semi-log plot ###Code fig = plt.figure(figsize=(10,6)) ax = fig.add_subplot(111) ax.plot(np.log(np.mean(a, axis=0))) ax.plot([beta*k for k in range(N+1)]) ###Output _____no_output_____ ###Markdown Variance ###Code fig = plt.figure(figsize=(10,6)) ax = fig.add_subplot(111) ax.plot(np.var(a, axis=0)) fig = plt.figure(figsize=(10,6)) ax = fig.add_subplot(111) ax.plot(np.log(np.var(a, axis=0))) ax.plot([2*beta*k + np.log(1 - np.exp(-beta*k)) for k in range(N+1)]) fig = plt.figure(figsize=(10,6)) ax = fig.add_subplot(111) ax.plot(np.std(a, axis=0)) ###Output _____no_output_____
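###Markdown Theoretical mean (an added sketch, not part of the original runs): since each step adds 1 with probability beta*a[n], the conditional mean satisfies E[a[n+1] | a[n]] = (1 + beta)*a[n] as long as beta*a[n] < 1, so E[a[n]] = (1 + beta)**n, which is approximately exp(beta*n). That is the straight line beta*k used in the semi-log plot above. The cell below overlays (1 + beta)**n on the simulated mean. ###Code
fig = plt.figure(figsize=(10,6))
ax = fig.add_subplot(111)
# Simulated mean over the 10 runs from above.
ax.plot(np.mean(a, axis=0), label='simulated mean')
# Analytic expectation E[a_n] = (1 + beta)^n.
n_vals = np.arange(N+1)
ax.plot((1 + beta)**n_vals, '--', label='(1 + beta)^n')
ax.legend()
###Output _____no_output_____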
notebooks/GSD/GSD Rpb1_orthologs_in_1011_genomes.ipynb
###Markdown GSD: Rpb1 orthologs in 1011 genomes collectionThis notebook collects Rpb1 gene and protein sequences from a collection of natural isolates of sequenced yeast genomes from [Peter et al. 2018](https://www.ncbi.nlm.nih.gov/pubmed/29643504), and then estimates the count of the heptad repeats. It builds directly on the notebook [here](GSD%20Rpb1_orthologs_in_PB_genomes.ipynb), which descends from [Searching for coding sequences in genomes using BLAST and Python](../Searching%20for%20coding%20sequences%20in%20genomes%20using%20BLAST%20and%20Python.ipynb). It also builds on the notebooks shown [here](https://nbviewer.jupyter.org/github/fomightez/cl_sq_demo-binder/blob/master/notebooks/GSD/GSD%20Add_Supplemental_data_info_to_nt_count%20data%20for%201011_cerevisiae_collection.ipynb) and [here](https://github.com/fomightez/patmatch-binder). Reference for sequence data: [Genome evolution across 1,011 Saccharomyces cerevisiae isolates. Peter J, De Chiara M, Friedrich A, Yue JX, Pflieger D, Bergström A, Sigwalt A, Barre B, Freel K, Llored A, Cruaud C, Labadie K, Aury JM, Istace B, Lebrigand K, Barbry P, Engelen S, Lemainque A, Wincker P, Liti G, Schacherer J. Nature. 2018 Apr;556(7701):339-344. doi: 10.1038/s41586-018-0030-5. Epub 2018 Apr 11. PMID: 29643504](https://www.ncbi.nlm.nih.gov/pubmed/29643504) ----- Overview![overview of steps](../../imgs/ortholog_mining_summarized.png) PreparationGet the necessary scripts and sequence data.**DO NOT 'RUN ALL'. AN INTERACTION IS NECESSARY AT CELL FIVE. AFTER THAT INTERACTION, THE REST BELOW IT CAN BE RUN.**(Caveat: right now this is written for genes with no introns. Only a few hundred genes in yeast have introns, and yeast is the organism in this example. Intron presence would only become important when trying to translate in late stages of this workflow.) ###Code gene_name = "RPB1" size_expected = 5202 get_seq_from_link = False link_to_FASTA_of_gene = "https://gist.githubusercontent.com/fomightez/f46b0624f1d8e3abb6ff908fc447e63b/raw/625eaba76bb54e16032f90c8812350441b753a0c/uz_S288C_YOR270C_VPH1_coding.fsa" #**Possible future enhancement would be to add getting the FASTA of the gene from Yeastmine with just systematic id** ###Output _____no_output_____ ###Markdown Get the `blast_to_df` script by running these commands. ###Code import os file_needed = "blast_to_df.py" if not os.path.isfile(file_needed): !curl -O https://raw.githubusercontent.com/fomightez/sequencework/master/blast-utilities/blast_to_df.py import pandas as pd ###Output _____no_output_____ ###Markdown **Now to get the entire collection or a subset of the 1011 genomes, the next cell will need to be edited.** I'll probably leave it with a small set for typical running purposes. However, to make it run fast, try the 'tiny' set with just two genomes. ###Code # Method to get ALL the genomes. TAKES A WHILE!!! # (ca. 1 hour and 15 minutes to download alone? + Extracting is a while.) # Easiest way to monitor the extracting step is to open a terminal, cd to # `GENOMES_ASSEMBLED`, & use `ls | wc -l` to count files extracted.
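# (An in-notebook alternative for that same count, a small added sketch assuming the
#  `GENOMES_ASSEMBLED` directory defined below is in place and the assemblies end in `.re.fa`:
#  `import glob; print(len(glob.glob("GENOMES_ASSEMBLED/*.re.fa")))`.)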
#!curl -O http://1002genomes.u-strasbg.fr/files/1011Assemblies.tar.gz #!tar xzf 1011Assemblies.tar.gz #!rm 1011Assemblies.tar.gz # Small development set small_set = True !curl -OL https://www.dropbox.com/s/f42tiygq9tr1545/medium_setGENOMES_ASSEMBLED.tar.gz !tar xzf medium_setGENOMES_ASSEMBLED.tar.gz # Tiny development set #!curl -OL https://www.dropbox.com/s/txufq2jflkgip82/tiny_setGENOMES_ASSEMBLED.tar.gz #!tar xzf tiny_setGENOMES_ASSEMBLED.tar.gz #!mv tiny_setGENOMES_ASSEMBLED GENOMES_ASSEMBLED #define directory with genomes genomes_dirn = "GENOMES_ASSEMBLED" ###Output _____no_output_____ ###Markdown Before processing the list of all of the assemblies, some data cleaning needs to be done. Specifically, we need to fix three files whose names do not match the contents of their description lines, so that they are consistent with the over 1,000 others, and for about a dozen files we need to fix the description lines to match the conventions used in all the others. The next two cells do these steps, but **the cells must not be run until the data is unpacked**. (I tried to build some checking into the first cell to alert the user.) *Note:* the tiny set doesn't contain any pertinent files, so you can just skip these two cells for that set. While the small set also doesn't contain any pertinent files, I built in conditionals so the cells can be run with either the small set or the entire collection of assemblies, as these are the sets most likely to be used. The tiny set is just for very simple debugging purposes. The next cell addresses the simple file name mismatches so that the file name matches the description line in all cases. ###Code # fix names of three files that don't match the naming convention (small set has none of them) if not small_set: import os import sys error2fix_dict = { #"CDH.re.fa":"CDH_3.re.fa", "CFH.re.fa":"CFH_4.re.fa", "CRL_.re.fa":"CRL_1.re.fa" } for fn,fn_fix in error2fix_dict.items(): output_file_name = "temp.txt" if os.path.isfile("GENOMES_ASSEMBLED/"+fn): sys.stderr.write("\nFile with name non-matching entries ('{}') observed and" " fixed.".format(fn)) !mv GENOMES_ASSEMBLED/{fn} GENOMES_ASSEMBLED/{fn_fix} #pause and then check if file with original name is there still because # it means this was attempted too soon and need to start over. import time time.sleep(12) #12 seconds if os.path.isfile("GENOMES_ASSEMBLED/"+fn): sys.stderr.write("\n***PROBLEM. TRIED THIS CELL BEFORE FINISHED UPLOADING.\n" "DELETE FILES ASSOCIATED AND START ALL OVER AGAIN WITH UPLOAD STEP***.") else: sys.stderr.write("\nFile '{}' not seen and so nothing done" ". Seems wrong.".format(fn)) sys.exit(1) ###Output _____no_output_____ ###Markdown This next cell addresses changing the description line for about a dozen FASTA files.
###Code # remove text in description lines of about a dozen files so they follow # convention of other files (skipped automatically in case of small set because not pertinent) if not small_set: import os import sys fn_and_text2remove_dict = { "BVH_1.re.fa":"_AC1LP.IND2", "BEB_6.re.fa":"_C37T3ACXX.IND41b", "CEN_4.re.fa":"_C3MA4ACXX.IND41b", "CHE_4.re.fa":"_C3MC3ACXX.IND41b", "CKT_5.re.fa":"_C4AK9ACXX.IND41b", "BHS_1.re.fa":"_C399DACXX.IND41b", "BTD_1.re.fa":"_AC1LP.IND45", "CRL_.re.fa":"_AB3AC.IND41b", "BTL_3.re.fa":"_C3G4CACXX.IND41b", "CPB_4.re.fa":"_C4VAEACXX.IND41b", "CAS_1.re.fa":"_AC1LP.IND6", "BLH_1.re.fa":"_C37YTACXX.IND41b", } output_file_name = "temp.txt" for fn,text2remove in fn_and_text2remove_dict.items(): if os.path.isfile("GENOMES_ASSEMBLED/"+fn): # prepare output file for saving so it will be open and ready with open(output_file_name, 'w') as output_file: # read in the input file with open("GENOMES_ASSEMBLED/"+fn, 'r') as input_handler: for line in input_handler: if line.startswith(">"): new_line = line.replace(text2remove,"") else: new_line = line # Send text to output output_file.write(new_line) # replace the original file with edited !mv temp.txt GENOMES_ASSEMBLED/{fn} sys.stderr.write("\nDescription lines fixed to match final file name in '{}'" " fixed.".format(fn)) else: sys.stderr.write("\nFile '{}' not seen and so nothing done" ". Seems wrong.".format(fn)) sys.exit(1) # Get SGD gene sequence in FASTA format to search for best matches in the genomes import sys gene_filen = gene_name + ".fsa" if get_seq_from_link: !curl -o {gene_filen} {link_to_FASTA_of_gene} else: !touch {gene_filen} sys.stderr.write("\nEDIT THE FILE '{}' TO CONTAIN " "YOUR GENE OF INTEREST (FASTA-FORMATTED)" ".".format(gene_filen)) sys.exit(0) ###Output _____no_output_____ ###Markdown **I PUT CONTENTS OF FILE `S288C_YDL140C_RPO21_coding.fsa` downloaded from [here](https://www.yeastgenome.org/locus/S000002299/sequence) as 'RPB1.fsa'.**Now you are prepared to run BLAST to search each PacBio-sequenced genomes for the best match to a gene from the Saccharomyces cerevisiae strain S288C reference sequence. Use BLAST to search the genomes for matches to the gene in the reference genome at SGDSGD is the [Saccharomyces cerevisiae Genome Database site](http:yeastgenome.org) and the reference genome is from S288C.This is going to go through each genome and make a database so it is searchable and then search for matches to the gene. The information on the best match will be collected. One use for that information will be collecting the corresponding sequences later.Import the script that allows sending BLAST output to Python dataframes so that we can use it here. ###Code from blast_to_df import blast_to_df # Make a list of all `genome.fa` files, excluding `genome.fa.nhr` and `genome.fa.nin` and `genome.fansq` # The excluding was only necessary because I had run some queries preliminarily in development. Normally, it would just be the `.re.fa` at the outset. fn_to_check = "re.fa" genomes = [] import os import fnmatch for file in os.listdir(genomes_dirn): if fnmatch.fnmatch(file, '*'+fn_to_check): if not file.endswith(".nhr") and not file.endswith(".nin") and not file.endswith(".nsq") : # plus skip hidden files if not file.startswith("._"): genomes.append(file) len(genomes) ###Output _____no_output_____ ###Markdown Using the trick of putting `%%capture` on first line from [here](https://stackoverflow.com/a/23692951/8508004) to suppress the output from BLAST for many sequences from filling up cell. 
(You can monitor the making of files ending in `.nhr` for all the FASTA files in `GENOMES_ASSEMBLED` to monitor progress'.) ###Code %%time %%capture SGD_gene = gene_filen dfs = [] for genome in genomes: !makeblastdb -in {genomes_dirn}/{genome} -dbtype nucl result = !blastn -query {SGD_gene} -db {genomes_dirn}/{genome} -outfmt "6 qseqid sseqid stitle pident qcovs length mismatch gapopen qstart qend sstart send qframe sframe frames evalue bitscore qseq sseq" -task blastn from blast_to_df import blast_to_df blast_df = blast_to_df(result.n) dfs.append(blast_df.head(1)) # merge the dataframes in the list `dfs` into one dataframe df = pd.concat(dfs) #Save the df filen_prefix = gene_name + "_orthologBLASTdf" df.to_pickle(filen_prefix+".pkl") df.to_csv(filen_prefix+'.tsv', sep='\t',index = False) #df ###Output _____no_output_____ ###Markdown Computationally check if any genomes missing from the BLAST results list? ###Code subjids = df.sseqid.tolist() #print (subjids) #print (subjids[0:10]) subjids = [x.split("-")[0] for x in subjids] #print (subjids) #print (subjids[0:10]) len_genome_fn_end = len(fn_to_check) + 1 # plus one to accound for the period that will be # between `fn_to_check` and strain_id`, such as `SK1.genome.fa` genome_ids = [x[:-len_genome_fn_end] for x in genomes] #print (genome_ids[0:10]) a = set(genome_ids) #print (a) print ("initial:",len(a)) r = set(subjids) print("results:",len(r)) print ("missing:",len(a-r)) if len(a-r): print("\n") print("ids missing:",a-r) #a - r ###Output _____no_output_____ ###Markdown Sanity check: Report on how expected size compares to max size seen? ###Code size_seen = df.length.max(0) print ("Expected size of gene:", size_expected) print ("Most frequent size of matches:", df.length.mode()[0]) print ("Maximum size of matches:", df.length.max(0)) ###Output _____no_output_____ ###Markdown Collect the identified, raw sequencesGet the expected size centered on the best match, plus a little flanking each because they might not exactly cover the entire open reading frame. (Although, the example here all look to be full size.) ###Code # Get the script for extracting based on position (and install dependency pyfaidx) import os file_needed = "extract_subsequence_from_FASTA.py" if not os.path.isfile(file_needed): !curl -O https://raw.githubusercontent.com/fomightez/sequencework/master/Extract_from_FASTA/extract_subsequence_from_FASTA.py !pip install pyfaidx ###Output _____no_output_____ ###Markdown For the next cell, I am going to use the trick of putting `%%capture` on first line from [here](https://stackoverflow.com/a/23692951/8508004) to suppress the output from the entire set making a long list of output.For ease just monitor the progress in a launched terminal with the following code run in the directory where this notebook will be because the generated files only moved into the `raw` directory as last step of cell: ls seq_extracted* | wc -l (**NOTE: WHEN RUNNING WITH THE FULL SET, THIS CELL BELOW WILL REPORT AROUND A DOZEN `FileNotFoundError:`/Exceptions. HOWEVER, THEY DON'T CAUSE THE NOTEBOOK ITSELF TO CEASE TO RUN. SO DISREGARD THEM FOR THE TIME BEING.** ) ###Code %%capture size_expected = size_expected # use value from above, or alter at this point. #size_expected = df.length.max(0) #bp length of SGD coding sequence; should be equivalent and that way not hardcoded? 
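# (Worked example of the centering logic in determine_pos_to_get() below, using a
#  hypothetical hit spanning 1000-6202 on a contig: the midpoint is 3601, half of
#  size_expected is 5202/2 = 2601 taken on each side, giving 1000-6202, and the 51 bp
#  of 'fuzziness' padding defined just below then widens the window to 949-6253.)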
extra_add_to_start = 51 #to allow for 'fuzziness' at starting end extra_add_to_end = 51 #to allow for 'fuzziness' at far end genome_fn_end = "re.fa" def midpoint(items): ''' takes a iterable of items and returns the midpoint (integer) of the first and second values ''' return int((int(items[0])+int(items[1]))/2) #midpoint((1,100)) def determine_pos_to_get(match_start,match_end): ''' Take the start and end of the matched region. Calculate midpoint between those and then center expected size on that to determine preliminary start and preliminary end to get. Add the extra basepairs to get at each end to allow for fuzziness/differences of actual gene ends for orthologs. Return the final start and end positions to get. ''' center_of_match = midpoint((match_start,match_end)) half_size_expected = int(size_expected/2.0) if size_expected % 2 != 0: half_size_expected += 1 start_pos = center_of_match - half_size_expected end_pos = center_of_match + half_size_expected start_pos -= extra_add_to_start end_pos += extra_add_to_end # Because of getting some flanking sequences to account for 'fuzziness', it # is possible the start and end can exceed possible. 'End' is not a problem # because the `extract_subsequence_from_FASTA.py` script will get as much as # it from the indicated sequence if a larger than possible number is # provided. However,'start' can become negative and because the region to # extract is provided as a string the dash can become a problem. Dealing # with it here by making sequence positive only. # Additionally, because I rely on center of match to position where to get, # part being cut-off due to absence on sequence fragment will shift center # of match away from what is actually center of gene and to counter-balance # add twice the amount to the other end. (Actually, I feel I should adjust # the start end likewise if the sequence happens to be shorter than portion # I would like to capture but I don't know length of involved hit yet and # that would need to be added to allow that to happen!<--TO DO) if start_pos < 0: raw_amount_missing_at_start = abs(start_pos)# for counterbalancing; needs # to be collected before `start_pos` adjusted start_pos = 1 end_pos += 2 * raw_amount_missing_at_start return start_pos, end_pos # go through the dataframe using information on each to come up with sequence file, # specific indentifier within sequence file, and the start and end to extract # store these valaues as a list in a dictionary with the strain identifier as the key. extracted_info = {} start,end = 0,0 for row in df.itertuples(): #print (row.length) start_to_get, end_to_get = determine_pos_to_get(row.sstart, row.send) posns_to_get = "{}-{}".format(start_to_get, end_to_get) record_id = row.sseqid strain_id = row.sseqid.split("-")[0] seq_fn = strain_id + "." 
+ genome_fn_end extracted_info[strain_id] = [seq_fn, record_id, posns_to_get] # Use the dictionary to get the sequences for id_ in extracted_info: #%run extract_subsequence_from_FASTA.py {*extracted_info[id_]} #unpacking doesn't seem to work here in `%run` %run extract_subsequence_from_FASTA.py {genomes_dirn}/{extracted_info[id_][0]} {extracted_info[id_][1]} {extracted_info[id_][2]} #package up the retrieved sequences archive_file_name = gene_name+"_raw_ortholog_seqs.tar.gz" # make list of extracted files using fnmatch fn_part_to_match = "seq_extracted" collected_seq_files_list = [] import os import sys import fnmatch for file in os.listdir('.'): if fnmatch.fnmatch(file, fn_part_to_match+'*'): #print (file) collected_seq_files_list.append(file) !tar czf {archive_file_name} {" ".join(collected_seq_files_list)} # use the list for archiving command sys.stderr.write("\n\nCollected RAW sequences gathered and saved as " "`{}`.".format(archive_file_name)) # move the collected raw sequences to a folder in preparation for # extracting encoding sequence from original source below !mkdir raw !mv seq_extracted*.fa raw ###Output _____no_output_____ ###Markdown That archive should contain the "raw" sequence for each gene, even if the ends are a little different for each. At minimum the entire gene sequence needs to be there at this point; extra at each end is preferable at this point.You should inspect them as soon as possible and adjust the extra sequence to add higher or lower depending on whether the ortholog genes vary more or less, respectively. The reason they don't need to be perfect yet though is because next we are going to extract the longest open reading frame, which presumably demarcates the entire gene. Then we can return to use that information to clean up the collected sequences to just be the coding sequence. Collect protein translations of the genes and then clean up "raw" sequences to just be codingWe'll assume the longest translatable frame in the collected "raw" sequences encodes the protein sequence for the gene orthologs of interest. Well base these steps on the [section '20.1.13 Identifying open reading frames'](http://biopython.org/DIST/docs/tutorial/Tutorial.htmlhtoc299) in the present version of the [Biopython Tutorial and Cookbook](http://biopython.org/DIST/docs/tutorial/Tutorial.html) (Last Update – 18 December 2018 (Biopython 1.73). (First run the next cell to get a script needed for dealing with the strand during the translation and gathering of thge encoding sequence.) 
###Code import os file_needed = "convert_fasta_to_reverse_complement.py" if not os.path.isfile(file_needed): !curl -O https://raw.githubusercontent.com/fomightez/sequencework/master/ConvertSeq/convert_fasta_to_reverse_complement.py ###Output _____no_output_____ ###Markdown Now to perform the work described in the header to this section...For the next cell, I am going to use the trick of putting `%%capture` on first line from [here](https://stackoverflow.com/a/23692951/8508004) to suppress the output from the entire set making a long list of output.For ease just monitor the progress in a launched terminal with the following code run in the directory where this notebook will be: ls *_ortholog_gene.fa | wc -l ###Code %%capture # find the featured open reading frame and collect presumed protein sequences # Collect the corresponding encoding sequence from the original source def len_ORF(items): # orf is fourth item in the tuples return len(items[3]) def find_orfs_with_trans(seq, trans_table, min_protein_length): ''' adapted from the present section '20.1.13 Identifying open reading frames' http://biopython.org/DIST/docs/tutorial/Tutorial.html#htoc299 in the present version of the [Biopython Tutorial and Cookbook at http://biopython.org/DIST/docs/tutorial/Tutorial.html (Last Update – 18 December 2018 (Biopython 1.73) Same as there except altered to sort on the length of the open reading frame. ''' answer = [] seq_len = len(seq) for strand, nuc in [(+1, seq), (-1, seq.reverse_complement())]: for frame in range(3): trans = str(nuc[frame:].translate(trans_table)) trans_len = len(trans) aa_start = 0 aa_end = 0 while aa_start < trans_len: aa_end = trans.find("*", aa_start) if aa_end == -1: aa_end = trans_len if aa_end-aa_start >= min_protein_length: if strand == 1: start = frame+aa_start*3 end = min(seq_len,frame+aa_end*3+3) else: start = seq_len-frame-aa_end*3-3 end = seq_len-frame-aa_start*3 answer.append((start, end, strand, trans[aa_start:aa_end])) aa_start = aa_end+1 answer.sort(key=len_ORF, reverse = True) return answer def generate_rcoutput_file_name(file_name,suffix_for_saving = "_rc"): ''' from https://github.com/fomightez/sequencework/blob/master/ConvertSeq/convert_fasta_to_reverse_complement.py Takes a file name as an argument and returns string for the name of the output file. The generated name is based on the original file name. Specific example ================= Calling function with ("sequence.fa", "_rc") returns "sequence_rc.fa" ''' main_part_of_name, file_extension = os.path.splitext( file_name) #from #http://stackoverflow.com/questions/541390/extracting-extension-from-filename-in-python if '.' in file_name: #I don't know if this is needed with the os.path.splitext method but I had it before so left it return main_part_of_name + suffix_for_saving + file_extension else: return file_name + suffix_for_saving + ".fa" def add_strand_to_description_line(file,strand="-1"): ''' Takes a file and edits description line to add strand info at end. 
Saves the fixed file ''' import sys output_file_name = "temp.txt" # prepare output file for saving so it will be open and ready with open(output_file_name, 'w') as output_file: # read in the input file with open(file, 'r') as input_handler: # prepare to give feeback later or allow skipping to certain start lines_processed = 0 for line in input_handler: lines_processed += 1 if line.startswith(">"): new_line = line.strip() + "; {} strand\n".format(strand) else: new_line = line # Send text to output output_file.write(new_line) # replace the original file with edited !mv temp.txt {file} # Feedback sys.stderr.write("\nIn {}, strand noted.".format(file)) table = 1 #sets translation table to standard nuclear, see # https://www.ncbi.nlm.nih.gov/Taxonomy/Utils/wprintgc.cgi min_pro_len = 80 #cookbook had the standard `100`. Feel free to adjust. prot_seqs_info = {} #collect as dictionary with strain_id as key. Values to # be list with source id as first item and protein length as second and # strand in source seq as third item, and start and end in source sequence as fourth and fifth, # and file name of protein and gene as sixth and seventh. # Example key and value pair: 'YPS138':['<source id>','<protein length>',-1,52,2626,'<gene file name>','<protein file name>'] gene_seqs_fn_list = [] prot_seqs_fn_list = [] from Bio import SeqIO for raw_seq_filen in collected_seq_files_list: #strain_id = raw_seq_filen[:-len_genome_fn_end] #if was dealing with source seq strain_id = raw_seq_filen.split("-")[0].split("seq_extracted")[1] record = SeqIO.read("raw/"+raw_seq_filen,"fasta") raw_seq_source_fn = strain_id + "." + genome_fn_end raw_seq_source_id = record.description.split(":")[0] orf_list = find_orfs_with_trans(record.seq, table, min_pro_len) orf_start, orf_end, strand, prot_seq = orf_list[0] #longest ORF seq for protein coding location_raw_seq = record.description.rsplit(":",1)[1] #get to use in calculating # the start and end position in original genome sequence. raw_loc_parts = location_raw_seq.split("-") start_from_raw_seq = int(raw_loc_parts[0]) end_from_raw_seq = int(raw_loc_parts[1]) length_extracted = len(record) #also to use in calculating relative original #Fix negative value. (Somehow Biopython can report negative value when hitting # end of sequence without encountering stop codon and negatives messes up # indexing later it seems.) if orf_start < 0: orf_start = 0 # Trim back to the first Methionine, assumed to be the initiating MET. # (THIS MIGHT BE A SOURCE OF EXTRA 'LEADING' RESIDUES IN SOME CASES & ARGUES # FOR LIMITING THE AMOUNT OF FLANKING SEQUENCE ADDED TO ALLOW FOR FUZINESS.) try: amt_resi_to_trim = prot_seq.index("M") except ValueError: sys.stderr.write("**ERROR**When searching for initiating methionine,\n" "no Methionine found in the traslated protein sequence.**ERROR**") sys.exit(1) prot_seq = prot_seq[amt_resi_to_trim:] len_seq_trimmed = amt_resi_to_trim * 3 # Calculate the adjusted start and end values for the untrimmed ORF adj_start = start_from_raw_seq + orf_start adj_end = end_from_raw_seq - (length_extracted - orf_end) # Adjust for trimming for appropriate strand. if strand == 1: adj_start += len_seq_trimmed #adj_end += 3 # turns out stop codon is part of numbering biopython returns elif strand == -1: adj_end -= len_seq_trimmed #adj_start -= 3 # turns out stop codon is part of numbering biopython returns else: sys.stderr.write("**ERROR**No strand match option detected!**ERROR**") sys.exit(1) # Collect the sequence for the actual gene encoding region from # the original sequence. 
This way the original numbers will # be put in the file. start_n_end_str = "{}-{}".format(adj_start,adj_end) %run extract_subsequence_from_FASTA.py {genomes_dirn}/{raw_seq_source_fn} {raw_seq_source_id} {start_n_end_str} # rename the extracted subsequence a more distinguishing name and notify g_output_file_name = strain_id +"_" + gene_name + "_ortholog_gene.fa" !mv {raw_seq_filen} {g_output_file_name} # because the sequence saved happens to # be same as raw sequence file saved previously, that name can be used to # rename new file. gene_seqs_fn_list.append(g_output_file_name) sys.stderr.write("\n\nRenamed gene file to " "`{}`.".format(g_output_file_name)) # Convert extracted sequence to reverse complement if translation was on negative strand. if strand == -1: %run convert_fasta_to_reverse_complement.py {g_output_file_name} # replace original sequence file with the produced file produced_fn = generate_rcoutput_file_name(g_output_file_name) !mv {produced_fn} {g_output_file_name} # add (after saved) onto the end of the description line for that `-1 strand` # No way to do this in my current version of convert sequence. So editing descr line. add_strand_to_description_line(g_output_file_name) #When settled on actual protein encoding sequence, fill out # description to use for saving the protein sequence. prot_descr = (record.description.rsplit(":",1)[0]+ " "+ gene_name + "_ortholog"+ "| " +str(len(prot_seq)) + " aas | from " + raw_seq_source_id + " " + str(adj_start) + "-"+str(adj_end)) if strand == -1: prot_descr += "; {} strand".format(strand) # save the protein sequence as FASTA chunk_size = 70 #<---amino acids per line to have in FASTA prot_seq_chunks = [prot_seq[i:i+chunk_size] for i in range( 0, len(prot_seq),chunk_size)] prot_seq_fa = ">" + prot_descr + "\n"+ "\n".join(prot_seq_chunks) p_output_file_name = strain_id +"_" + gene_name + "_protein_ortholog.fa" with open(p_output_file_name, 'w') as output: output.write(prot_seq_fa) prot_seqs_fn_list.append(p_output_file_name) sys.stderr.write("\n\nProtein sequence saved as " "`{}`.".format(p_output_file_name)) # at end store information in `prot_seqs_info` for later making a dataframe # and then text table for saving summary #'YPS138':['<source id>',<protein length>,-1,52,2626,'<gene file name>','<protein file name>'] prot_seqs_info[strain_id] = [raw_seq_source_id,len(prot_seq),strand,adj_start,adj_end, g_output_file_name,p_output_file_name] sys.stderr.write("\n******END OF A SET OF PROTEIN ORTHOLOG " "AND ENCODING GENE********") # use `prot_seqs_info` for saving a summary text table (first convert to dataframe?) 
table_fn_prefix = gene_name + "_orthologs_table" table_fn = table_fn_prefix + ".tsv" pkl_table_fn = table_fn_prefix + ".pkl" import pandas as pd info_df = pd.DataFrame.from_dict(prot_seqs_info, orient='index', columns=['descr_id', 'length', 'strand', 'start','end','gene_file','prot_file']) # based on # https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.from_dict.html and # note from Python 3.6 that `pd.DataFrame.from_items` is deprecated; #"Please use DataFrame.from_dict" info_df.to_pickle(pkl_table_fn) info_df.to_csv(table_fn, sep='\t') # keep index is default sys.stderr.write("Text file of associated details saved as '{}'.".format(table_fn)) # pack up archive of gene and protein sequences plus the table seqs_list = gene_seqs_fn_list + prot_seqs_fn_list + [table_fn,pkl_table_fn] archive_file_name = gene_name+"_ortholog_seqs.tar.gz" !tar czf {archive_file_name} {" ".join(seqs_list)} # use the list for archiving command sys.stderr.write("\nCollected gene and protein sequences" " (plus table of details) gathered and saved as " "`{}`.".format(archive_file_name)) ###Output _____no_output_____ ###Markdown Save the tarballed archive to your local machine. ----- Estimate the count of the heptad repeatsMake a table of the estimated heptad repeat count for each orthologous protein sequence. ###Code # get the 'patmatch results to dataframe' script !curl -O https://raw.githubusercontent.com/fomightez/sequencework/master/patmatch-utilities/patmatch_results_to_df.py ###Output _____no_output_____ ###Markdown Using the trick of putting `%%capture` on the first line from [here](https://stackoverflow.com/a/23692951/8508004) to keep the output from the `patmatch_results_to_df` function from filling up the cell. ###Code %%time %%capture # Go through each protein sequence file and look for matches to the heptad pattern # LATER POSSIBLE IMPROVEMENT. Translate pasted gene sequence and add SGD REF S288C as first in list `prot_seqs_fn_list`. Because # although this set of orthologs includes essentially S288C, other lists won't and best to have a reference for comparing.
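# (For orientation: the canonical Rpb1 CTD consensus heptad is Y-S-P-T-S-P-S; the PatMatch
#  pattern on the next line also counts near-consensus repeats by allowing F at position 1,
#  G at position 4, and any of S/T/A/G/N at position 7.)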
###Code sum_pm_df.head() # don't show all yet since lots and want to make this dataframe more useful below sum_pm_df.tail() # don't show all yet since lots and want to make this dataframe more useful below ###Output _____no_output_____ ###Markdown I assume that '+ 2' should be added to the hit_number for each based on S288C according to [Corden, 2013](https://www.ncbi.nlm.nih.gov/pubmed/24040939) (or `+1` like [Hsin and Manley, 2012](https://www.ncbi.nlm.nih.gov/pubmed/23028141)); however, that is something that could be explored further. WHAT ONES MISSING NOW? Computationally check if any genomes missing from the list of orthologs? ###Code subjids = df.sseqid.tolist() #print (subjids) #print (subjids[0:10]) subjids = [x.split("-")[0] for x in subjids] #print (subjids) #print (subjids[0:10]) len_genome_fn_end = len(fn_to_check) + 1 # plus one to accound for the period that will be # between `fn_to_check` and strain_id`, such as `SK1.genome.fa` genome_ids = [x[:-len_genome_fn_end] for x in genomes] #print (genome_ids[0:10]) ortholg_ids = sum_pm_df.FASTA_id.tolist() ortholg_ids = [x.split("-")[0] for x in ortholg_ids] a = set(genome_ids) #print (a) print ("initial:",len(a)) r = set(subjids) print("BLAST results:",len(r)) print ("missing from BLAST:",len(a-r)) if len(a-r): #print("\n") print("ids missing in BLAST results:",a-r) #a - r print ("\n\n=====POST-BLAST=======\n\n") o = set(ortholg_ids) print("orthologs extracted:",len(o)) print ("missing post-BLAST:",len(r-o)) if len(r-o): print("\n") print("ids lost post-BLAST:",r-o) #r - o print ("\n\n\n=====SUMMARY=======\n\n") if len(a-r) and len(r-o): print("\nAll missing in end:",(a-r) | (r-o)) ###Output _____no_output_____ ###Markdown Make the Summarizing Dataframe more informativeAdd information on whether a stretch of 'N's is present. Making the data suspect and fit to be filtered out. Distinguish between cases where it is in what corresponds to the last third of the protein vs. elsewhere, if possible. Plus whether stop codon is present at end of encoding sequence because such cases also probably should be filtered out.Add information from the supplemental data table so possible patterns can be assessed more easily. Add information about N stretches and stop codon ###Code # Collect following information for each gene sequence: # N stretch of at least two or more present in first 2/3 of gene sequence # N stretch of at least two or more present in last 1/3 of gene sequence # stop codon encoded at end of sequence? import re min_number_Ns_in_row_to_collect = 2 pattern_obj = re.compile("N{{{},}}".format(min_number_Ns_in_row_to_collect), re.I) # adpated from # code worked out in `collapse_large_unknown_blocks_in_DNA_sequence.py`, which relied heavily on # https://stackoverflow.com/a/250306/8508004 def longest_stretch2ormore_found(string, pattern_obj): ''' Check if a string has stretches of Ns of length two or more. If it does, return the length of longest stretch. If it doesn't return zero. 
Based on https://stackoverflow.com/a/1155805/8508004 and GSD Assessing_ambiguous_nts_in_nuclear_PB_genomes.ipynb ''' longest_match = '' for m in pattern_obj.finditer(string): if len(m.group()) > len(longest_match): longest_match = m.group() if longest_match == '': return 0 else: return len(longest_match) def chunk(xs, n): '''Split the list, xs, into n chunks; from http://wordaligned.org/articles/slicing-a-list-evenly-with-python''' L = len(xs) assert 0 < n <= L s, r = divmod(L, n) chunks = [xs[p:p+s] for p in range(0, L, s)] chunks[n-1:] = [xs[-r-s:]] return chunks n_stretch_last_third_by_id = {} n_stretch_first_two_thirds_by_id = {} stop_codons = ['TAA','TAG','TGA'] stop_codon_presence_by_id = {} for fn in gene_seqs_fn_list: # read in sequence without using pyfaidx because small and not worth making indexing files lines = [] with open(fn, 'r') as seqfile: for line in seqfile: lines.append(line.strip()) descr_line = lines[0] seq = ''.join(lines[1:]) gene_seq_id = descr_line.split(":")[0].split(">")[1]#first line parsed for all in front of ":" and without caret # determine first two-thirds and last third chunks = chunk(seq,3) assert len(chunks) == 3, ("The sequence must be split in three parts'.") first_two_thirds = chunks[0] + chunks[1] last_third = chunks[-1] # Examine each part n_stretch_last_third_by_id[gene_seq_id] = longest_stretch2ormore_found(last_third,pattern_obj) n_stretch_first_two_thirds_by_id[gene_seq_id] = longest_stretch2ormore_found(first_two_thirds,pattern_obj) #print(gene_seq_id) #print (seq[-3:] in stop_codons) #stop_codon_presence_by_id[gene_seq_id] = seq[-3:] in stop_codons stop_codon_presence_by_id[gene_seq_id] = "+" if seq[-3:] in stop_codons else "-" # Add collected information to sum_pm_df sum_pm_df['NstretchLAST_THIRD'] = sum_pm_df['FASTA_id'].map(n_stretch_last_third_by_id) sum_pm_df['NstretchELSEWHERE'] = sum_pm_df['FASTA_id'].map(n_stretch_first_two_thirds_by_id) sum_pm_df['stop_codon'] = sum_pm_df['FASTA_id'].map(stop_codon_presence_by_id) # Safe to ignore any warnings about copy. I think because I swapped columns in and out # of sum_pm_df earlier perhaps. ###Output _____no_output_____ ###Markdown Add details on strains from the published supplemental informationThis section is based on [this notebook entitled 'GSD: Add Supplemental data info to nt count data for 1011 cerevisiae collection'](https://github.com/fomightez/cl_sq_demo-binder/blob/master/notebooks/GSD/GSD%20Add_Supplemental_data_info_to_nt_count%20data%20for%201011_cerevisiae_collection.ipynb). 
###Code !curl -OL https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-018-0030-5/MediaObjects/41586_2018_30_MOESM3_ESM.xls !pip install xlrd import pandas as pd #sum_pm_TEST_df = sum_pm_df.copy() supp_df = pd.read_excel('41586_2018_30_MOESM3_ESM.xls', sheet_name=0, header=3, skipfooter=31) supp_df['Standardized name'] = supp_df['Standardized name'].str.replace('SACE_','') suppl_info_dict = supp_df.set_index('Standardized name').to_dict('index') #Make new column with simplified strain_id tags to use for relating to supplemental table def add_id_tags(fasta_fn): return fasta_fn[:3] sum_pm_df["id_tag"] = sum_pm_df['FASTA_id'].apply(add_id_tags) ploidy_dict_by_id = {x:suppl_info_dict[x]['Ploidy'] for x in suppl_info_dict} aneuploidies_dict_by_id = {x:suppl_info_dict[x]['Aneuploidies'] for x in suppl_info_dict} eco_origin_dict_by_id = {x:suppl_info_dict[x]['Ecological origins'] for x in suppl_info_dict} clade_dict_by_id = {x:suppl_info_dict[x]['Clades'] for x in suppl_info_dict} sum_pm_df['Ploidy'] = sum_pm_df.id_tag.map(ploidy_dict_by_id) #Pandas docs has `Index.map` (uppercase `I`) but only lowercase works. sum_pm_df['Aneuploidies'] = sum_pm_df.id_tag.map(aneuploidies_dict_by_id) sum_pm_df['Ecological origin'] = sum_pm_df.id_tag.map(eco_origin_dict_by_id) sum_pm_df['Clade'] = sum_pm_df.id_tag.map(clade_dict_by_id) # remove the `id_tag` column add for relating details from supplemental to summary df sum_pm_df = sum_pm_df.drop('id_tag',1) # use following two lines when sure want to see all and COMMENT OUT BOTTOM LINE #with pd.option_context('display.max_rows', None, 'display.max_columns', None): # display(sum_pm_df) sum_pm_df ###Output _____no_output_____ ###Markdown I assume that '+ 2' should be added to the hit_number for each based on S288C according to [Corden, 2013](https://www.ncbi.nlm.nih.gov/pubmed/24040939) (or `+1` like [Hsin and Manley, 2012](https://www.ncbi.nlm.nih.gov/pubmed/23028141)); however, that is something that could be explored further. Filter collected set to those that are 'complete'For plotting and summarizing with a good set of information, best to remove any where the identified ortholog gene has stretches of 'N's or lacks a stop codon.(Keep unfiltered dataframe around though.) 
###Code sum_pm_UNFILTEREDdf = sum_pm_df.copy() #subset to those where there noth columns for Nstretch assessment are zero sum_pm_df = sum_pm_df[(sum_pm_df[['NstretchLAST_THIRD','NstretchELSEWHERE']] == 0).all(axis=1)] # based on https://codereview.stackexchange.com/a/185390 #remove any where there isn't a stop codon sum_pm_df = sum_pm_df.drop(sum_pm_df[sum_pm_df.stop_codon != '+'].index) ###Output _____no_output_____ ###Markdown Computationally summarize result of filtering in comparison to previous steps: ###Code subjids = df.sseqid.tolist() #print (subjids) #print (subjids[0:10]) subjids = [x.split("-")[0] for x in subjids] #print (subjids) #print (subjids[0:10]) len_genome_fn_end = len(fn_to_check) + 1 # plus one to accound for the period that will be # between `fn_to_check` and strain_id`, such as `SK1.genome.fa` genome_ids = [x[:-len_genome_fn_end] for x in genomes] #print (genome_ids[0:10]) ortholg_ids = sum_pm_UNFILTEREDdf.FASTA_id.tolist() ortholg_ids = [x.split("-")[0] for x in ortholg_ids] filtered_ids = sum_pm_df.FASTA_id.tolist() filtered_ids =[x.split("-")[0] for x in filtered_ids] a = set(genome_ids) #print (a) print ("initial:",len(a)) r = set(subjids) print("BLAST results:",len(r)) print ("missing from BLAST:",len(a-r)) if len(a-r): #print("\n") print("ids missing in BLAST results:",a-r) #a - r print ("\n\n=====POST-BLAST=======\n\n") o = set(ortholg_ids) print("orthologs extracted:",len(o)) print ("missing post-BLAST:",len(r-o)) if len(r-o): print("\n") print("ids lost post-BLAST:",r-o) #r - o print ("\n\n\n=====PRE-FILTERING=======\n\n") print("\nNumber before filtering:",len(sum_pm_UNFILTEREDdf)) if len(a-r) and len(r-o): print("\nAll missing in unfiltered:",(a-r) | (r-o)) print ("\n\n\n=====POST-FILTERING SUMMARY=======\n\n") f = set(filtered_ids) print("\nNumber left in filtered set:",len(sum_pm_df)) print ("Number removed by filtering:",len(o-f)) if len(a-r) and len(r-o) and len(o-f): print("\nAll missing in filtered:",(a-r) | (r-o) | (o-f)) # use following two lines when sure want to see all and COMMENT OUT BOTTOM LINE with pd.option_context('display.max_rows', None, 'display.max_columns', None): display(sum_pm_df) #sum_pm_df ###Output _____no_output_____ ###Markdown I assume that '+ 2' should be added to the hit_number for each based on S288C according to [Corden, 2013](https://www.ncbi.nlm.nih.gov/pubmed/24040939) (or `+1` like [Hsin and Manley, 2012](https://www.ncbi.nlm.nih.gov/pubmed/23028141)); however, that is something that could be explored further. Archive the 'Filtered' set of sequencesAbove I saved all the gene and deduced protein sequences of the orthologs in a single archive. It might be useful to just have an archive of the 'filtered' set. ###Code # pack up archive of gene and protein sequences for the 'filtered' set. # Include the summary table too. # This is different than the other sets I made because this 'filtering' was # done using the dataframe and so I don't have the file associations. The file names # though can be generated using the unfiltered file names for the genes and proteins # and sorting which ones don't remain in the filtered set using 3-letter tags at # the beginning of the entries in `FASTA_id` column to relate them. 
# Use the `FASTA_id` column of sum_pm_df to make a list of tags that remain in filtered set tags_remaining_in_filtered = [x[:3] for x in sum_pm_df.FASTA_id.tolist()] # Go through the gene and protein sequence list and collect those where the first # three letters match the tag gene_seqs_FILTfn_list = [x for x in gene_seqs_fn_list if x[:3] in tags_remaining_in_filtered] prot_seqs_FILTfn_list = [x for x in prot_seqs_fn_list if x[:3] in tags_remaining_in_filtered] # Save the files in those two lists along with the sum_pm_df (as tabular data and pickled form) patmatchsum_fn_prefix = gene_name + "_orthologs_patmatch_results_summary" patmatchsum_fn = patmatchsum_fn_prefix + ".tsv" pklsum_patmatch_fn = patmatchsum_fn_prefix + ".pkl" import pandas as pd sum_pm_df.to_pickle(pklsum_patmatch_fn) sum_pm_df.to_csv(patmatchsum_fn, sep='\t') # keep index is default FILTEREDseqs_n_df_list = gene_seqs_FILTfn_list + prot_seqs_FILTfn_list + [patmatchsum_fn,pklsum_patmatch_fn] archive_file_name = gene_name+"_ortholog_seqsFILTERED.tar.gz" !tar czf {archive_file_name} {" ".join(FILTEREDseqs_n_df_list)} # use the list for archiving command sys.stderr.write("\nCollected gene and protein sequences" " (plus table of details) for 'FILTERED' set gathered and saved as " "`{}`.".format(archive_file_name)) ###Output _____no_output_____ ###Markdown Download the 'filtered' sequences to your local machine. Summarizing with filtered setPlot distribution. ###Code %matplotlib inline import math import matplotlib.pyplot as plt import seaborn as sns sns.set() #Want an image file of the figure saved? saveplot = True saveplot_fn_prefix = 'heptad_repeat_distribution' #sns.distplot(sum_pm_df["hit_number"], kde=False, bins = max(sum_pm_df["hit_number"])); p= sns.countplot(sum_pm_df["hit_number"], order = list(range(sum_pm_df.hit_number.min(),sum_pm_df.hit_number.max()+1)), color="C0", alpha= 0.93) #palette="Blues"); # `order` to get those categories with zero # counts to show up from https://stackoverflow.com/a/45359713/8508004 p.set_xlabel("heptad repeats") #add percent above bars, based on code in middle of https://stackoverflow.com/a/33259038/8508004 ncount = len(sum_pm_df) for pat in p.patches: x=pat.get_bbox().get_points()[:,0] y=pat.get_bbox().get_points()[1,1] # note that this check on the next line was necessary to add when I went back to cases where there's # no counts for certain categories and so `y` was coming up `nan` for for thos and causing error # about needing positive value for the y value; `math.isnan(y)` based on https://stackoverflow.com/a/944733/8508004 if not math.isnan(y): p.annotate('{:.1f}%'.format(100.*y/(ncount)), (x.mean(), y), ha='center', va='bottom', size = 9, color='#333333') if saveplot: fig = p.get_figure() #based on https://stackoverflow.com/a/39482402/8508004 fig.savefig(saveplot_fn_prefix + '.png', bbox_inches='tight') fig.savefig(saveplot_fn_prefix + '.svg'); ###Output _____no_output_____ ###Markdown However, with the entire 1011 collection, those at the bottom can not really be seen. The next plot shows this by limiting y-axis to 103.It should be possible to make a broken y-axis plot for this eventually but not right now as there is no automagic way. So for now will need to composite the two plots together outside.(Note that adding percents annotations makes height of this plot look odd in the notebook cell for now.) ###Code %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns sns.set() #Want an image file of the figure saved? 
saveplot = True saveplot_fn_prefix = 'heptad_repeat_distributionLIMIT103' #sns.distplot(sum_pm_df["hit_number"], kde=False, bins = max(sum_pm_df["hit_number"])); p= sns.countplot(sum_pm_df["hit_number"], order = list(range(sum_pm_df.hit_number.min(),sum_pm_df.hit_number.max()+1)), color="C0", alpha= 0.93) #palette="Blues"); # `order` to get those categories with zero # counts to show up from https://stackoverflow.com/a/45359713/8508004 p.set_xlabel("heptad repeats") plt.ylim(0, 103) #add percent above bars, based on code in middle of https://stackoverflow.com/a/33259038/8508004 ncount = len(sum_pm_df) for pat in p.patches: x=pat.get_bbox().get_points()[:,0] y=pat.get_bbox().get_points()[1,1] # note that this check on the next line was necessary to add when I went back to cases where there's # no counts for certain categories and so `y` was coming up `nan` for those and causing error # about needing positive value for the y value; `math.isnan(y)` based on https://stackoverflow.com/a/944733/8508004 if not math.isnan(y): p.annotate('{:.1f}%'.format(100.*y/(ncount)), (x.mean(), y), ha='center', va='bottom', size = 9, color='#333333') if saveplot: fig = p.get_figure() #based on https://stackoverflow.com/a/39482402/8508004 fig.savefig(saveplot_fn_prefix + '.png') fig.savefig(saveplot_fn_prefix + '.svg'); ###Output _____no_output_____ ###Markdown I assume that '+ 2' should be added to the hit_number for each based on S288C according to [Corden, 2013](https://www.ncbi.nlm.nih.gov/pubmed/24040939) (or `+1` like [Hsin and Manley, 2012](https://www.ncbi.nlm.nih.gov/pubmed/23028141)); however, that is something that could be explored further. ###Code %matplotlib inline # above line works for JupyterLab which I was developing in. Try `%matplotlib notebook` for when in classic. # Visualization # This is loosely based on my past use of seaborn when making `plot_sites_position_across_chromosome.py` and related scripts. # For example, see `GC-clusters relative mito chromosome and feature` where I ran # `%run plot_sites_position_across_chromosome.py GC_df_for_merging.pkl -o strand_ofGCacross_mito_chrom` # add the strain info for listing that without chr info & add species information for coloring on that chromosome_id_prefix = "-" def FASTA_id_to_strain(FAid): ''' use FASTA_id column value to convert to strain_id and then return the strain_id ''' return FAid.split(chromosome_id_prefix)[0] sum_pm_df_for_plot = sum_pm_df.copy() sum_pm_df_for_plot['strain'] = sum_pm_df['FASTA_id'].apply(FASTA_id_to_strain) # sum_pm_df['species'] = sum_pm_df['FASTA_id'].apply(strain_to_species) # since need species for label plot strips # it is easier to add species column first and then use map instead of doing both at same with one `apply` # of a function or both separately, both with `apply` of two different function. # sum_pm_df['species'] = sum_pm_df['strain'].apply(strain_to_species) sum_pm_df_for_plot['species'] = 'cerevisiae' #Want an image file of the figure saved? saveplot = True saveplot_fn_prefix = 'heptad_repeats_by_strain' import matplotlib.pyplot as plt if len(sum_pm_df) > 60: plt.figure(figsize=(8,232)) else: plt.figure(figsize=(8,12)) import seaborn as sns sns.set() # Simple look - Comment out everything below to the next two lines to see it again. 
p = sns.stripplot(x="hit_number", y="strain", data=sum_pm_df_for_plot, marker="h", size=7.5, alpha=.98, palette="tab20b") p = sns.stripplot(x="hit_number", y="strain", data=sum_pm_df_for_plot, marker="D", size=9.5, alpha=.98, hue="Clade") # NOTE CANNOT JUST USE ONE WITH `hue` by 'Clase' because several don't Clades assigned in the supplemental data # and so those left off. This overlays the two and doesn't cause artifacts when size of first maker smaller. p.set_xlabel("heptad repeats") #p.set_xticklabels([" ","23"," ","24", " ", "25"]) # This was much easier than all the stuff I tried for `Adjusted` look below # and the only complaint I have with the results is that what I assume are the `minor` tick lines show up; still ended up # needing this when added `xticks = p.xaxis.get_major_ticks()` in order to not show decimals for ones I kept #p.set(xticks=[]) # this works to remove the ticks entirely; however, I want to keep major ticks ''' xticks = p.xaxis.get_major_ticks() #based on https://stackoverflow.com/q/50820043/8508004 for i in range(len(xticks)): #print (i) # WAS FOR DEBUGGING keep_ticks = [1,3,5] #harcoding essentially again, but at least it works if i not in keep_ticks: xticks[i].set_visible(False) ''' ''' # Highly Adjusted look - Comment out default look parts above. Ended up going with simple above because still couldn't get # those with highest number of repeats with combination I could come up with. sum_pm_df_for_plot["repeats"] = sum_pm_df_for_plot["hit_number"].astype(str) # when not here (use `x="hit_number"` in plot) or # tried `.astype('category')` get plotting of the 0.5 values too sum_pm_df_for_plot.sort_values('hit_number', ascending=True, inplace=True) #resorting again was necessary when # added `sum_pm_df["hit_number"].astype(str)` to get 'lower' to 'higher' as left to right for x-axis; otherwise # it was putting the first rows on the left, which happened to be the 'higher' repeat values #p = sns.catplot(x="repeats", y="strain", hue="species", data=sum_pm_df, marker="D", size=10, alpha=.98) #marker size ignored in catplot? p = sns.stripplot(x="repeats", y="strain", hue="species", data=sum_pm_df, marker="D", size=10, alpha=.98) #p = sns.stripplot(x="repeats", y="strain", hue="species", order = list(species_dict.keys()), data=sum_pm_df_for_plot, marker="D", # size=10, alpha=.98) # not fond of essentially harcoding to strain order but makes more logical sense to have # strains with most repeats at the top of the y-axis; adding `order` makes `sort` order be ignored p.set_xlabel("heptad repeats") sum_pm_df_for_plot.sort_values('hit_number', ascending=False, inplace=True) #revert to descending sort for storing df; ''' if saveplot: fig = p.get_figure() #based on https://stackoverflow.com/a/39482402/8508004 fig.savefig(saveplot_fn_prefix + '.png', bbox_inches='tight') fig.savefig(saveplot_fn_prefix + '.svg'); ###Output _____no_output_____ ###Markdown (Hexagons are used for those without an assigned clade in [the supplemental data Table 1](https://www.nature.com/articles/s41586-018-0030-5) in the plot above.)I assume that '+ 2' should be added to the hit_number for each based on S288C according to [Corden, 2013](https://www.ncbi.nlm.nih.gov/pubmed/24040939) (or `+1` like [Hsin and Manley, 2012](https://www.ncbi.nlm.nih.gov/pubmed/23028141)); however, that is something that could be explored further. ###Code %matplotlib inline # above line works for JupyterLab which I was developing in. Try `%matplotlib notebook` for when in classic. 
# Visualization # This is loosely based on my past use of seaborn when making `plot_sites_position_across_chromosome.py` and related scripts. # For example, see `GC-clusters relative mito chromosome and feature` where I ran # `%run plot_sites_position_across_chromosome.py GC_df_for_merging.pkl -o strand_ofGCacross_mito_chrom` # add the strain info for listing that without chr info & add species information for coloring on that chromosome_id_prefix = "-" def FASTA_id_to_strain(FAid): ''' use FASTA_id column value to convert to strain_id and then return the strain_id ''' return FAid.split(chromosome_id_prefix)[0] sum_pm_df_for_plot = sum_pm_df.copy() sum_pm_df_for_plot['strain'] = sum_pm_df['FASTA_id'].apply(FASTA_id_to_strain) # sum_pm_df['species'] = sum_pm_df['FASTA_id'].apply(strain_to_species) # since need species for label plot strips # it is easier to add species column first and then use map instead of doing both at same with one `apply` # of a function or both separately, both with `apply` of two different function. # sum_pm_df['species'] = sum_pm_df['strain'].apply(strain_to_species) sum_pm_df_for_plot['species'] = 'cerevisiae' #Want an image file of the figure saved? saveplot = True saveplot_fn_prefix = 'heptad_repeats_by_proteinlen' import matplotlib.pyplot as plt if len(sum_pm_df) > 60: plt.figure(figsize=(8,232)) else: plt.figure(figsize=(8,12)) import seaborn as sns sns.set() # Simple look - Comment out everything below to the next two lines to see it again. #p = sns.stripplot(x="hit_number", y="strain", data=sum_pm_df_for_plot, marker="h", size=7.5, alpha=.98, palette="tab20b") p = sns.stripplot(x="hit_number", y="strain", data=sum_pm_df_for_plot, marker="D", size=9.5, alpha=.98, hue="prot_length") # NOTE CANNOT JUST USE ONE WITH `hue` by 'Clase' because several don't Clades assigned in the supplemental data # and so those left off. This overlays the two and doesn't cause artifacts when size of first maker smaller. p.set_xlabel("heptad repeats") #p.set_xticklabels([" ","23"," ","24", " ", "25"]) # This was much easier than all the stuff I tried for `Adjusted` look below # and the only complaint I have with the results is that what I assume are the `minor` tick lines show up; still ended up # needing this when added `xticks = p.xaxis.get_major_ticks()` in order to not show decimals for ones I kept #p.set(xticks=[]) # this works to remove the ticks entirely; however, I want to keep major ticks ''' xticks = p.xaxis.get_major_ticks() #based on https://stackoverflow.com/q/50820043/8508004 for i in range(len(xticks)): #print (i) # WAS FOR DEBUGGING keep_ticks = [1,3,5] #harcoding essentially again, but at least it works if i not in keep_ticks: xticks[i].set_visible(False) ''' ''' # Highly Adjusted look - Comment out default look parts above. Ended up going with simple above because still couldn't get # those with highest number of repeats with combination I could come up with. 
sum_pm_df_for_plot["repeats"] = sum_pm_df_for_plot["hit_number"].astype(str) # when not here (use `x="hit_number"` in plot) or # tried `.astype('category')` get plotting of the 0.5 values too sum_pm_df_for_plot.sort_values('hit_number', ascending=True, inplace=True) #resorting again was necessary when # added `sum_pm_df["hit_number"].astype(str)` to get 'lower' to 'higher' as left to right for x-axis; otherwise # it was putting the first rows on the left, which happened to be the 'higher' repeat values #p = sns.catplot(x="repeats", y="strain", hue="species", data=sum_pm_df, marker="D", size=10, alpha=.98) #marker size ignored in catplot? p = sns.stripplot(x="repeats", y="strain", hue="species", data=sum_pm_df, marker="D", size=10, alpha=.98) #p = sns.stripplot(x="repeats", y="strain", hue="species", order = list(species_dict.keys()), data=sum_pm_df_for_plot, marker="D", # size=10, alpha=.98) # not fond of essentially harcoding to strain order but makes more logical sense to have # strains with most repeats at the top of the y-axis; adding `order` makes `sort` order be ignored p.set_xlabel("heptad repeats") sum_pm_df_for_plot.sort_values('hit_number', ascending=False, inplace=True) #revert to descending sort for storing df; ''' if saveplot: fig = p.get_figure() #based on https://stackoverflow.com/a/39482402/8508004 fig.savefig(saveplot_fn_prefix + '.png', bbox_inches='tight') fig.savefig(saveplot_fn_prefix + '.svg'); ###Output _____no_output_____ ###Markdown I assume that '+ 2' should be added to the hit_number for each based on S288C according to [Corden, 2013](https://www.ncbi.nlm.nih.gov/pubmed/24040939) (or `+1` like [Hsin and Manley, 2012](https://www.ncbi.nlm.nih.gov/pubmed/23028141)); however, that is something that could be explored further. Make raw and summary data available for use elsewhereAll the raw data is there for each strain in `raw_pm_df`. For example, the next cell shows how to view the data associated with the summary table for isolate ADK_8: ###Code ADK_8_raw = raw_pm_df[raw_pm_df['FASTA_id'] == 'ADK_8-20587'].sort_values('hit_number', ascending=True).reset_index(drop=True) ADK_8_raw ###Output _____no_output_____ ###Markdown The summary and raw data will be packaged up into one file in the cell below. One of the forms will be a tabular text data ('.tsv') files that can be opened in any spreadsheet software. 
###Code # save summary and raw results for use elsewhere (or use `.pkl` files for reloading the pickled dataframe into Python/pandas) patmatch_fn_prefix = gene_name + "_orthologs_patmatch_results" patmatchsum_fn_prefix = gene_name + "_orthologs_patmatch_results_summary" patmatchsumFILTERED_fn_prefix = gene_name + "_orthologs_patmatch_results_summaryFILTERED" patmatch_fn = patmatch_fn_prefix + ".tsv" pkl_patmatch_fn = patmatch_fn_prefix + ".pkl" patmatchsumUNF_fn = patmatchsumFILTERED_fn_prefix + ".tsv" pklsum_patmatchUNF_fn = patmatchsumFILTERED_fn_prefix + ".pkl" patmatchsum_fn = patmatchsum_fn_prefix + ".tsv" pklsum_patmatch_fn = patmatchsum_fn_prefix + ".pkl" import pandas as pd sum_pm_df.to_pickle(pklsum_patmatch_fn) sum_pm_df.to_csv(patmatchsum_fn, sep='\t') # keep index is default sys.stderr.write("Text file of summary details after filtering saved as '{}'.".format(patmatchsum_fn)) sum_pm_UNFILTEREDdf.to_pickle(pklsum_patmatchUNF_fn) sum_pm_UNFILTEREDdf.to_csv(patmatchsumUNF_fn, sep='\t') # keep index is default sys.stderr.write("\nText file of summary details before filtering saved as '{}'.".format(patmatchsumUNF_fn)) raw_pm_df.to_pickle(pkl_patmatch_fn) raw_pm_df.to_csv(patmatch_fn, sep='\t') # keep index is default sys.stderr.write("\nText file of raw details saved as '{}'.".format(patmatchsum_fn)) # pack up archive dataframes pm_dfs_list = [patmatch_fn,pkl_patmatch_fn,patmatchsumUNF_fn,pklsum_patmatchUNF_fn, patmatchsum_fn,pklsum_patmatch_fn] archive_file_name = patmatch_fn_prefix+".tar.gz" !tar czf {archive_file_name} {" ".join(pm_dfs_list)} # use the list for archiving command sys.stderr.write("\nCollected pattern matching" " results gathered and saved as " "`{}`.".format(archive_file_name)) ###Output _____no_output_____ ###Markdown Download the tarballed archive of the files to your computer.For now that archive doesn't include the figures generated from the plots because with a lot of strains they can get large. Download those if you want them. (Look for `saveplot_fn_prefix` settings in the code to help identify file names.) ---- ###Code import time def executeSomething(): #code here print ('.') time.sleep(480) #60 seconds times 8 minutes while True: executeSomething() ###Output _____no_output_____
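###Markdown To double-check what went into the tarballed archive before downloading it, Python's `tarfile` module can list or unpack it. This is only a sketch; the file name below is a placeholder for the `archive_file_name` value produced above. ###Code
import tarfile

# placeholder name; use the archive name reported above
with tarfile.open("GENE_orthologs_patmatch_results.tar.gz") as tar:
    print(tar.getnames())                # the bundled .tsv / .pkl files
    tar.extractall("unpacked_results")   # optionally unpack into a subdirectory
###Output _____no_output_____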
ch12/Knowledge_Graph_spaCy3.ipynb
###Markdown [**Blueprints for Text Analysis Using Python**](https://github.com/blueprints-for-text-analytics-python/blueprints-text) Jens Albrecht, Sidharth Ramachandran, Christian Winkler**If you like the book or the code examples here, please leave a friendly comment on [Amazon.com](https://www.amazon.com/Blueprints-Text-Analytics-Using-Python/dp/149207408X)!** Chapter 12: Building a Knowledge Graph Updated Version for spaCy 3.xYou find the version as printed in the book using spaCy 2.3.2 [here](Knowledge_Graph.ipynb).
###Code import spacy assert spacy.__version__[0] >= '3'
###Output _____no_output_____
###Markdown We adjusted this notebook to run with spaCy 3.0. Note that spaCy 3.0 includes transformer models, which are more accurate than the conventional models. If you go for accuracy in named entity recognition, you should prefer the transformer models. See https://spacy.io/universe/project/spacy-transformers**Changes to `nlp.add_pipe`**: https://spacy.io/api/languageadd_pipe "As of v3.0, the Language.add_pipe method doesn’t take callables anymore and instead expects the name of a component factory registered using @Language.component or @Language.factory. It now takes care of creating the component, adds it to the pipeline and returns it."**Changes to `matcher.add`**: https://spacy.io/api/matcheradd "As of spaCy v3.0, Matcher.add takes a list of patterns as the second argument (instead of a variable number of arguments). The on_match callback becomes an optional keyword argument."**NeuralCoref not yet supported in spaCy 3**But planned: https://github.com/huggingface/neuralcoref/issues/295Currently, we cannot import NeuralCoref, so the functions for anaphora resolution are replaced by dummies in this notebook. RemarkThe code in this notebook differs slightly from the printed book. For example, we frequently use pretty print (`pp.pprint`) instead of `print` and `tqdm`'s `progress_apply` instead of Pandas' `apply`. Moreover, several layout and formatting commands, like `figsize` to control figure size or subplot commands, are removed in the book. You may also find some lines marked with three hashes. Those are not in the book either, as they don't contribute to the concept. All of this is done to simplify the code in the book and put the focus on the important parts instead of formatting.-------NOTES:To install the 'neuralcoref' package, git is required as a client. In addition, building the package on Windows 10 requires Visual Studio with C++ 14.0 or greater (https://visualstudio.microsoft.com/visual-cpp-build-tools/). The installation of 'neuralcoref' is then invoked with `pip install git+https://github.com/huggingface/neuralcoref.git` A GEXF file is generated that can be viewed with Gephi (https://gephi.org/). Starting Gephi requires Java version 1.8 (or higher). SetupSet directory locations. If working on Google Colab: copy files and install required libraries.**On Colab:** Use runtime **with GPU (Menu&rarr;Runtime&rarr;Change runtime type)** for better performance **before** you start this notebook.
###Code import sys, os ON_COLAB = 'google.colab' in sys.modules if ON_COLAB: GIT_ROOT = 'https://github.com/blueprints-for-text-analytics-python/blueprints-text/raw/master' os.system(f'wget {GIT_ROOT}/ch12/setup.py') %run -i setup.py
###Output _____no_output_____
###Markdown Load Python SettingsCommon imports, defaults for formatting in Matplotlib, Pandas etc. 
###Code %run "$BASE_DIR/settings.py" %reload_ext autoreload %autoreload 2 %config InlineBackend.figure_format = 'png' # to print output of all statements and not just the last from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" sys.path.append(BASE_DIR + '/packages') # to import blueprints package from blueprints.knowledge import display_ner, reset_pipeline, print_dep_tree, alias_lookup ###Output _____no_output_____ ###Markdown What you'll learn and what we build Knowledge Graphs Blueprint to Query Wikidata for Aliases not in BookBelow you find an example of what you can do with public ontologies like Wikidata. Here, we defined a SPARQL query to retrieve the names, aliases and URLs of all entities of type "United States federal executive department" (https://www.wikidata.org/wiki/Q910252). ###Code # pip install sparqlwrapper # https://rdflib.github.io/sparqlwrapper/ import sys from SPARQLWrapper import SPARQLWrapper, JSON endpoint_url = "https://query.wikidata.org/sparql" query = """ SELECT ?org ?orgLabel ?aliases ?urlLabel ?country ?countryLabel WITH { SELECT ?org (group_concat(distinct ?alias;separator=",") as ?aliases) WHERE { ?org wdt:P31 wd:Q910252. # org is(P31) US department (Q910252) ?org skos:altLabel ?alias. filter(lang(?alias)="en") } GROUP BY ?org } AS %i WHERE { include %i ?org wdt:P856 ?url; # has official website (P856) wdt:P17 ?country. # has country (P17) SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". } } ORDER BY ?orgLabel """ def sparql_df(endpoint_url, query): user_agent = "Wikidata-Service Python/%s.%s" % (sys.version_info[0], sys.version_info[1]) sparql = SPARQLWrapper(endpoint_url, agent=user_agent) sparql.setQuery(query) sparql.setReturnFormat(JSON) results = sparql.query().convert() columns = results['head']['vars'] rows = [] for result in results["results"]["bindings"]: row = {} for col in result: row[col] = result[col]['value'] rows.append(row) return pd.DataFrame.from_records(rows, columns=columns) wd_df = sparql_df(endpoint_url, query) # rename columns wd_df.columns = ['org_id', 'org', 'aliases', 'url', 'country_id', 'country'] wd_df['org_id'] = wd_df['org_id'].str.replace('http://www.wikidata.org/entity/', '') wd_df['country_id'] = wd_df['country_id'].str.replace('http://www.wikidata.org/entity/', '') wd_df['aliases'] = wd_df['aliases'].str.split(',') wd_df.head(10) ###Output _____no_output_____ ###Markdown Building a Knowledge Graph Introducing the Data Set ###Code import nltk nltk.download('reuters') ###Output _____no_output_____ ###Markdown Data Preparation of NLTK Reuters Corpus (not in book)This section contains the steps how to create the data frame for some of the examples. ###Code from nltk.corpus import reuters # List of documents documents = reuters.fileids() print(str(len(documents)) + " documents") print(str(len(reuters.categories())) + " categories:") print(reuters.categories()[:10] + ['...']) print(reuters.readme()[:200]) ###Output _____no_output_____ ###Markdown Each article is stored as a separated file. The data files are identified by a file ID of the form "train/1234" or "test/5678". We first create a data frame with the `fileid` column and then load the raw text for each ID into a second column. Finally, as we don't care whether it's train or test, we just the number from the file ID and use it as the index of our data frame. 
###Code from nltk.corpus import reuters # create fileid column df = pd.DataFrame(reuters.fileids("acq"), columns=['fileid']) # load raw texts df['raw'] = df['fileid'].progress_map(lambda f: reuters.raw(f)) # set index to numeric id df.index = df['fileid'].map(lambda f: int(f.split('/')[1])) df.index.name = None df = df.drop(columns=['fileid']).sort_index() df.sample(3, random_state=12) ###Output _____no_output_____ ###Markdown As we see from the example, we will still need some data cleaning before we can expect to get reasonably good results during named entity recognition. First, we separate headlines from the actual news text by splitting at the first newline. ###Code df[['headline', 'raw_text']] = df.progress_apply(lambda row: row['raw'].split('\n', 1), axis='columns', result_type='expand') ###Output _____no_output_____ ###Markdown Now we use the adapted data cleaning blueprint from Chapter 4 for to remove some disturbing artifacts, substitute some abbreviations (like "dlr" for dollar) and repair some typos. ###Code def clean(text): text = text.replace('&lt;','<') # html escape text = re.sub(r'[<>]', '"', text) # quotation marks instead of <> text = re.sub(r'[ ]*"[A-Z\.]+"', '', text) # drop stock symbols text = re.sub(r'[ ]*\([A-Z\.]+\)', '', text) # drop stock symbols text = re.sub(r'\bdlr(s?)\b', r'dollar\1', text, flags=re.I) text = re.sub(r'\bmln(s?)\b', r'million\1', text, flags=re.I) text = re.sub(r'\bpct\b', r'%', text, flags=re.I) # normalize INC to Inc text = re.sub(r'\b(Co|Corp|Inc|Plc|Ltd)\b', lambda m: m.expand(r'\1').capitalize(), text, flags=re.I) text = re.sub(r'"', r'', text) # quotation marks text = re.sub(r'\s+', ' ', text) # multiple whitespace by one text = re.sub(r'acquisiton', 'acquisition', text) # typo text = re.sub(r'Nippon bLife', 'Nippon Life', text) # typo text = re.sub(r'COMSAT.COMSAT', 'COMSAT. COMSAT', text) # missing space at end of sentence #text = re.sub(r'Audio/Video', 'Audio-Video', text) # missing space at end of sentence return text.strip() ###Output _____no_output_____ ###Markdown So let's have a look at the result of our data cleaning steps : ###Code # that's what the substitutions do texts = [ """Trafalgar House Plc &lt;TRAF.L> said it has\n acquired the entire share capital of &lt;Capital Homes Inc> of the\n U.S. For 20 mln dlrs in cash.""", """Equiticorp Holdings Ltd &lt;EQUW.WE> now owns\n or has received acceptances representing 59.93 pct of the\n issued ordinary share capital of Guinness Peat Group Plc\n &lt;GNSP.L>, Equiticorp said in a statement.""", """Computer Terminal Systems Inc said it has completed the sale of 200,000 shares of its common stock, and warrants to acquire an additional one mln shares, to "Sedio N.V." of Lugano, Switzerland for 50,000 dlrs.""", """North American Group Ltd said it has a definitive agreement to buy 100 pct of Pioneer Business Group Inc of Atlanta.""" ] for text in texts: print(clean(text), end="\n\n") ###Output _____no_output_____ ###Markdown We apply it to the `raw_text` and create a new `text` column: ###Code df['text'] = df['raw_text'].progress_map(clean) df['headline'] = df['headline'].progress_map(clean) ###Output _____no_output_____ ###Markdown The newly created column `text` contains the cleaned articles. But we have one disturbing artifact left in the data: a few articles, like the second one in the sample above, consist only of capital letters. In fact, here the raw text is identical to the headlines. 
We finally drop those because named entity recognition will not yield useful results on such a text.
###Code # we will drop these articles with only capital letters df[df['raw_text'].map(lambda t: t.isupper())][['headline', 'raw_text']].head(3) # drop articles with only capital letters df = df[df['raw_text'].map(lambda t: not t.isupper())] # this is our clean data set df[['headline', 'text']].sample(3, random_state=12) pd.options.display.max_colwidth = 200
###Output _____no_output_____
###Markdown Book section continues ... Named-Entity Recognition
###Code nlp = spacy.load('en_core_web_sm') print(*nlp.pipeline, sep='\n') text = """Hughes Tool Co Chairman W.A. Kistler said its merger with Baker International Corp was still under consideration. We hope to come soon to a mutual agreement, Kistler said. The directors of Baker filed a law suit in Texas to force Hughes to complete the merger.""" text = re.sub(r'\s+', ' ', text).strip() ### doc = nlp(text) print(*[(e.text, e.label_) for e in doc.ents], sep=' ') from spacy import displacy displacy.render(doc, style='ent')
###Output _____no_output_____
###Markdown Blueprint: Rule-based Named-Entity Recognition
###Code reset_pipeline(nlp, pipes=[]) from spacy.pipeline import EntityRuler departments = ['Justice', 'Transportation'] patterns = [{"label": "GOV", "pattern": [{"TEXT": "U.S.", "OP": "?"}, {"TEXT": "Department"}, {"TEXT": "of"}, {"TEXT": {"IN": departments}, "ENT_TYPE": "ORG"}]}, {"label": "GOV", "pattern": [{"TEXT": "U.S.", "OP": "?"}, {"TEXT": {"IN": departments}, "ENT_TYPE": "ORG"}, {"TEXT": "Department"}]}, {"label": "GOV", "pattern": [{"TEXT": "Securities"}, {"TEXT": "and"}, {"TEXT": "Exchange"}, {"TEXT": "Commission"}]}] # not in book, but useful if you modify the rules if nlp.has_pipe('entity_ruler'): nlp.remove_pipe('entity_ruler') # in spaCy 3 the ruler is added via its factory name and the patterns are attached afterwards entity_ruler = nlp.add_pipe('entity_ruler', config={'overwrite_ents': True}) entity_ruler.add_patterns(patterns) text = """Justice Department is an alias for the U.S. Department of Justice. Department of Transportation and the Securities and Exchange Commission are government organisations, but the Sales Department is not.""" #text = re.sub(r'\s+', ' ', text).strip() ### doc = nlp(text) # print(*[([t.text for t in e], e.label_) for e in doc.ents], sep='\n') ### displacy.render(doc, style='ent', jupyter=True)
###Output _____no_output_____
###Markdown Blueprint: Normalizing Named-Entities
###Code reset_pipeline(nlp, ['entity_ruler']) text = "Baker International's shares climbed on the New York Stock Exchange." 
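# note (not in book): for this sentence the small model usually keeps the possessive in the span and
# returns "Baker International's" as the ORG entity; the norm_entities component defined below trims
# leading articles and trailing particles like 's from entity spans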
doc = nlp(text) print(*[([t.text for t in e], e.label_) for e in doc.ents], sep='\n') from spacy.tokens import Span from spacy import Language @Language.component("norm_entities") def norm_entities(doc): ents = [] for ent in doc.ents: if ent[0].pos_ == "DET": # leading article ent = Span(doc, ent.start+1, ent.end, label=ent.label) if len(ent) > 0: if ent[-1].pos_ == "PART": # trailing particle like 's ent = Span(doc, ent.start, ent.end-1, label=ent.label) ents.append(ent) doc.ents = tuple(ents) return doc nlp.add_pipe("norm_entities") doc = nlp(text) print(*[([t.text for t in e], e.label_) for e in doc.ents], sep='\n') # not in book displacy.render(doc, style='ent', jupyter=True) ###Output _____no_output_____ ###Markdown Merging Entity Tokens ###Code from spacy.pipeline import merge_entities if nlp.has_pipe('merge_entities'): ### _ = nlp.remove_pipe('merge_entities') ### nlp.add_pipe('merge_entities') doc = nlp(text) print(*[(t.text, t.ent_type_) for t in doc if t.ent_type_ != '']) ###Output _____no_output_____ ###Markdown Testing the NER Pipeline on Sample Data (not in book)Take random samples from the text and display the result. ###Code reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities']) i = df['text'].sample(1).index[0] print("Text Number:", i) text = df['text'].loc[i][:600] text = re.sub(r'\s+', ' ', text.strip()) print(text) doc = nlp(text) displacy.render(doc, style='ent', jupyter=True) print(*[([t.text for t in e], e.label_) for e in doc.ents], sep='\n') pd.options.display.max_rows = 500 # blueprint function to show tokens with entity attributes display_ner(doc, include_punct=True).query('ent_type != ""') pd.options.display.max_rows = 60 ###Output _____no_output_____ ###Markdown Coreference Resolution Blueprint: Using spaCy's Token Extensions ###Code # not in book, but usefule if you modify the extension from spacy.tokens import Token if Token.has_extension('ref_n'): _ = Token.remove_extension('ref_n') if Token.has_extension('ref_t'): _ = Token.remove_extension('ref_t') if Token.has_extension('ref_t_'): _ = Token.remove_extension('ref_t_') from spacy.tokens import Token Token.set_extension('ref_n', default='') Token.set_extension('ref_t', default='') @Language.component("init_coref") def init_coref(doc): for e in doc.ents: if e.label_ in ['ORG', 'GOV', 'PERSON']: e[0]._.ref_n, e[0]._.ref_t = e.text, e.label_ return doc ###Output _____no_output_____ ###Markdown Blueprint: Alias Resolution ###Code from blueprints.knowledge import alias_lookup for token in ['Transportation Department', 'DOT', 'SEC', 'TWA']: print(token, ':', alias_lookup[token]) reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref']) @Language.component("alias_resolver") def alias_resolver(doc): """Lookup aliases and store result in ref_t, ref_n""" for ent in doc.ents: token = ent[0].text if token in alias_lookup: a_name, a_type = alias_lookup[token] ent[0]._.ref_n, ent[0]._.ref_t = a_name, a_type return propagate_ent_type(doc) @Language.component("propagate_ent_type") def propagate_ent_type(doc): """propagate entity type stored in ref_t""" ents = [] for e in doc.ents: if e[0]._.ref_n != '': # if e is a coreference e = Span(doc, e.start, e.end, label=e[0]._.ref_t) ents.append(e) doc.ents = tuple(ents) return doc nlp.add_pipe('alias_resolver') from blueprints.knowledge import display_ner text = """The deal of Trans World Airlines is under investigation by the U.S. Department of Transportation. 
The Transportation Department will block the deal of TWA.""" text = re.sub(r'\s+', ' ', text).strip() ### doc = nlp(text) display_ner(doc).query("ref_n != ''")[['text', 'ent_type', 'ref_n', 'ref_t']] ###Output _____no_output_____ ###Markdown Blueprint: Resolving Name Variations ###Code reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver']) text = """ Hughes Tool Co Chairman W.A. Kistler said its merger with Baker International Corp. was still under consideration. We hope to come to a mutual agreement, Kistler said. Baker will force Hughes to complete the merger. """ text = re.sub(r'\s+', ' ', text).strip() ### doc = nlp(text) displacy.render(doc, style='ent', jupyter=True) def name_match(m1, m2): m2 = re.sub(r'[()\.]', '', m2) # ignore parentheses and dots m2 = r'\b' + m2 + r'\b' # \b marks word boundary m2 = re.sub(r'\s+', r'\\b.*\\b', m2) return re.search(m2, m1, flags=re.I) is not None @Language.component("name_resolver") def name_resolver(doc): """create name-based reference to e1 as primary mention of e2""" ents = [e for e in doc.ents if e.label_ in ['ORG', 'PERSON']] for i, e1 in enumerate(ents): for e2 in ents[i+1:]: if name_match(e1[0]._.ref_n, e2[0].text): e2[0]._.ref_n = e1[0]._.ref_n e2[0]._.ref_t = e1[0]._.ref_t return propagate_ent_type(doc) nlp.add_pipe('name_resolver') doc = nlp(text) displacy.render(doc, style='ent', jupyter=True) display_ner(doc).query("ref_n != ''")[['text', 'ent_type', 'ref_n', 'ref_t']] ###Output _____no_output_____ ###Markdown Testing Name Coreference Resolution Sample Data (not in book)Take random samples from the text and display the result. You may find examples where the resolution is not working correctly. We have put the emphasis on the simplicity of rules, so there will be cases in which they don't work. ###Code reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver', 'name_resolver']) # not in the book: # pick random examples to test the string matching i = df['text'].sample(1).index[0] i = 10 print("Text Number:", i) text = df['text'].loc[i]#[:300] # print(text) doc = nlp(text) displacy.render(doc, style='ent', jupyter=True) display_ner(doc).query("ref_n != ''") ###Output _____no_output_____ ###Markdown Blueprint: Anaphora Resolution with NeuralCoref ###Code text = """Hughes Tool Co said its merger with Baker was still under consideration. Hughes had a board meeting today. W.A. Kistler mentioned that the company hopes for a mutual agreement. He is reasonably confident.""" text = re.sub(r'\s+', ' ', text).strip() ### reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver', 'name_resolver']) # NEXT CODE BLOCKS ARE COMMENTED UNTIL NEURALCOREF SUPPORTS SPACY 3! 
# from neuralcoref import NeuralCoref # neural_coref = NeuralCoref(nlp.vocab, greedyness=0.45) # nlp.add_pipe(neural_coref, name='neural_coref') # doc = nlp(text) # print(*doc._.coref_clusters, sep='\n') ###Output _____no_output_____ ###Markdown Not in the book: Try the visualization of NeuralCoref!https://huggingface.co/coref/?text=Hughes%20Tool%20Co%20said%20its%20merger%20with%20Baker%20was%20still%20under%20consideration.%20 ###Code @Language.component("anaphor_coref") def anaphor_coref(doc): """anaphora resolution""" for token in doc: # if token is coref and not already dereferenced if token._.in_coref and token._.ref_n == '': ref_span = token._.coref_clusters[0].main # get referred span if len(ref_span) <= 3: # consider only short spans for ref in ref_span: # find first dereferenced entity if ref._.ref_n != '': token._.ref_n = ref._.ref_n token._.ref_t = ref._.ref_t break return doc # if nlp.has_pipe('anaphor_coref'): ### # nlp.remove_pipe('anaphor_coref') ### # nlp.add_pipe('anaphor_coref') # doc = nlp(text) # display_ner(doc).query("ref_n != ''") \ # [['text', 'ent_type', 'main_coref', 'ref_n', 'ref_t']] # Dummy components for neural_coref and anaphor_coref # to keep the remaining code working @Language.component("neural_coref") def neural_coref(doc): return doc @Language.component("anaphor_coref") def anaphor_coref(doc): return doc ###Output _____no_output_____ ###Markdown Name Normalization ###Code def strip_legal_suffix(text): return re.sub(r'(\s+and)?(\s+|\b(Co|Corp|Inc|Plc|Ltd)\b\.?)*$', '', text) print(strip_legal_suffix('Hughes Tool Co')) @Language.component("norm_names") def norm_names(doc): for t in doc: if t._.ref_n != '' and t._.ref_t in ['ORG']: t._.ref_n = strip_legal_suffix(t._.ref_n) if t._.ref_n == '': t._.ref_t = '' return doc nlp.add_pipe("norm_names") ###Output _____no_output_____ ###Markdown Entity Linking Testing Coreference Resolution (not in book)Not in the book, but a good demonstration of what works good and what doesn't work, yet. ###Code # recreate pipeline reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver', 'name_resolver', 'neural_coref', 'anaphor_coref', 'norm_names']) # pick random examples and test i = df['text'].sample(1).index[0] i = 2948 # 1862, 1836,2948,7650,3013,2950,3095 print("Text Number:", i) text = df['text'].loc[i][:500] print(text) doc = nlp(text) displacy.render(doc, style='ent', jupyter=True) # display_ner(doc).query("ref_n != ''")[['text', 'ent_type', 'main_coref', 'ref_n', 'ref_t']] display_ner(doc).query("ref_n != ''")[['text', 'ent_type', 'ref_n', 'ref_t']] ###Output _____no_output_____ ###Markdown Blueprint: Creating a Cooccurence Graph **Largest connected component of the cooccurrence graph generated from the Reuters corpus** The visualization was prepared with the help of [Gephi](https://gephi.org/). 
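To make the co-occurrence idea concrete before the extraction blueprint below, here is a toy snippet (not in book); the three entity names are borrowed from examples later in this chapter, and every unordered pair of entities appearing in the same article counts as one co-occurrence. ###Code
# toy illustration: three entities in one article yield three unordered pairs
from itertools import combinations

ents = sorted({("Fairchild", "ORG"), ("Fujitsu", "ORG"), ("Schlumberger", "ORG")})
print(*combinations(ents, 2), sep='\n')
###Output _____no_output_____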
Extracting Cooccurrences from a Document ###Code from itertools import combinations def extract_coocs(doc, include_types): ents = set([(e[0]._.ref_n, e[0]._.ref_t) for e in doc.ents if e[0]._.ref_t in include_types]) yield from combinations(sorted(ents), 2) reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver', 'name_resolver', 'neural_coref', 'anaphor_coref', 'norm_names']) batch_size = 100 batches = math.ceil(len(df)/batch_size) ### coocs = [] for i in tqdm(range(0, len(df), batch_size), total=batches): docs = nlp.pipe(df['text'][i:i+batch_size], disable=['neural_coref', 'anaphor_coref']) for j, doc in enumerate(docs): try: coocs.extend([(df.index[i+j], *c) for c in extract_coocs(doc, ['ORG', 'GOV'])]) except: print(f"Index {i+j}") print(df['text'][i+j][0:100]) raise print(*coocs[:3], sep='\n') coocs = [([id], *e1, *e2) for (id, e1, e2) in coocs] cooc_df = pd.DataFrame.from_records(coocs, columns=('article_id', 'ent1', 'type1', 'ent2', 'type2')) cooc_df = cooc_df.groupby(['ent1', 'type1', 'ent2', 'type2'])['article_id'] \ .agg(['count', 'sum']) \ .rename(columns={'count': 'freq', 'sum': 'articles'}) \ .reset_index().sort_values('freq', ascending=False) cooc_df['articles'] = cooc_df['articles'].map( lambda lst: ','.join([str(a) for a in lst[:5]])) cooc_df.head(3) ###Output _____no_output_____ ###Markdown Visualizing the Graph with Gephi ###Code import networkx as nx graph = nx.from_pandas_edgelist( cooc_df[['ent1', 'ent2', 'articles', 'freq']] \ .query('freq > 3').rename(columns={'freq': 'weight'}), source='ent1', target='ent2', edge_attr=True) nx.readwrite.write_gexf(graph, 'cooc.gexf', encoding='utf-8', prettyprint=True, version='1.2draft') ###Output _____no_output_____ ###Markdown Visualizing the Graph with NetworkX (not in book)We can also use NetworkX for drawing, it's just not that nice. By executing the code below you will see more nodes than in the book, where we manually removed several nodes for the sake of clarity. ###Code # identify the greatest component (connected subgraph) # and plot only that one giant_component = sorted(nx.connected_components(graph), key=len, reverse=True) graph = graph.subgraph(giant_component[0]) pos = nx.kamada_kawai_layout(graph, weight='weight') # pos = nx.fruchterman_reingold_layout(graph, weight='weight') # pos = nx.circular_layout(graph) _ = plt.figure(figsize=(20, 20)) nx.draw(graph, pos, node_size=1000, node_color='skyblue', alpha=0.8, with_labels = True) plt.title('Graph Visualization', size=15) for (node1,node2,data) in graph.edges(data=True): width = data['weight'] _ = nx.draw_networkx_edges(graph,pos, edgelist=[(node1, node2)], width=width, edge_color='#505050', alpha=0.5) plt.show() ###Output _____no_output_____ ###Markdown Blueprint: Identifying Acronyms (not in book)It is very easy to generate a very good list of suggestions for acronyms if you search for frequent cooccurrences of acronyms. To find possible acronyms in the cooccurrence data frame, we look for all tuples that have an acronym (all capital letters) either as source or as target. As additional conditions, we require that the first letter in both is the same and the combination exists more than once. 
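Before running this over the whole dataframe, the string part of the heuristic can be sanity-checked on a single pair; `Trans World Airlines` / `TWA` is one of the alias pairs used earlier in this notebook. The frequency condition (`freq > 1`) is applied on the dataframe in the next cell. ###Code
# quick check (not in book) of the two string conditions described above
e1, e2 = "Trans World Airlines", "TWA"
print((e1.isupper() or e2.isupper()) and e1[:1] == e2[:1])   # True
###Output _____no_output_____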
###Code reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'name_resolver', 'norm_names']) # no alias resolver batch_size = 100 batches = math.ceil(len(df)/batch_size) ### coocs = [] for i in tqdm(range(0, len(df), batch_size), total=batches): docs = nlp.pipe(df['text'][i:i+batch_size]) for j, doc in enumerate(docs): coocs.extend([(df.index[i+j], *c) for c in extract_coocs(doc, ['ORG', 'GOV'])]) coocs = [([id], *e1, *e2) for (id, e1, e2) in coocs] cooc_df = pd.DataFrame.from_records(coocs, columns=('article_id', 'ent1', 'type1', 'ent2', 'type2')) cooc_df = cooc_df.groupby(['ent1', 'ent2'])['article_id'] \ .agg(['count']).rename(columns={'count': 'freq'}) \ .reset_index().sort_values('freq', ascending=False) acro_pattern = (cooc_df['ent1'].str.isupper() | cooc_df['ent2'].str.isupper()) & \ (cooc_df['ent1'].str[:1] == cooc_df['ent2'].str[:1]) & \ (cooc_df['freq'] > 1) print(len(cooc_df[acro_pattern])) cooc_df[acro_pattern][:10] ###Output _____no_output_____ ###Markdown For our corpus, this yields about 40 potential acronyms.We save them to a file: ###Code # export to csv cooc_df[acro_pattern][['ent1', 'ent2']] \ .sort_values(['ent1', 'ent2']) \ .to_csv('possible_acronyms.txt', index=False) ###Output _____no_output_____ ###Markdown This file has to be curated manually. After cleaning, we load the remaining acronyms and convert them to a dictionary: ###Code # curate manually the csv acro_df = pd.read_csv('possible_acronyms.txt') acro_df.set_index('ent1')['ent2'].to_dict() ###Output _____no_output_____ ###Markdown We took this list, and curated it to create a dictionary that maps acronyms to their long names. It is provided in the blueprints package for this chapter and part of `alias_lookup`. Here are some example entries: ###Code from blueprints.knowledge import _acronyms for acro in ['TWA', 'UCPB', 'SEC', 'DOT']: print(acro, ' --> ', alias_lookup[acro]) ###Output _____no_output_____ ###Markdown Relation Extraction Blueprint: Relation Extraction by Phrase Matching ###Code # use large model, otherwise the examples look different! 
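# note (not in book): outside Colab the large model can be installed once from the command line
# with: python -m spacy download en_core_web_lg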
# to make it work on Colab, we need to import the model directly # usually you would use nlp = spacy.load('en_core_web_lg') import en_core_web_lg nlp = en_core_web_lg.load() # need to re-create the entity ruler after reloading nlp # because new entity type 'GOV' needs to be added to nlp.vocab entity_ruler = EntityRuler(nlp, patterns=patterns, overwrite_ents=True) # recreate pipeline reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver', 'name_resolver', 'neural_coref', 'anaphor_coref', 'norm_names']) text = """Fujitsu plans to acquire 80% of Fairchild Corp, an industrial unit of Schlumberger.""" text = re.sub('\s+', ' ', text).strip() ### doc = nlp(text) displacy.render(doc, style='ent', jupyter=True) from spacy.matcher import Matcher matcher = Matcher(nlp.vocab) acq_synonyms = ['acquire', 'buy', 'purchase'] pattern = [{'_': {'ref_t': 'ORG'}}, # subject {'_': {'ref_t': {'NOT_IN': ['ORG']}}, 'OP': '*'}, {'POS': 'VERB', 'LEMMA': {'IN': acq_synonyms}}, {'_': {'ref_t': {'NOT_IN': ['ORG']}}, 'OP': '*'}, {'_': {'ref_t': 'ORG'}}] # object matcher.add('acquires', [pattern]) subs_synonyms = ['subsidiary', 'unit'] pattern = [{'_': {'ref_t': 'ORG'}}, # subject {'_': {'ref_t': {'NOT_IN': ['ORG']}}, 'POS': {'NOT_IN': ['VERB']}, 'OP': '*'}, {'LOWER': {'IN': subs_synonyms}}, {'TEXT': 'of'}, {'_': {'ref_t': {'NOT_IN': ['ORG']}}, 'POS': {'NOT_IN': ['VERB']}, 'OP': '*'}, {'_': {'ref_t': 'ORG'}}] # object matcher.add('subsidiary-of', [pattern]) def extract_rel_match(doc, matcher): for sent in doc.sents: for match_id, start, end in matcher(sent): span = sent[start:end] # matched span pred = nlp.vocab.strings[match_id] # rule name subj, obj = span[0], span[-1] if pred.startswith('rev-'): # reversed relation subj, obj = obj, subj pred = pred[4:] yield ((subj._.ref_n, subj._.ref_t), pred, (obj._.ref_n, obj._.ref_t)) pattern = [{'_': {'ref_t': 'ORG'}}, # subject {'LOWER': {'IN': subs_synonyms}}, # predicate {'_': {'ref_t': 'ORG'}}] # object matcher.add('rev-subsidiary-of', [pattern]) text = """Fujitsu plans to acquire 80% of Fairchild Corp, an industrial unit of Schlumberger. The Schlumberger unit Fairchild Corp received an offer.""" text = re.sub('\s+', ' ', text) ### doc = nlp(text) print(*extract_rel_match(doc, matcher), sep='\n') text = "Fairchild Corp was acquired by Fujitsu." print(*extract_rel_match(nlp(text), matcher), sep='\n') text = "Fujitsu, a competitor of NEC, acquired Fairchild Corp." print(*extract_rel_match(nlp(text), matcher), sep='\n') if matcher.has_key("acquires"): matcher.remove("acquires") ###Output _____no_output_____ ###Markdown Blueprint: Relation Extraction using Dependency Trees ###Code # recreate pipeline reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver', 'name_resolver', 'neural_coref', 'anaphor_coref', 'norm_names']) text = "Fujitsu, a competitor of NEC, acquired Fairchild Corp." doc = nlp(text) displacy.render(doc, style='dep', jupyter=True, options={'compact': False, 'distance': 100}) text = "Fairchild Corp was acquired by Fujitsu." doc = nlp(text) displacy.render(doc, style='dep', jupyter=True, options={'compact': False, 'distance': 100}) # Here is the longer part of the code, that was skipped in the book. # Actually we search for the shortest path between the # subject running through our predicate (verb) to the object. # subject and object are organizations in our examples. 
# Here are the three helper functions omitted in the book: # - bfs: breadth first searching the closest subject/object # - is_passive: checks if noun or verb is in passive form # - find_subj: searches left part of tree for subject # - find_obj: searches right part of tree for object from collections import deque def bfs(root, ent_type, deps, first_dep_only=False): """Return first child of root (included) that matches ent_type and dependency list by breadth first search. Search stops after first dependency match if first_dep_only (used for subject search - do not "jump" over subjects)""" to_visit = deque([root]) # queue for bfs while len(to_visit) > 0: child = to_visit.popleft() # print("child", child, child.dep_) if child.dep_ in deps: if child._.ref_t == ent_type: return child elif first_dep_only: # first match (subjects) return None elif child.dep_ == 'compound' and \ child.head.dep_ in deps and \ child._.ref_t == ent_type: # check if contained in compound return child to_visit.extend(list(child.children)) return None def is_passive(token): if token.dep_.endswith('pass'): # noun return True for left in token.lefts: # verb if left.dep_ == 'auxpass': return True return False def find_subj(pred, ent_type, passive): """Find closest subject in predicates left subtree or predicates parent's left subtree (recursive). Has a filter on organizations.""" for left in pred.lefts: if passive: # if pred is passive, search for passive subject subj = bfs(left, ent_type, ['nsubjpass', 'nsubj:pass'], True) else: subj = bfs(left, ent_type, ['nsubj'], True) if subj is not None: # found it! return subj if pred.head != pred and not is_passive(pred): return find_subj(pred.head, ent_type, passive) # climb up left subtree else: return None def find_obj(pred, ent_type, excl_prepos): """Find closest object in predicates right subtree. Skip prepositional objects if the preposition is in exclude list. Has a filter on organizations.""" for right in pred.rights: obj = bfs(right, ent_type, ['dobj', 'pobj', 'iobj', 'obj', 'obl']) if obj is not None: if obj.dep_ == 'pobj' and obj.head.lemma_.lower() in excl_prepos: # check preposition continue return obj return None def extract_rel_dep(doc, pred_name, pred_synonyms, excl_prepos=[]): for token in doc: if token.pos_ == 'VERB' and token.lemma_ in pred_synonyms: pred = token passive = is_passive(pred) subj = find_subj(pred, 'ORG', passive) if subj is not None: obj = find_obj(pred, 'ORG', excl_prepos) if obj is not None: if passive: # switch roles obj, subj = subj, obj yield ((subj._.ref_n, subj._.ref_t), pred_name, (obj._.ref_n, obj._.ref_t)) text = """Fujitsu said that Schlumberger Ltd has arranged to sell its stake in Fairchild Inc.""" doc = nlp(text) print(*extract_rel_dep(doc, 'sells', ['sell']), sep='\n') text = "Schlumberger Ltd has arranged to sell to Fujitsu its stake in Fairchild Inc." doc = nlp(text) print(*extract_rel_dep(doc, 'sells', ['sell']), sep='\n') displacy.render(doc, style='dep', jupyter=True, options={'compact': False, 'distance': 80}) print("A:", *extract_rel_dep(doc, 'sells', ['sell'])) print("B:", *extract_rel_dep(doc, 'sells', ['sell'], ['to', 'from'])) texts = [ "Fairchild Corp was bought by Fujitsu.", # 1 "Fujitsu, a competitor of NEC Co, acquired Fairchild Inc.", # 2 "Fujitsu is expanding." + "The company made an offer to acquire 80% of Fairchild Inc.", # 3 "Fujitsu plans to acquire 80% of Fairchild Corp.", # 4 "Fujitsu plans not to acquire Fairchild Corp.", # 5 "The competition forced Fujitsu to aquire Fairchild Corp." 
# 6 ] acq_synonyms = ['acquire', 'buy', 'purchase'] for i, text in enumerate(texts): doc = nlp(text) rels = extract_rel_dep(doc, 'acquires', acq_synonyms, ['to', 'from']) print(f'{i+1}:', *rels) ###Output _____no_output_____ ###Markdown Creating the Knowledge Graph **On Colab**: Choose "Runtime"&rarr;"Change Runtime Type"&rarr;"GPU" to benefit from the GPUs. ###Code if spacy.prefer_gpu(): print("Working on GPU.") else: print("No GPU found, working on CPU.") nlp = en_core_web_lg.load() # need to re-create the entity ruler after reloading nlp # because new entity type 'GOV' needs to be added to nlp.vocab entity_ruler = EntityRuler(nlp, patterns=patterns, overwrite_ents=True) pipes = ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver', 'name_resolver', 'neural_coref', 'anaphor_coref', 'norm_names'] for pipe in pipes: nlp.add_pipe(pipe) # recreate matcher - same definition as above for these rules matcher = Matcher(nlp.vocab) subs_synonyms = ['subsidiary', 'unit'] pattern = [{'_': {'ref_t': 'ORG'}}, # subject {'_': {'ref_t': {'NOT_IN': ['ORG']}}, 'POS': {'NOT_IN': ['VERB']}, 'OP': '*'}, {'LOWER': {'IN': subs_synonyms}}, # predicate {'TEXT': 'of'}, {'_': {'ref_t': {'NOT_IN': ['ORG']}}, 'POS': {'NOT_IN': ['VERB']}, 'OP': '*'}, {'_': {'ref_t': 'ORG'}}] # object matcher.add('subsidiary-of', [pattern]) pattern = [{'_': {'ref_t': 'ORG'}}, # subject {'POS': 'PART', 'OP': '?'}, {'LOWER': {'IN': subs_synonyms}}, # predicate {'_': {'ref_t': 'ORG'}}] # object matcher.add('rev-subsidiary-of', [pattern]) ceo_synonyms = ['chairman', 'president', 'director', 'ceo', 'executive'] pattern = [{'ENT_TYPE': 'PERSON'}, {'ENT_TYPE': {'NOT_IN': ['ORG', 'PERSON']}, 'OP': '*'}, {'LOWER': {'IN': ceo_synonyms}}, {'TEXT': 'of'}, {'ENT_TYPE': {'NOT_IN': ['ORG', 'PERSON']}, 'OP': '*'}, {'ENT_TYPE': 'ORG'}] matcher.add('executive-of', [pattern]) pattern = [{'ENT_TYPE': 'ORG'}, {'LOWER': {'IN': ceo_synonyms}}, {'ENT_TYPE': 'PERSON'}] matcher.add('rev-executive-of', [pattern]) def extract_rels(doc): yield from extract_rel_match(doc, matcher) yield from extract_rel_dep(doc, 'acquires', acq_synonyms, ['to', 'from']) yield from extract_rel_dep(doc, 'sells', ['sell'], ['to', 'from']) ###Output _____no_output_____ ###Markdown Testing Relationship Extraction (not in book) ###Code text = """Allied-Signal Inc and Schlumberger Ltd jointly announced that Schlumberger had acquired Allied-Signal's unit Neptune International. """ #text = df.text.loc[19975] text = re.sub(r'\s+', ' ', text).strip() print(*textwrap.wrap(text, 100), sep='\n') print() doc = nlp(text, disable='entity_ruler') #displacy.render(doc, style='ent') print(*extract_rels(doc), sep='\n') displacy.render(doc, style='dep', jupyter=True, options={'compact': False, 'distance': 100}) ###Output _____no_output_____ ###Markdown Extraction of Entities and Relations and Creation of Gephi-File (not in book)Batch-processing for entity extraction with subsequent relation extraction. Takes about 5 minutes, 80% of runtime for NeuralCoref. 
###Code from math import ceil batch_size = 20 batches = ceil(len(df) / batch_size) ### rels = [] for i in tqdm(range(0, len(df), batch_size), total=batches): docs = nlp.pipe(df['text'][i:i+batch_size]) for j, doc in enumerate(docs): rels.extend([(df.index[i+j], *r) for r in extract_rels(doc)]) ###Output _____no_output_____ ###Markdown Creation of the relation data frame including final curation: ###Code # unpack subject and object rels = [(a_id, *subj, pred, *obj) for (a_id, subj, pred, obj) in rels] # create data frame rel_df = pd.DataFrame.from_records(rels, columns=('article_id', 'subj', 'subj_type', 'pred', 'obj', 'obj_type')) # false positives: subject cannot be object rel_df = rel_df.query('subj != obj') # filter entities that were not correctly detected # tokenizer produces "-owned XYZ company" rel_df = rel_df[~rel_df['subj'].str.startswith('-own')] rel_df = rel_df[~rel_df['obj'].str.startswith('-own')] # drop duplicate relations (within an article) rel_df = rel_df.drop_duplicates() # aggregate to produce one record per relation rel_df['article_id'] = rel_df['article_id'].map(lambda a: [a]) rel_df = rel_df.groupby(['subj', 'subj_type', 'pred', 'obj', 'obj_type'])['article_id'] \ .agg(['count', 'sum']) \ .rename(columns={'count': 'freq', 'sum': 'articles'}) \ .reset_index().sort_values('freq', ascending=False) rel_df['articles'] = rel_df['articles'].map(lambda lst: ','.join(list(set([str(a) for a in lst])))) rel_df.head(10) # some statitics rel_df['pred'].value_counts() # try searching for a specific entity search = "Trans World" rel_df[(rel_df.subj.str.lower().str.contains(search.lower()) | rel_df.obj.str.lower().str.contains(search.lower()))] # in fact, TWA acquires and sells parts of USAir according to the messages # look at a specific article text = df['text'][9487] print(*textwrap.wrap(text, 80), sep='\n') ###Output _____no_output_____ ###Markdown To create the NetworkX graph be careful: We need a `MultiDiGraph` here, a directed graph allowing multiple edges between two nodes! ###Code import networkx as nx from networkx import MultiDiGraph graph = MultiDiGraph() for i, row in rel_df.iterrows(): graph.add_node(row['subj'], Type=row['subj_type']) graph.add_node(row['obj'], Type=row['obj_type']) _ = graph.add_edge(row['subj'], row['obj'], Articles=row['articles'], Rel=row['pred']) nx.readwrite.write_gexf(graph, 'knowledge_graph.gexf', encoding='utf-8', prettyprint=True, version='1.2draft') ###Output _____no_output_____ ###Markdown [**Blueprints for Text Analysis Using Python**](https://github.com/blueprints-for-text-analytics-python/blueprints-text) Jens Albrecht, Sidharth Ramachandran, Christian Winkler**If you like the book or the code examples here, please leave a friendly comment on [Amazon.com](https://www.amazon.com/Blueprints-Text-Analytics-Using-Python/dp/149207408X)!** Chapter 12: Building a Knowledge Graph Updated Version for spaCy 3.xYou find the version as printed in the book using spaCy 2.3.2 [here](Knowledge_Graph.ipynb). ###Code import spacy assert spacy.__version__[0] >= '3' ###Output _____no_output_____ ###Markdown We adjusted the this notebook to run with spaCy 3.0.Note, that spaCy 3.0 includes transformer models, which are more accurate than the conventional models. If you go for accuracy in named entity recognition, you should prefer the transformer models. 
See https://spacy.io/universe/project/spacy-transformers**Changes to `nlp.add_pipe`**: https://spacy.io/api/languageadd_pipe "As of v3.0, the Language.add_pipe method doesn’t take callables anymore and instead expects the name of a component factory registered using @Language.component or @Language.factory. It now takes care of creating the component, adds it to the pipeline and returns it."**Changes to `matcher.add`**: https://spacy.io/api/matcheradd "As of spaCy v3.0, Matcher.add takes a list of patterns as the second argument (instead of a variable number of arguments). The on_match callback becomes an optional keyword argument."**NeuralCoref not yet supported in spaCy 3**But planned: https://github.com/huggingface/neuralcoref/issues/295Currently, we cannot import NeuralCoref, so the functions for anaphora resolution are replaced by dummies in this notebook. RemarkThe code in this notebook differs slightly from the printed book. For example we frequently use pretty print (`pp.pprint`) instead of `print` and `tqdm`'s `progress_apply` instead of Pandas' `apply`. Moreover, several layout and formatting commands, like `figsize` to control figure size or subplot commands are removed in the book.You may also find some lines marked with three hashes . Those are not in the book as well as they don't contribute to the concept.All of this is done to simplify the code in the book and put the focus on the important parts instead of formatting. SetupSet directory locations. If working on Google Colab: copy files and install required libraries.**On Colab:** Use runtime **with GPU (Menu&rarr;Runtime&rarr;Change runtime type)** for better performance **before** you start this notebook. ###Code import sys, os ON_COLAB = 'google.colab' in sys.modules if ON_COLAB: GIT_ROOT = 'https://github.com/blueprints-for-text-analytics-python/blueprints-text/raw/master' os.system(f'wget {GIT_ROOT}/ch12/setup.py') %run -i setup.py ###Output _____no_output_____ ###Markdown Load Python SettingsCommon imports, defaults for formatting in Matplotlib, Pandas etc. ###Code %run "$BASE_DIR/settings.py" %reload_ext autoreload %autoreload 2 %config InlineBackend.figure_format = 'png' # to print output of all statements and not just the last from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" sys.path.append(BASE_DIR + '/packages') # to import blueprints package from blueprints.knowledge import display_ner, reset_pipeline, print_dep_tree, alias_lookup ###Output _____no_output_____ ###Markdown What you'll learn and what we build Knowledge Graphs Blueprint to Query Wikidata for Aliases not in BookBelow you find an example of what you can do with public ontologies like Wikidata. Here, we defined a SPARQL query to retrieve the names, aliases and URLs of all entities of type "United States federal executive department" (https://www.wikidata.org/wiki/Q910252). ###Code # pip install sparqlwrapper # https://rdflib.github.io/sparqlwrapper/ import sys from SPARQLWrapper import SPARQLWrapper, JSON endpoint_url = "https://query.wikidata.org/sparql" query = """ SELECT ?org ?orgLabel ?aliases ?urlLabel ?country ?countryLabel WITH { SELECT ?org (group_concat(distinct ?alias;separator=",") as ?aliases) WHERE { ?org wdt:P31 wd:Q910252. # org is(P31) US department (Q910252) ?org skos:altLabel ?alias. filter(lang(?alias)="en") } GROUP BY ?org } AS %i WHERE { include %i ?org wdt:P856 ?url; # has official website (P856) wdt:P17 ?country. 
# has country (P17) SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". } } ORDER BY ?orgLabel """ def sparql_df(endpoint_url, query): user_agent = "Wikidata-Service Python/%s.%s" % (sys.version_info[0], sys.version_info[1]) sparql = SPARQLWrapper(endpoint_url, agent=user_agent) sparql.setQuery(query) sparql.setReturnFormat(JSON) results = sparql.query().convert() columns = results['head']['vars'] rows = [] for result in results["results"]["bindings"]: row = {} for col in result: row[col] = result[col]['value'] rows.append(row) return pd.DataFrame.from_records(rows, columns=columns) wd_df = sparql_df(endpoint_url, query) # rename columns wd_df.columns = ['org_id', 'org', 'aliases', 'url', 'country_id', 'country'] wd_df['org_id'] = wd_df['org_id'].str.replace('http://www.wikidata.org/entity/', '') wd_df['country_id'] = wd_df['country_id'].str.replace('http://www.wikidata.org/entity/', '') wd_df['aliases'] = wd_df['aliases'].str.split(',') wd_df.head(10) ###Output _____no_output_____ ###Markdown Building a Knowledge Graph Introducing the Data Set ###Code import nltk nltk.download('reuters') ###Output _____no_output_____ ###Markdown Data Preparation of NLTK Reuters Corpus (not in book)This section contains the steps how to create the data frame for some of the examples. ###Code from nltk.corpus import reuters # List of documents documents = reuters.fileids() print(str(len(documents)) + " documents") print(str(len(reuters.categories())) + " categories:") print(reuters.categories()[:10] + ['...']) print(reuters.readme()[:200]) ###Output _____no_output_____ ###Markdown Each article is stored as a separated file. The data files are identified by a file ID of the form "train/1234" or "test/5678". We first create a data frame with the `fileid` column and then load the raw text for each ID into a second column. Finally, as we don't care whether it's train or test, we just the number from the file ID and use it as the index of our data frame. ###Code from nltk.corpus import reuters # create fileid column df = pd.DataFrame(reuters.fileids("acq"), columns=['fileid']) # load raw texts df['raw'] = df['fileid'].progress_map(lambda f: reuters.raw(f)) # set index to numeric id df.index = df['fileid'].map(lambda f: int(f.split('/')[1])) df.index.name = None df = df.drop(columns=['fileid']).sort_index() df.sample(3, random_state=12) ###Output _____no_output_____ ###Markdown As we see from the example, we will still need some data cleaning before we can expect to get reasonably good results during named entity recognition. First, we separate headlines from the actual news text by splitting at the first newline. ###Code df[['headline', 'raw_text']] = df.progress_apply(lambda row: row['raw'].split('\n', 1), axis='columns', result_type='expand') ###Output _____no_output_____ ###Markdown Now we use the adapted data cleaning blueprint from Chapter 4 for to remove some disturbing artifacts, substitute some abbreviations (like "dlr" for dollar) and repair some typos. 
###Markdown
[**Blueprints for Text Analysis Using Python**](https://github.com/blueprints-for-text-analytics-python/blueprints-text)
Jens Albrecht, Sidharth Ramachandran, Christian Winkler
Chapter 12: Building a Knowledge Graph
Updated Version for spaCy 3.x
You find the version as printed in the book using spaCy 2.3.2 [here](Knowledge_Graph.ipynb).
###Code
import spacy
assert spacy.__version__[0] >= '3'
###Output
_____no_output_____
###Markdown
We adjusted this notebook to run with spaCy 3.0. Note that spaCy 3.0 includes transformer models, which are more accurate than the conventional models. If you go for accuracy in named entity recognition, you should prefer the transformer models. See https://spacy.io/universe/project/spacy-transformers
**Changes to `nlp.add_pipe`**: https://spacy.io/api/languageadd_pipe
"As of v3.0, the Language.add_pipe method doesn’t take callables anymore and instead expects the name of a component factory registered using @Language.component or @Language.factory.
It now takes care of creating the component, adds it to the pipeline and returns it."**Changes to `matcher.add`**: https://spacy.io/api/matcheradd "As of spaCy v3.0, Matcher.add takes a list of patterns as the second argument (instead of a variable number of arguments). The on_match callback becomes an optional keyword argument."**NeuralCoref not yet supported in spaCy 3**But planned: https://github.com/huggingface/neuralcoref/issues/295Currently, we cannot import NeuralCoref, so the functions for anaphora resolution are replaced by dummies in this notebook. RemarkThe code in this notebook differs slightly from the printed book. For example we frequently use pretty print (`pp.pprint`) instead of `print` and `tqdm`'s `progress_apply` instead of Pandas' `apply`. Moreover, several layout and formatting commands, like `figsize` to control figure size or subplot commands are removed in the book.You may also find some lines marked with three hashes . Those are not in the book as well as they don't contribute to the concept.All of this is done to simplify the code in the book and put the focus on the important parts instead of formatting. SetupSet directory locations. If working on Google Colab: copy files and install required libraries.**On Colab:** Use runtime **with GPU (Menu&rarr;Runtime&rarr;Change runtime type)** for better performance **before** you start this notebook. ###Code import sys, os ON_COLAB = 'google.colab' in sys.modules if ON_COLAB: GIT_ROOT = 'https://github.com/blueprints-for-text-analytics-python/blueprints-text/raw/master' os.system(f'wget {GIT_ROOT}/ch12/setup.py') %run -i setup.py ###Output _____no_output_____ ###Markdown Load Python SettingsCommon imports, defaults for formatting in Matplotlib, Pandas etc. ###Code %run "$BASE_DIR/settings.py" %reload_ext autoreload %autoreload 2 %config InlineBackend.figure_format = 'png' # to print output of all statements and not just the last from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" sys.path.append(BASE_DIR + '/packages') # to import blueprints package from blueprints.knowledge import display_ner, reset_pipeline, print_dep_tree, alias_lookup ###Output _____no_output_____ ###Markdown What you'll learn and what we build Knowledge Graphs Blueprint to Query Wikidata for Aliases not in BookBelow you find an example of what you can do with public ontologies like Wikidata. Here, we defined a SPARQL query to retrieve the names, aliases and URLs of all entities of type "United States federal executive department" (https://www.wikidata.org/wiki/Q910252). ###Code # pip install sparqlwrapper # https://rdflib.github.io/sparqlwrapper/ import sys from SPARQLWrapper import SPARQLWrapper, JSON endpoint_url = "https://query.wikidata.org/sparql" query = """ SELECT ?org ?orgLabel ?aliases ?urlLabel ?country ?countryLabel WITH { SELECT ?org (group_concat(distinct ?alias;separator=",") as ?aliases) WHERE { ?org wdt:P31 wd:Q910252. # org is(P31) US department (Q910252) ?org skos:altLabel ?alias. filter(lang(?alias)="en") } GROUP BY ?org } AS %i WHERE { include %i ?org wdt:P856 ?url; # has official website (P856) wdt:P17 ?country. # has country (P17) SERVICE wikibase:label { bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en". 
} } ORDER BY ?orgLabel """ def sparql_df(endpoint_url, query): user_agent = "Wikidata-Service Python/%s.%s" % (sys.version_info[0], sys.version_info[1]) sparql = SPARQLWrapper(endpoint_url, agent=user_agent) sparql.setQuery(query) sparql.setReturnFormat(JSON) results = sparql.query().convert() columns = results['head']['vars'] rows = [] for result in results["results"]["bindings"]: row = {} for col in result: row[col] = result[col]['value'] rows.append(row) return pd.DataFrame.from_records(rows, columns=columns) wd_df = sparql_df(endpoint_url, query) # rename columns wd_df.columns = ['org_id', 'org', 'aliases', 'url', 'country_id', 'country'] wd_df['org_id'] = wd_df['org_id'].str.replace('http://www.wikidata.org/entity/', '') wd_df['country_id'] = wd_df['country_id'].str.replace('http://www.wikidata.org/entity/', '') wd_df['aliases'] = wd_df['aliases'].str.split(',') wd_df.head(10) ###Output _____no_output_____ ###Markdown Building a Knowledge Graph Introducing the Data Set ###Code import nltk nltk.download('reuters') ###Output _____no_output_____ ###Markdown Data Preparation of NLTK Reuters Corpus (not in book)This section contains the steps how to create the data frame for some of the examples. ###Code from nltk.corpus import reuters # List of documents documents = reuters.fileids() print(str(len(documents)) + " documents") print(str(len(reuters.categories())) + " categories:") print(reuters.categories()[:10] + ['...']) print(reuters.readme()[:200]) ###Output _____no_output_____ ###Markdown Each article is stored as a separated file. The data files are identified by a file ID of the form "train/1234" or "test/5678". We first create a data frame with the `fileid` column and then load the raw text for each ID into a second column. Finally, as we don't care whether it's train or test, we just the number from the file ID and use it as the index of our data frame. ###Code from nltk.corpus import reuters # create fileid column df = pd.DataFrame(reuters.fileids("acq"), columns=['fileid']) # load raw texts df['raw'] = df['fileid'].progress_map(lambda f: reuters.raw(f)) # set index to numeric id df.index = df['fileid'].map(lambda f: int(f.split('/')[1])) df.index.name = None df = df.drop(columns=['fileid']).sort_index() df.sample(3, random_state=12) ###Output _____no_output_____ ###Markdown As we see from the example, we will still need some data cleaning before we can expect to get reasonably good results during named entity recognition. First, we separate headlines from the actual news text by splitting at the first newline. ###Code df[['headline', 'raw_text']] = df.progress_apply(lambda row: row['raw'].split('\n', 1), axis='columns', result_type='expand') ###Output _____no_output_____ ###Markdown Now we use the adapted data cleaning blueprint from Chapter 4 for to remove some disturbing artifacts, substitute some abbreviations (like "dlr" for dollar) and repair some typos. 
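Before the full cleaning blueprint in the next cell, here is a small self-contained sketch (not from the book) of the abbreviation substitution it performs. The `\b` word boundaries and the captured optional plural `s` are the details that are easy to get wrong.
###Code
# minimal sketch (not part of the book's blueprint): expand an abbreviation
# while keeping an optional plural "s" via a captured group and backreference
import re

def expand_abbreviation(text, abbrev, full):
    # \b restricts the match to whole words, (s?) carries the plural over
    return re.sub(rf'\b{abbrev}(s?)\b', rf'{full}\1', text, flags=re.I)

print(expand_abbreviation("20 mln dlrs in cash", "dlr", "dollar"))
print(expand_abbreviation("one mln shares", "mln", "million"))
###Output
_____no_output_____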
###Code def clean(text): text = text.replace('&lt;','<') # html escape text = re.sub(r'[<>]', '"', text) # quotation marks instead of <> text = re.sub(r'[ ]*"[A-Z\.]+"', '', text) # drop stock symbols text = re.sub(r'[ ]*\([A-Z\.]+\)', '', text) # drop stock symbols text = re.sub(r'\bdlr(s?)\b', r'dollar\1', text, flags=re.I) text = re.sub(r'\bmln(s?)\b', r'million\1', text, flags=re.I) text = re.sub(r'\bpct\b', r'%', text, flags=re.I) # normalize INC to Inc text = re.sub(r'\b(Co|Corp|Inc|Plc|Ltd)\b', lambda m: m.expand(r'\1').capitalize(), text, flags=re.I) text = re.sub(r'"', r'', text) # quotation marks text = re.sub(r'\s+', ' ', text) # multiple whitespace by one text = re.sub(r'acquisiton', 'acquisition', text) # typo text = re.sub(r'Nippon bLife', 'Nippon Life', text) # typo text = re.sub(r'COMSAT.COMSAT', 'COMSAT. COMSAT', text) # missing space at end of sentence #text = re.sub(r'Audio/Video', 'Audio-Video', text) # missing space at end of sentence return text.strip() ###Output _____no_output_____ ###Markdown So let's have a look at the result of our data cleaning steps : ###Code # that's what the substitutions do texts = [ """Trafalgar House Plc &lt;TRAF.L> said it has\n acquired the entire share capital of &lt;Capital Homes Inc> of the\n U.S. For 20 mln dlrs in cash.""", """Equiticorp Holdings Ltd &lt;EQUW.WE> now owns\n or has received acceptances representing 59.93 pct of the\n issued ordinary share capital of Guinness Peat Group Plc\n &lt;GNSP.L>, Equiticorp said in a statement.""", """Computer Terminal Systems Inc said it has completed the sale of 200,000 shares of its common stock, and warrants to acquire an additional one mln shares, to "Sedio N.V." of Lugano, Switzerland for 50,000 dlrs.""", """North American Group Ltd said it has a definitive agreement to buy 100 pct of Pioneer Business Group Inc of Atlanta.""" ] for text in texts: print(clean(text), end="\n\n") ###Output _____no_output_____ ###Markdown We apply it to the `raw_text` and create a new `text` column: ###Code df['text'] = df['raw_text'].progress_map(clean) df['headline'] = df['headline'].progress_map(clean) ###Output _____no_output_____ ###Markdown The newly created column `text` contains the cleaned articles. But we have one disturbing artifact left in the data: a few articles, like the second one in the sample above, consist only of capital letters. In fact, here the raw text is identical to the headlines. We finally drop those because named entity recognition will not yield useful results on such a text. ###Code # we will drop these articles with only capital letters df[df['raw_text'].map(lambda t: t.isupper())][['headline', 'raw_text']].head(3) # drop articles with only capital letters df = df[df['raw_text'].map(lambda t: not t.isupper())] # this is our clean data set df[['headline', 'text']].sample(3, random_state=12) pd.options.display.max_colwidth = 200 ###Output _____no_output_____ ###Markdown Book section continues ... Named-Entity Recognition ###Code nlp = spacy.load('en_core_web_sm') print(*nlp.pipeline, sep='\n') text = """Hughes Tool Co Chairman W.A. Kistler said its merger with Baker International Corp was still under consideration. We hope to come soon to a mutual agreement, Kistler said. 
The directors of Baker filed a law suit in Texas to force Hughes to complete the merger.""" text = re.sub(r'\s+', ' ', text).strip() ### doc = nlp(text) print(*[(e.text, e.label_) for e in doc.ents], sep=' ') from spacy import displacy displacy.render(doc, style='ent') ###Output _____no_output_____ ###Markdown Blueprint: Rule-based Named-Entity Recognition ###Code reset_pipeline(nlp, pipes=[]) from spacy.pipeline import EntityRuler departments = ['Justice', 'Transportation'] patterns = [{"label": "GOV", "pattern": [{"TEXT": "U.S.", "OP": "?"}, {"TEXT": "Department"}, {"TEXT": "of"}, {"TEXT": {"IN": departments}, "ENT_TYPE": "ORG"}]}, {"label": "GOV", "pattern": [{"TEXT": "U.S.", "OP": "?"}, {"TEXT": {"IN": departments}, "ENT_TYPE": "ORG"}, {"TEXT": "Department"}]}, {"label": "GOV", "pattern": [{"TEXT": "Securities"}, {"TEXT": "and"}, {"TEXT": "Exchange"}, {"TEXT": "Commission"}]}] # not in book, but useful if you modify the rules if nlp.has_pipe('entity_ruler'): nlp.remove_pipe('entity_ruler') entity_ruler = EntityRuler(nlp, patterns=patterns, overwrite_ents=True) nlp.add_pipe('entity_ruler') text = """Justice Department is an alias for the U.S. Department of Justice. Department of Transportation and the Securities and Exchange Commission are government organisations, but the Sales Department is not.""" #text = re.sub(r'\s+', ' ', text).strip() ### doc = nlp(text) # print(*[([t.text for t in e], e.label_) for e in doc.ents], sep='\n') ### displacy.render(doc, style='ent', jupyter=True) ###Output _____no_output_____ ###Markdown Blueprint: Normalizing Named-Entities ###Code reset_pipeline(nlp, ['entity_ruler']) text = "Baker International's shares climbed on the New York Stock Exchange." doc = nlp(text) print(*[([t.text for t in e], e.label_) for e in doc.ents], sep='\n') from spacy.tokens import Span from spacy import Language @Language.component("norm_entities") def norm_entities(doc): ents = [] for ent in doc.ents: if ent[0].pos_ == "DET": # leading article ent = Span(doc, ent.start+1, ent.end, label=ent.label) if len(ent) > 0: if ent[-1].pos_ == "PART": # trailing particle like 's ent = Span(doc, ent.start, ent.end-1, label=ent.label) ents.append(ent) doc.ents = tuple(ents) return doc nlp.add_pipe("norm_entities") doc = nlp(text) print(*[([t.text for t in e], e.label_) for e in doc.ents], sep='\n') # not in book displacy.render(doc, style='ent', jupyter=True) ###Output _____no_output_____ ###Markdown Merging Entity Tokens ###Code from spacy.pipeline import merge_entities if nlp.has_pipe('merge_entities'): ### _ = nlp.remove_pipe('merge_entities') ### nlp.add_pipe('merge_entities') doc = nlp(text) print(*[(t.text, t.ent_type_) for t in doc if t.ent_type_ != '']) ###Output _____no_output_____ ###Markdown Testing the NER Pipeline on Sample Data (not in book)Take random samples from the text and display the result. 
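Besides inspecting individual articles as in the next cell, a quick aggregate count of entity labels (not in the book) can reveal systematic problems, for example a suspiciously low number of ORG entities. The sketch assumes the `nlp` pipeline and the `df` data frame from the cells above.
###Code
# aggregate sanity check (not from the book): count entity labels over a small sample
from collections import Counter

sample_texts = df['text'].sample(20, random_state=42)
label_counts = Counter()
for doc in nlp.pipe(sample_texts):
    label_counts.update(ent.label_ for ent in doc.ents)
label_counts.most_common()
###Output
_____no_output_____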
###Code reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities']) i = df['text'].sample(1).index[0] print("Text Number:", i) text = df['text'].loc[i][:600] text = re.sub(r'\s+', ' ', text.strip()) print(text) doc = nlp(text) displacy.render(doc, style='ent', jupyter=True) print(*[([t.text for t in e], e.label_) for e in doc.ents], sep='\n') pd.options.display.max_rows = 500 # blueprint function to show tokens with entity attributes display_ner(doc, include_punct=True).query('ent_type != ""') pd.options.display.max_rows = 60 ###Output _____no_output_____ ###Markdown Coreference Resolution Blueprint: Using spaCy's Token Extensions ###Code # not in book, but usefule if you modify the extension from spacy.tokens import Token if Token.has_extension('ref_n'): _ = Token.remove_extension('ref_n') if Token.has_extension('ref_t'): _ = Token.remove_extension('ref_t') if Token.has_extension('ref_t_'): _ = Token.remove_extension('ref_t_') from spacy.tokens import Token Token.set_extension('ref_n', default='') Token.set_extension('ref_t', default='') @Language.component("init_coref") def init_coref(doc): for e in doc.ents: if e.label_ in ['ORG', 'GOV', 'PERSON']: e[0]._.ref_n, e[0]._.ref_t = e.text, e.label_ return doc ###Output _____no_output_____ ###Markdown Blueprint: Alias Resolution ###Code from blueprints.knowledge import alias_lookup for token in ['Transportation Department', 'DOT', 'SEC', 'TWA']: print(token, ':', alias_lookup[token]) reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref']) @Language.component("alias_resolver") def alias_resolver(doc): """Lookup aliases and store result in ref_t, ref_n""" for ent in doc.ents: token = ent[0].text if token in alias_lookup: a_name, a_type = alias_lookup[token] ent[0]._.ref_n, ent[0]._.ref_t = a_name, a_type return propagate_ent_type(doc) @Language.component("propagate_ent_type") def propagate_ent_type(doc): """propagate entity type stored in ref_t""" ents = [] for e in doc.ents: if e[0]._.ref_n != '': # if e is a coreference e = Span(doc, e.start, e.end, label=e[0]._.ref_t) ents.append(e) doc.ents = tuple(ents) return doc nlp.add_pipe('alias_resolver') from blueprints.knowledge import display_ner text = """The deal of Trans World Airlines is under investigation by the U.S. Department of Transportation. The Transportation Department will block the deal of TWA.""" text = re.sub(r'\s+', ' ', text).strip() ### doc = nlp(text) display_ner(doc).query("ref_n != ''")[['text', 'ent_type', 'ref_n', 'ref_t']] ###Output _____no_output_____ ###Markdown Blueprint: Resolving Name Variations ###Code reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver']) text = """ Hughes Tool Co Chairman W.A. Kistler said its merger with Baker International Corp. was still under consideration. We hope to come to a mutual agreement, Kistler said. Baker will force Hughes to complete the merger. 
""" text = re.sub(r'\s+', ' ', text).strip() ### doc = nlp(text) displacy.render(doc, style='ent', jupyter=True) def name_match(m1, m2): m2 = re.sub(r'[()\.]', '', m2) # ignore parentheses and dots m2 = r'\b' + m2 + r'\b' # \b marks word boundary m2 = re.sub(r'\s+', r'\\b.*\\b', m2) return re.search(m2, m1, flags=re.I) is not None @Language.component("name_resolver") def name_resolver(doc): """create name-based reference to e1 as primary mention of e2""" ents = [e for e in doc.ents if e.label_ in ['ORG', 'PERSON']] for i, e1 in enumerate(ents): for e2 in ents[i+1:]: if name_match(e1[0]._.ref_n, e2[0].text): e2[0]._.ref_n = e1[0]._.ref_n e2[0]._.ref_t = e1[0]._.ref_t return propagate_ent_type(doc) nlp.add_pipe('name_resolver') doc = nlp(text) displacy.render(doc, style='ent', jupyter=True) display_ner(doc).query("ref_n != ''")[['text', 'ent_type', 'ref_n', 'ref_t']] ###Output _____no_output_____ ###Markdown Testing Name Coreference Resolution Sample Data (not in book)Take random samples from the text and display the result. You may find examples where the resolution is not working correctly. We have put the emphasis on the simplicity of rules, so there will be cases in which they don't work. ###Code reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver', 'name_resolver']) # not in the book: # pick random examples to test the string matching i = df['text'].sample(1).index[0] i = 10 print("Text Number:", i) text = df['text'].loc[i]#[:300] # print(text) doc = nlp(text) displacy.render(doc, style='ent', jupyter=True) display_ner(doc).query("ref_n != ''") ###Output _____no_output_____ ###Markdown Blueprint: Anaphora Resolution with NeuralCoref ###Code text = """Hughes Tool Co said its merger with Baker was still under consideration. Hughes had a board meeting today. W.A. Kistler mentioned that the company hopes for a mutual agreement. He is reasonably confident.""" text = re.sub(r'\s+', ' ', text).strip() ### reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver', 'name_resolver']) # NEXT CODE BLOCKS ARE COMMENTED UNTIL NEURALCOREF SUPPORTS SPACY 3! 
# from neuralcoref import NeuralCoref # neural_coref = NeuralCoref(nlp.vocab, greedyness=0.45) # nlp.add_pipe(neural_coref, name='neural_coref') # doc = nlp(text) # print(*doc._.coref_clusters, sep='\n') ###Output _____no_output_____ ###Markdown Not in the book: Try the visualization of NeuralCoref!https://huggingface.co/coref/?text=Hughes%20Tool%20Co%20said%20its%20merger%20with%20Baker%20was%20still%20under%20consideration.%20 ###Code @Language.component("anaphor_coref") def anaphor_coref(doc): """anaphora resolution""" for token in doc: # if token is coref and not already dereferenced if token._.in_coref and token._.ref_n == '': ref_span = token._.coref_clusters[0].main # get referred span if len(ref_span) <= 3: # consider only short spans for ref in ref_span: # find first dereferenced entity if ref._.ref_n != '': token._.ref_n = ref._.ref_n token._.ref_t = ref._.ref_t break return doc # if nlp.has_pipe('anaphor_coref'): ### # nlp.remove_pipe('anaphor_coref') ### # nlp.add_pipe('anaphor_coref') # doc = nlp(text) # display_ner(doc).query("ref_n != ''") \ # [['text', 'ent_type', 'main_coref', 'ref_n', 'ref_t']] # Dummy components for neural_coref and anaphor_coref # to keep the remaining code working @Language.component("neural_coref") def neural_coref(doc): return doc @Language.component("anaphor_coref") def anaphor_coref(doc): return doc ###Output _____no_output_____ ###Markdown Name Normalization ###Code def strip_legal_suffix(text): return re.sub(r'(\s+and)?(\s+|\b(Co|Corp|Inc|Plc|Ltd)\b\.?)*$', '', text) print(strip_legal_suffix('Hughes Tool Co')) @Language.component("norm_names") def norm_names(doc): for t in doc: if t._.ref_n != '' and t._.ref_t in ['ORG']: t._.ref_n = strip_legal_suffix(t._.ref_n) if t._.ref_n == '': t._.ref_t = '' return doc nlp.add_pipe("norm_names") ###Output _____no_output_____ ###Markdown Entity Linking Testing Coreference Resolution (not in book)Not in the book, but a good demonstration of what works good and what doesn't work, yet. ###Code # recreate pipeline reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver', 'name_resolver', 'neural_coref', 'anaphor_coref', 'norm_names']) # pick random examples and test i = df['text'].sample(1).index[0] i = 2948 # 1862, 1836,2948,7650,3013,2950,3095 print("Text Number:", i) text = df['text'].loc[i][:500] print(text) doc = nlp(text) displacy.render(doc, style='ent', jupyter=True) # display_ner(doc).query("ref_n != ''")[['text', 'ent_type', 'main_coref', 'ref_n', 'ref_t']] display_ner(doc).query("ref_n != ''")[['text', 'ent_type', 'ref_n', 'ref_t']] ###Output _____no_output_____ ###Markdown Blueprint: Creating a Cooccurence Graph **Largest connected component of the cooccurrence graph generated from the Reuters corpus** The visualization was prepared with the help of [Gephi](https://gephi.org/). 
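Before extracting cooccurrences from the real articles in the following cells, a tiny self-contained toy example (not from the book) makes the counting idea concrete: within each document, every unordered pair of distinct entities counts as one cooccurrence.
###Code
# toy example (not from the book): pairwise cooccurrences over two fictitious documents
from itertools import combinations
from collections import Counter

toy_docs = [
    {"Fujitsu", "Fairchild", "Schlumberger"},  # entities found in document 1
    {"Fujitsu", "Fairchild"},                  # entities found in document 2
]

pair_counts = Counter()
for ents in toy_docs:
    # sorting makes the pairs unordered, i.e. (A, B) and (B, A) count as the same pair
    pair_counts.update(combinations(sorted(ents), 2))
pair_counts.most_common()
###Output
_____no_output_____
###Markdown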
Extracting Cooccurrences from a Document ###Code from itertools import combinations def extract_coocs(doc, include_types): ents = set([(e[0]._.ref_n, e[0]._.ref_t) for e in doc.ents if e[0]._.ref_t in include_types]) yield from combinations(sorted(ents), 2) reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver', 'name_resolver', 'neural_coref', 'anaphor_coref', 'norm_names']) batch_size = 100 batches = math.ceil(len(df)/batch_size) ### coocs = [] for i in tqdm(range(0, len(df), batch_size), total=batches): docs = nlp.pipe(df['text'][i:i+batch_size], disable=['neural_coref', 'anaphor_coref']) for j, doc in enumerate(docs): try: coocs.extend([(df.index[i+j], *c) for c in extract_coocs(doc, ['ORG', 'GOV'])]) except: print(f"Index {i+j}") print(df['text'][i+j][0:100]) raise print(*coocs[:3], sep='\n') coocs = [([id], *e1, *e2) for (id, e1, e2) in coocs] cooc_df = pd.DataFrame.from_records(coocs, columns=('article_id', 'ent1', 'type1', 'ent2', 'type2')) cooc_df = cooc_df.groupby(['ent1', 'type1', 'ent2', 'type2'])['article_id'] \ .agg(['count', 'sum']) \ .rename(columns={'count': 'freq', 'sum': 'articles'}) \ .reset_index().sort_values('freq', ascending=False) cooc_df['articles'] = cooc_df['articles'].map( lambda lst: ','.join([str(a) for a in lst[:5]])) cooc_df.head(3) ###Output _____no_output_____ ###Markdown Visualizing the Graph with Gephi ###Code import networkx as nx graph = nx.from_pandas_edgelist( cooc_df[['ent1', 'ent2', 'articles', 'freq']] \ .query('freq > 3').rename(columns={'freq': 'weight'}), source='ent1', target='ent2', edge_attr=True) nx.readwrite.write_gexf(graph, 'cooc.gexf', encoding='utf-8', prettyprint=True, version='1.2draft') ###Output _____no_output_____ ###Markdown Visualizing the Graph with NetworkX (not in book)We can also use NetworkX for drawing, it's just not that nice. By executing the code below you will see more nodes than in the book, where we manually removed several nodes for the sake of clarity. ###Code # identify the greatest component (connected subgraph) # and plot only that one giant_component = sorted(nx.connected_components(graph), key=len, reverse=True) graph = graph.subgraph(giant_component[0]) pos = nx.kamada_kawai_layout(graph, weight='weight') # pos = nx.fruchterman_reingold_layout(graph, weight='weight') # pos = nx.circular_layout(graph) _ = plt.figure(figsize=(20, 20)) nx.draw(graph, pos, node_size=1000, node_color='skyblue', alpha=0.8, with_labels = True) plt.title('Graph Visualization', size=15) for (node1,node2,data) in graph.edges(data=True): width = data['weight'] _ = nx.draw_networkx_edges(graph,pos, edgelist=[(node1, node2)], width=width, edge_color='#505050', alpha=0.5) plt.show() ###Output _____no_output_____ ###Markdown Blueprint: Identifying Acronyms (not in book)It is very easy to generate a very good list of suggestions for acronyms if you search for frequent cooccurrences of acronyms. To find possible acronyms in the cooccurrence data frame, we look for all tuples that have an acronym (all capital letters) either as source or as target. As additional conditions, we require that the first letter in both is the same and the combination exists more than once. 
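The name-based part of this heuristic can also be expressed as a small standalone function (not in the book), which is convenient for unit tests; the frequency condition is applied on the data frame in the next cell.
###Code
# standalone sketch (not from the book) of the name-based acronym conditions:
# one of the two names must be all caps and both must share the first letter
def could_be_acronym_pair(name1, name2):
    return (name1.isupper() or name2.isupper()) and name1[:1] == name2[:1]

for pair in [("TWA", "Trans World Airlines"),
             ("SEC", "Securities and Exchange Commission"),
             ("IBM", "Texas Instruments")]:
    print(pair, could_be_acronym_pair(*pair))
###Output
_____no_output_____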
###Code reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'name_resolver', 'norm_names']) # no alias resolver batch_size = 100 batches = math.ceil(len(df)/batch_size) ### coocs = [] for i in tqdm(range(0, len(df), batch_size), total=batches): docs = nlp.pipe(df['text'][i:i+batch_size]) for j, doc in enumerate(docs): coocs.extend([(df.index[i+j], *c) for c in extract_coocs(doc, ['ORG', 'GOV'])]) coocs = [([id], *e1, *e2) for (id, e1, e2) in coocs] cooc_df = pd.DataFrame.from_records(coocs, columns=('article_id', 'ent1', 'type1', 'ent2', 'type2')) cooc_df = cooc_df.groupby(['ent1', 'ent2'])['article_id'] \ .agg(['count']).rename(columns={'count': 'freq'}) \ .reset_index().sort_values('freq', ascending=False) acro_pattern = (cooc_df['ent1'].str.isupper() | cooc_df['ent2'].str.isupper()) & \ (cooc_df['ent1'].str[:1] == cooc_df['ent2'].str[:1]) & \ (cooc_df['freq'] > 1) print(len(cooc_df[acro_pattern])) cooc_df[acro_pattern][:10] ###Output _____no_output_____ ###Markdown For our corpus, this yields about 40 potential acronyms.We save them to a file: ###Code # export to csv cooc_df[acro_pattern][['ent1', 'ent2']] \ .sort_values(['ent1', 'ent2']) \ .to_csv('possible_acronyms.txt', index=False) ###Output _____no_output_____ ###Markdown This file has to be curated manually. After cleaning, we load the remaining acronyms and convert them to a dictionary: ###Code # curate manually the csv acro_df = pd.read_csv('possible_acronyms.txt') acro_df.set_index('ent1')['ent2'].to_dict() ###Output _____no_output_____ ###Markdown We took this list, and curated it to create a dictionary that maps acronyms to their long names. It is provided in the blueprints package for this chapter and part of `alias_lookup`. Here are some example entries: ###Code from blueprints.knowledge import _acronyms for acro in ['TWA', 'UCPB', 'SEC', 'DOT']: print(acro, ' --> ', alias_lookup[acro]) ###Output _____no_output_____ ###Markdown Relation Extraction Blueprint: Relation Extraction by Phrase Matching ###Code # use large model, otherwise the examples look different! 
# to make it work on Colab, we need to import the model directly # usually you would use nlp = spacy.load('en_core_web_lg') import en_core_web_lg nlp = en_core_web_lg.load() # need to re-create the entity ruler after reloading nlp # because new entity type 'GOV' needs to be added to nlp.vocab entity_ruler = EntityRuler(nlp, patterns=patterns, overwrite_ents=True) # recreate pipeline reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver', 'name_resolver', 'neural_coref', 'anaphor_coref', 'norm_names']) text = """Fujitsu plans to acquire 80% of Fairchild Corp, an industrial unit of Schlumberger.""" text = re.sub('\s+', ' ', text).strip() ### doc = nlp(text) displacy.render(doc, style='ent', jupyter=True) from spacy.matcher import Matcher matcher = Matcher(nlp.vocab) acq_synonyms = ['acquire', 'buy', 'purchase'] pattern = [{'_': {'ref_t': 'ORG'}}, # subject {'_': {'ref_t': {'NOT_IN': ['ORG']}}, 'OP': '*'}, {'POS': 'VERB', 'LEMMA': {'IN': acq_synonyms}}, {'_': {'ref_t': {'NOT_IN': ['ORG']}}, 'OP': '*'}, {'_': {'ref_t': 'ORG'}}] # object matcher.add('acquires', [pattern]) subs_synonyms = ['subsidiary', 'unit'] pattern = [{'_': {'ref_t': 'ORG'}}, # subject {'_': {'ref_t': {'NOT_IN': ['ORG']}}, 'POS': {'NOT_IN': ['VERB']}, 'OP': '*'}, {'LOWER': {'IN': subs_synonyms}}, {'TEXT': 'of'}, {'_': {'ref_t': {'NOT_IN': ['ORG']}}, 'POS': {'NOT_IN': ['VERB']}, 'OP': '*'}, {'_': {'ref_t': 'ORG'}}] # object matcher.add('subsidiary-of', [pattern]) def extract_rel_match(doc, matcher): for sent in doc.sents: for match_id, start, end in matcher(sent): span = sent[start:end] # matched span pred = nlp.vocab.strings[match_id] # rule name subj, obj = span[0], span[-1] if pred.startswith('rev-'): # reversed relation subj, obj = obj, subj pred = pred[4:] yield ((subj._.ref_n, subj._.ref_t), pred, (obj._.ref_n, obj._.ref_t)) pattern = [{'_': {'ref_t': 'ORG'}}, # subject {'LOWER': {'IN': subs_synonyms}}, # predicate {'_': {'ref_t': 'ORG'}}] # object matcher.add('rev-subsidiary-of', [pattern]) text = """Fujitsu plans to acquire 80% of Fairchild Corp, an industrial unit of Schlumberger. The Schlumberger unit Fairchild Corp received an offer.""" text = re.sub('\s+', ' ', text) ### doc = nlp(text) print(*extract_rel_match(doc, matcher), sep='\n') text = "Fairchild Corp was acquired by Fujitsu." print(*extract_rel_match(nlp(text), matcher), sep='\n') text = "Fujitsu, a competitor of NEC, acquired Fairchild Corp." print(*extract_rel_match(nlp(text), matcher), sep='\n') if matcher.has_key("acquires"): matcher.remove("acquires") ###Output _____no_output_____ ###Markdown Blueprint: Relation Extraction using Dependency Trees ###Code # recreate pipeline reset_pipeline(nlp, ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver', 'name_resolver', 'neural_coref', 'anaphor_coref', 'norm_names']) text = "Fujitsu, a competitor of NEC, acquired Fairchild Corp." doc = nlp(text) displacy.render(doc, style='dep', jupyter=True, options={'compact': False, 'distance': 100}) text = "Fairchild Corp was acquired by Fujitsu." doc = nlp(text) displacy.render(doc, style='dep', jupyter=True, options={'compact': False, 'distance': 100}) # Here is the longer part of the code, that was skipped in the book. # Actually we search for the shortest path between the # subject running through our predicate (verb) to the object. # subject and object are organizations in our examples. 
# Here are the three helper functions omitted in the book: # - bfs: breadth first searching the closest subject/object # - is_passive: checks if noun or verb is in passive form # - find_subj: searches left part of tree for subject # - find_obj: searches right part of tree for object from collections import deque def bfs(root, ent_type, deps, first_dep_only=False): """Return first child of root (included) that matches ent_type and dependency list by breadth first search. Search stops after first dependency match if first_dep_only (used for subject search - do not "jump" over subjects)""" to_visit = deque([root]) # queue for bfs while len(to_visit) > 0: child = to_visit.popleft() # print("child", child, child.dep_) if child.dep_ in deps: if child._.ref_t == ent_type: return child elif first_dep_only: # first match (subjects) return None elif child.dep_ == 'compound' and \ child.head.dep_ in deps and \ child._.ref_t == ent_type: # check if contained in compound return child to_visit.extend(list(child.children)) return None def is_passive(token): if token.dep_.endswith('pass'): # noun return True for left in token.lefts: # verb if left.dep_ == 'auxpass': return True return False def find_subj(pred, ent_type, passive): """Find closest subject in predicates left subtree or predicates parent's left subtree (recursive). Has a filter on organizations.""" for left in pred.lefts: if passive: # if pred is passive, search for passive subject subj = bfs(left, ent_type, ['nsubjpass', 'nsubj:pass'], True) else: subj = bfs(left, ent_type, ['nsubj'], True) if subj is not None: # found it! return subj if pred.head != pred and not is_passive(pred): return find_subj(pred.head, ent_type, passive) # climb up left subtree else: return None def find_obj(pred, ent_type, excl_prepos): """Find closest object in predicates right subtree. Skip prepositional objects if the preposition is in exclude list. Has a filter on organizations.""" for right in pred.rights: obj = bfs(right, ent_type, ['dobj', 'pobj', 'iobj', 'obj', 'obl']) if obj is not None: if obj.dep_ == 'pobj' and obj.head.lemma_.lower() in excl_prepos: # check preposition continue return obj return None def extract_rel_dep(doc, pred_name, pred_synonyms, excl_prepos=[]): for token in doc: if token.pos_ == 'VERB' and token.lemma_ in pred_synonyms: pred = token passive = is_passive(pred) subj = find_subj(pred, 'ORG', passive) if subj is not None: obj = find_obj(pred, 'ORG', excl_prepos) if obj is not None: if passive: # switch roles obj, subj = subj, obj yield ((subj._.ref_n, subj._.ref_t), pred_name, (obj._.ref_n, obj._.ref_t)) text = """Fujitsu said that Schlumberger Ltd has arranged to sell its stake in Fairchild Inc.""" doc = nlp(text) print(*extract_rel_dep(doc, 'sells', ['sell']), sep='\n') text = "Schlumberger Ltd has arranged to sell to Fujitsu its stake in Fairchild Inc." doc = nlp(text) print(*extract_rel_dep(doc, 'sells', ['sell']), sep='\n') displacy.render(doc, style='dep', jupyter=True, options={'compact': False, 'distance': 80}) print("A:", *extract_rel_dep(doc, 'sells', ['sell'])) print("B:", *extract_rel_dep(doc, 'sells', ['sell'], ['to', 'from'])) texts = [ "Fairchild Corp was bought by Fujitsu.", # 1 "Fujitsu, a competitor of NEC Co, acquired Fairchild Inc.", # 2 "Fujitsu is expanding." + "The company made an offer to acquire 80% of Fairchild Inc.", # 3 "Fujitsu plans to acquire 80% of Fairchild Corp.", # 4 "Fujitsu plans not to acquire Fairchild Corp.", # 5 "The competition forced Fujitsu to aquire Fairchild Corp." 
# 6 ] acq_synonyms = ['acquire', 'buy', 'purchase'] for i, text in enumerate(texts): doc = nlp(text) rels = extract_rel_dep(doc, 'acquires', acq_synonyms, ['to', 'from']) print(f'{i+1}:', *rels) ###Output _____no_output_____ ###Markdown Creating the Knowledge Graph **On Colab**: Choose "Runtime"&rarr;"Change Runtime Type"&rarr;"GPU" to benefit from the GPUs. ###Code if spacy.prefer_gpu(): print("Working on GPU.") else: print("No GPU found, working on CPU.") nlp = en_core_web_lg.load() # need to re-create the entity ruler after reloading nlp # because new entity type 'GOV' needs to be added to nlp.vocab entity_ruler = EntityRuler(nlp, patterns=patterns, overwrite_ents=True) pipes = ['entity_ruler', 'norm_entities', 'merge_entities', 'init_coref', 'alias_resolver', 'name_resolver', 'neural_coref', 'anaphor_coref', 'norm_names'] for pipe in pipes: nlp.add_pipe(pipe) # recreate matcher - same definition as above for these rules matcher = Matcher(nlp.vocab) subs_synonyms = ['subsidiary', 'unit'] pattern = [{'_': {'ref_t': 'ORG'}}, # subject {'_': {'ref_t': {'NOT_IN': ['ORG']}}, 'POS': {'NOT_IN': ['VERB']}, 'OP': '*'}, {'LOWER': {'IN': subs_synonyms}}, # predicate {'TEXT': 'of'}, {'_': {'ref_t': {'NOT_IN': ['ORG']}}, 'POS': {'NOT_IN': ['VERB']}, 'OP': '*'}, {'_': {'ref_t': 'ORG'}}] # object matcher.add('subsidiary-of', [pattern]) pattern = [{'_': {'ref_t': 'ORG'}}, # subject {'POS': 'PART', 'OP': '?'}, {'LOWER': {'IN': subs_synonyms}}, # predicate {'_': {'ref_t': 'ORG'}}] # object matcher.add('rev-subsidiary-of', [pattern]) ceo_synonyms = ['chairman', 'president', 'director', 'ceo', 'executive'] pattern = [{'ENT_TYPE': 'PERSON'}, {'ENT_TYPE': {'NOT_IN': ['ORG', 'PERSON']}, 'OP': '*'}, {'LOWER': {'IN': ceo_synonyms}}, {'TEXT': 'of'}, {'ENT_TYPE': {'NOT_IN': ['ORG', 'PERSON']}, 'OP': '*'}, {'ENT_TYPE': 'ORG'}] matcher.add('executive-of', [pattern]) pattern = [{'ENT_TYPE': 'ORG'}, {'LOWER': {'IN': ceo_synonyms}}, {'ENT_TYPE': 'PERSON'}] matcher.add('rev-executive-of', [pattern]) def extract_rels(doc): yield from extract_rel_match(doc, matcher) yield from extract_rel_dep(doc, 'acquires', acq_synonyms, ['to', 'from']) yield from extract_rel_dep(doc, 'sells', ['sell'], ['to', 'from']) ###Output _____no_output_____ ###Markdown Testing Relationship Extraction (not in book) ###Code text = """Allied-Signal Inc and Schlumberger Ltd jointly announced that Schlumberger had acquired Allied-Signal's unit Neptune International. """ #text = df.text.loc[19975] text = re.sub(r'\s+', ' ', text).strip() print(*textwrap.wrap(text, 100), sep='\n') print() doc = nlp(text, disable='entity_ruler') #displacy.render(doc, style='ent') print(*extract_rels(doc), sep='\n') displacy.render(doc, style='dep', jupyter=True, options={'compact': False, 'distance': 100}) ###Output _____no_output_____ ###Markdown Extraction of Entities and Relations and Creation of Gephi-File (not in book)Batch-processing for entity extraction with subsequent relation extraction. Takes about 5 minutes, 80% of runtime for NeuralCoref. 
###Code from math import ceil batch_size = 20 batches = ceil(len(df) / batch_size) ### rels = [] for i in tqdm(range(0, len(df), batch_size), total=batches): docs = nlp.pipe(df['text'][i:i+batch_size]) for j, doc in enumerate(docs): rels.extend([(df.index[i+j], *r) for r in extract_rels(doc)]) ###Output _____no_output_____ ###Markdown Creation of the relation data frame including final curation: ###Code # unpack subject and object rels = [(a_id, *subj, pred, *obj) for (a_id, subj, pred, obj) in rels] # create data frame rel_df = pd.DataFrame.from_records(rels, columns=('article_id', 'subj', 'subj_type', 'pred', 'obj', 'obj_type')) # false positives: subject cannot be object rel_df = rel_df.query('subj != obj') # filter entities that were not correctly detected # tokenizer produces "-owned XYZ company" rel_df = rel_df[~rel_df['subj'].str.startswith('-own')] rel_df = rel_df[~rel_df['obj'].str.startswith('-own')] # drop duplicate relations (within an article) rel_df = rel_df.drop_duplicates() # aggregate to produce one record per relation rel_df['article_id'] = rel_df['article_id'].map(lambda a: [a]) rel_df = rel_df.groupby(['subj', 'subj_type', 'pred', 'obj', 'obj_type'])['article_id'] \ .agg(['count', 'sum']) \ .rename(columns={'count': 'freq', 'sum': 'articles'}) \ .reset_index().sort_values('freq', ascending=False) rel_df['articles'] = rel_df['articles'].map(lambda lst: ','.join(list(set([str(a) for a in lst])))) rel_df.head(10) # some statitics rel_df['pred'].value_counts() # try searching for a specific entity search = "Trans World" rel_df[(rel_df.subj.str.lower().str.contains(search.lower()) | rel_df.obj.str.lower().str.contains(search.lower()))] # in fact, TWA acquires and sells parts of USAir according to the messages # look at a specific article text = df['text'][9487] print(*textwrap.wrap(text, 80), sep='\n') ###Output _____no_output_____ ###Markdown To create the NetworkX graph be careful: We need a `MultiDiGraph` here, a directed graph allowing multiple edges between two nodes! ###Code import networkx as nx from networkx import MultiDiGraph graph = MultiDiGraph() for i, row in rel_df.iterrows(): graph.add_node(row['subj'], Type=row['subj_type']) graph.add_node(row['obj'], Type=row['obj_type']) _ = graph.add_edge(row['subj'], row['obj'], Articles=row['articles'], Rel=row['pred']) nx.readwrite.write_gexf(graph, 'knowledge_graph.gexf', encoding='utf-8', prettyprint=True, version='1.2draft') ###Output _____no_output_____
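###Markdown As an optional last step, the graph object built above can be inspected directly with NetworkX before opening the exported `.gexf` file in Gephi. The cell below is only a sketch: it reuses the `graph` and `pd` objects from this notebook and prints a few summary statistics. ###Code
# basic size of the knowledge graph
print(f"{graph.number_of_nodes()} nodes, {graph.number_of_edges()} edges")

# the ten most connected entities (by total degree)
for node, degree in sorted(graph.degree(), key=lambda x: x[1], reverse=True)[:10]:
    print(f"{degree:3d}  {node}")

# how often each relation type occurs on the edges
rel_counts = pd.Series([attrs['Rel'] for _, _, attrs in graph.edges(data=True)]).value_counts()
print(rel_counts)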
notebooks/pMEC1136.ipynb
###Markdown pMEC1136Also referred to as pYPK0-Df. This vector expresses four genes and was assembled from four single gene expression cassettes:Gene | Enzyme | Acronym | Cassette-------------------------------------------------- |-------------------|---|-----|[SsXYL1](http://www.ncbi.nlm.nih.gov/gene/4839234) |D-xylose reductase |XR | [pYPK0_TEF1_PsXYL1_TDH3](pYPK0_TEF1_PsXYL1_TDH3.ipynb)[SsXYL2](http://www.ncbi.nlm.nih.gov/gene/4852013) |xylitol dehydrogenase |XDH | [pYPK0_TDH3_PsXYL2_PGI1](pYPK0_TDH3_PsXYL2_PGI1.ipynb)[ScXKS1](http://www.yeastgenome.org/locus/S000003426/overview) |Xylulokinase |XK | [pYPK0_PGI1_ScXKS1_FBA1](pYPK0_PGI1_ScXKS1_FBA1.ipynb)[ScTAL1](http://www.yeastgenome.org/locus/S000004346/overview) |Transaldolase |tal1p | [pYPK0_FBA1_ScTAL1_PDC1](pYPK0_FBA1_ScTAL1_PDC1.ipynb)The systematic name of this vector is : ```pYPK0-ScTEF1-XR-ScTDH3-XDH-ScPGI1-XK-ScFBA1-TAL1-ScPDC1```The vector [pMEC1135](pMEC1135.ipynb) is identical to this vector, but has a point mutation in XYL1. [Yeast Pathway Kit Standard Primers](ypk_std_primers.ipynb) ###Code from pydna.all import * p567,p577,p468,p467,p568,p578,p775,p778,p167,p166 = parse("yeast_pahtway_kit_standard_primers.txt") pYPK0 =read("pYPK0.gb") pYPK0.cseguid() from Bio.Restriction import ZraI, AjiI, EcoRV p417,p626 =parse(''' >417_ScTEF1tpf (30-mer) TTAAATAACAATGCATACTTTGTACGTTCA >626_ScTEF1tpr_PacI (35-mer) taattaaTTTGTAATTAAAACTTAGATTAGATTGC''', ds=False) p415,p623 =parse(''' >415_ScTDH3tpf (29-mer) TTAAATAATAAAAAACACGCTTTTTCAGT >623_ScTDH3tpr_PacI (33-mer) taattaaTTTGTTTGTTTATGTGTGTTTATTCG''', ds=False) p549,p622 =parse(''' >549_ScPGI1tpf (27-mer) ttaaatAATTCAGTTTTCTGACTGAGT >622_ScPGI1tpr_PacI (28-mer) taattaaTTTTAGGCTGGTATCTTGATT''', ds=False) p409,p624 =parse(''' >409_ScFBA1tpf (37-mer) TTAAATAATAACAATACTGACAGTACTAAATAATTGC >624_ScFBA1tpr_PacI (29-mer) taattaaTTTGAATATGTATTACTTGGTT''', ds=False) p1 =read("pYPK0_TEF1_PsXYL1_TDH3.gb") p2 =read("pYPK0_TDH3_PsXYL2_PGI1.gb") p3 =read("pYPK0_PGI1_ScXKS1_FBA1.gb") p4 =read("pYPK0_FBA1_ScTAL1_PDC1.gb") cas1 =pcr( p167, p623, p1) cas2 =pcr( p415, p622, p2) cas3 =pcr( p549, p624, p3) cas4 =pcr( p409, p166, p4) pYPK0_E_Z, stuffer = pYPK0.cut((EcoRV, ZraI)) (pYPK0_E_Z, cas1, cas2, cas3, cas4) asm =Assembly( [pYPK0_E_Z, cas1, cas2, cas3, cas4] , limit = 61) asm candidate = asm.assemble_circular()[0] candidate.figure() pw = candidate.synced("tcgcgcgtttcggtgatgacggtgaaaacctctg") len(pw) pw.cseguid() pw.id = "pMEC1136" pw.description="pYPK0-ScTEF1-XR-ScTDH3-XDH-ScPGI1-XK-ScFBA1-TAL1-ScPDC1 (alt name pYPK0_Df)" pw.stamp() pw.write("pMEC1136.gb") ###Output _____no_output_____ ###Markdown Download[pMEC1136](pMEC1136.gb) ###Code reloaded =read("pMEC1136.gb") reloaded.cseguid() ###Output _____no_output_____
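###Markdown As a quick consistency check, the checksum of the reloaded file can be compared with the plasmid that was just assembled in memory; the sketch below only reuses objects already defined in this notebook. ###Code
# the circular sequence checksum must be identical for the assembled and the reloaded plasmid
assert reloaded.cseguid() == pw.cseguid()
print("pMEC1136 written and reloaded with matching cseguid:", pw.cseguid())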
notebooks/1M_brain_gpu_analysis_multigpu.ipynb
###Markdown RAPIDS & Scanpy Single-Cell RNA-seq Workflow on 1.3 Million Cells Copyright (c) 2020, NVIDIA CORPORATION.Licensed under the Apache License, Version 2.0 (the "License") you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. This notebook demonstrates a single-cell RNA analysis workflow that begins with preprocessing a count matrix of size `(n_gene, n_cell)` and results in a visualization of the clustered cells for further analysis. For demonstration purposes, we use a dataset of 1M brain cells with Unified Virtual Memory to oversubscribe GPU memory. See the README for instructions to download this dataset. Change to the notebooks directory Change into the *notebooks* directory. You may need to modify your path depending on where you cloned the repo. ###Code import os os.chdir("/datasets/cnolet/workspace/rapids-single-cell-examples/notebooks") ###Output _____no_output_____ ###Markdown Import requirements ###Code import numpy as np import scanpy as sc import anndata import dask import time import cudf import cuml import cupy as cp import os, wget from cuml.decomposition import PCA from cuml.manifold import TSNE from cuml.cluster import KMeans from dask_cuda import initialize, LocalCUDACluster from dask.distributed import Client, default_client import rapids_scanpy_funcs import utils import logging import warnings warnings.filterwarnings('ignore') warnings.simplefilter('ignore') ###Output _____no_output_____ ###Markdown We use the RAPIDS memory manager to enable Unified Virtual Memory management, which allows us to oversubscribe the GPU memory. ###Code import rmm def set_mem(): rmm.reinitialize(managed_memory=True) cp.cuda.set_allocator(rmm.rmm_cupy_allocator) cluster = LocalCUDACluster(CUDA_VISIBLE_DEVICES="0, 1, 2, 3, 4, 5, 6, 7") client = Client(cluster) set_mem() client.run(set_mem) client ###Output WARNING:bokeh.server.util:Host wildcard '*' will allow connections originating from multiple (or possibly all) hostnames or IPs. Use non-wildcard values to restrict access explicitly ###Markdown Input data In the cell below, we provide the path to the sparse `.h5ad` file containing the count matrix to analyze.To run this notebook using your own dataset, please see the README for instructions to convert your own count matrix into this format. Then, replace the path in the cell below with the path to your generated `.h5ad` file. 
###Code input_file = "../data/1M_brain_cells_10X.sparse.h5ad" if not os.path.exists(input_file): print('Downloading import file...') os.makedirs('../data', exist_ok=True) wget.download('https://rapids-single-cell-examples.s3.us-east-2.amazonaws.com/1M_brain_cells_10X.sparse.h5ad', input_file) ###Output _____no_output_____ ###Markdown Set parameters ###Code # marker genes MITO_GENE_PREFIX = "mt-" # Prefix for mitochondrial genes to regress out markers = ["Stmn2", "Hes1", "Olig1"] # Marker genes for visualization # filtering cells min_genes_per_cell = 200 # Filter out cells with fewer genes than this expressed max_genes_per_cell = 6000 # Filter out cells with more genes than this expressed # filtering genes n_top_genes = 4000 # Number of highly variable genes to retain # PCA n_components = 50 # Number of principal components to compute # Batched PCA pca_train_ratio = 0.35 # Fraction of cells to use for PCA training n_pca_batches = 8 # t-SNE tsne_n_pcs = 20 # Number of principal components to use for t-SNE # k-means k = 35 # Number of clusters for k-means # KNN n_neighbors = 15 # Number of nearest neighbors for KNN graph knn_n_pcs = 50 # Number of principal components to use for finding nearest neighbors # UMAP umap_min_dist = 0.3 umap_spread = 1.0 start = time.time() ###Output _____no_output_____ ###Markdown Load and Preprocess Data ###Code data_load_preprocess_start = time.time() ###Output _____no_output_____ ###Markdown Below, we load the sparse count matrix from the `.h5ad` file into GPU using a custom function. While reading the dataset, filters are applied on the count matrix to remove cells with an extreme number of genes expressed. Genes will zero expression in all cells are also eliminated. The custom function uses [Dask](https://dask.org) to partition data. The above mentioned filters are applied on individual partitions. Usage of Dask along with cupy provides the following benefits:- Parallelized data loading when multiple GPUs are available- Ability to partition the data allows pre-processing large datasetsFilters are applied on individual batches of cells. Elementwise or cell-level normalization operations are also performed while reading. For this example, the following two operations are performed:- Normalize the count matrix so that the total counts in each cell sum to 1e4.- Log transform the count matrix. ###Code %%time def partial_post_processor(partial_data): partial_data = rapids_scanpy_funcs.normalize_total(partial_data, target_sum=1e4) return partial_data.log1p() dask_sparse_arr, genes, query = rapids_scanpy_funcs.read_with_filter(client, input_file, min_genes_per_cell=min_genes_per_cell, max_genes_per_cell=max_genes_per_cell, partial_post_processor=partial_post_processor) dask_sparse_arr = dask_sparse_arr.persist() ###Output CPU times: user 7.1 s, sys: 1.83 s, total: 8.94 s Wall time: 42.2 s ###Markdown Verify the shape of the resulting sparse matrix: ###Code dask_sparse_arr.shape ###Output _____no_output_____ ###Markdown Select Most Variable Genes Before filtering the count matrix, we save the 'raw' expression values of the marker genes to use for labeling cells afterward. ###Code %%time marker_genes_raw = {} i = 0 for index in genes[genes.isin(markers)].index.to_arrow().to_pylist(): marker_genes_raw[markers[i]] = dask_sparse_arr[:, index].compute().toarray().ravel() i += 1 ###Output CPU times: user 850 ms, sys: 327 ms, total: 1.18 s Wall time: 7.09 s ###Markdown Filter the count matrix to retain only the most variable genes. 
###Code %%time hvg = rapids_scanpy_funcs.highly_variable_genes_filter(client, dask_sparse_arr, genes, n_top_genes=n_top_genes) genes = genes[hvg] dask_sparse_arr = dask_sparse_arr[:, hvg] sparse_gpu_array = dask_sparse_arr.compute() # del dask_sparse_arr del hvg ###Output CPU times: user 2.19 s, sys: 5.07 s, total: 7.26 s Wall time: 9.09 s ###Markdown Regress out confounding factors (number of counts, mitochondrial gene expression) We can now perform regression on the count matrix to correct for confounding factors - for example purposes, we use the number of counts and the expression of mitochondrial genes (named starting with `mt-`). We now calculate the total counts and the percentage of mitochondrial counts for each cell. ###Code %%time sparse_gpu_array = sparse_gpu_array.tocsc() mito_genes = genes.str.startswith(MITO_GENE_PREFIX).values n_counts = sparse_gpu_array.sum(axis=1) percent_mito = (sparse_gpu_array[:,mito_genes].sum(axis=1) / n_counts).ravel() n_counts = cp.array(n_counts).ravel() percent_mito = cp.array(percent_mito).ravel() del sparse_gpu_array ###Output _____no_output_____ ###Markdown And perform regression: ###Code %%time n_rows = dask_sparse_arr.shape[0] n_cols = dask_sparse_arr.shape[1] dask_sparse_arr = dask_sparse_arr.map_blocks(lambda x: x.todense(), dtype="float32", meta=cp.array(cp.zeros((0,)))).T dask_sparse_arr = dask_sparse_arr.rechunk((500, n_rows)).persist() dask_sparse_arr.compute_chunk_sizes() %%time import math dask_sparse_arr = dask_sparse_arr.map_blocks(lambda x: rapids_scanpy_funcs.regress_out(x.T, n_counts, percent_mito).T, dtype="float32", meta=cp.array(cp.zeros(0,))).T dask_sparse_arr = dask_sparse_arr.rechunk((math.ceil(n_rows/8), n_cols)).persist() dask_sparse_arr.compute_chunk_sizes() ###Output CPU times: user 2.41 s, sys: 1.09 s, total: 3.5 s Wall time: 36 s ###Markdown Scale Finally, we scale the count matrix to obtain a z-score and apply a cutoff value of 10 standard deviations, obtaining the preprocessed count matrix. ###Code %%time mean = dask_sparse_arr.mean(axis=0) dask_sparse_arr -= mean stddev = cp.sqrt(dask_sparse_arr.var(axis=0).compute()) dask_sparse_arr /= stddev dask_sparse_arr = dask.array.clip(dask_sparse_arr, 0, 10).persist() del mean, stddev data_load_preprocess_time = time.time() print("Total data load and preprocessing time: %s" % (data_load_preprocess_time-data_load_preprocess_start)) ###Output Total data load and preprocessing time: 117.95534157752991 ###Markdown Cluster & Visualize Reduce We use PCA to reduce the dimensionality of the matrix to its top 50 principal components.If the number of cells was smaller, we would use the command `adata.obsm["X_pca"] = cuml.dask.decomposition.PCA(n_components=n_components, output_type="numpy").fit_transform(dask_sparse_arr)` to perform PCA on all the cells.However, we cannot perform PCA on the complete dataset using a single GPU. Therefore, we use the batched PCA function in `utils.py`, which uses only a fraction of the total cells to train PCA. ###Code %%time from cuml.dask.decomposition import PCA pca_data = PCA(n_components=50).fit_transform(dask_sparse_arr) pca_data.compute_chunk_sizes() ###Output CPU times: user 895 ms, sys: 229 ms, total: 1.12 s Wall time: 11.6 s ###Markdown We store the preprocessed count matrix as an AnnData object, which is currently in host memory. We also add the expression levels of the marker genes as observations to the annData object. 
###Code %%time local_pca = pca_data.compute() adata = anndata.AnnData(local_pca.get()) pca_data.shape ###Output _____no_output_____ ###Markdown t-SNE + K-means We cluster the cells using k-means on the principal components. For example purposes, we set k=35. ###Code %%time adata.obsm['X_tsne'] = TSNE().fit_transform(adata.X[:,:tsne_n_pcs]) %%time from cuml.dask.cluster import KMeans kmeans_labels = KMeans(n_clusters=k, init="k-means||", random_state=0).fit_predict(pca_data) adata.obs['kmeans'] = kmeans_labels.compute().get().astype(str) ###Output CPU times: user 1.07 s, sys: 217 ms, total: 1.29 s Wall time: 8.47 s ###Markdown We visualize the cells using t-SNE and label cells by color according to the k-means clustering. ###Code %%time sc.pl.tsne(adata, color=["kmeans"]) ###Output _____no_output_____ ###Markdown UMAP + Graph clustering We can also visualize the cells using the UMAP algorithm in Rapids. Before UMAP, we need to construct a k-nearest neighbors graph in which each cell is connected to its nearest neighbors. This can be done conveniently using rapids functionality already integrated into Scanpy.Note that Scanpy uses an approximation to the nearest neighbors on the CPU while the GPU version performs an exact search. While both methods are known to yield useful results, some differences in the resulting visualization and clusters can be observed. The UMAP function from Rapids is also integrated into Scanpy. ###Code %%time from cuml.dask.manifold import UMAP as DaskUMAP from cuml.manifold import UMAP umap_train = pca_data[:int(0.10*pca_data.shape[0])].compute() umap = UMAP(min_dist=umap_min_dist, spread=umap_spread).fit(umap_train) %%time adata.obsm["X_umap"] = DaskUMAP(umap).transform(pca_data).compute().get() ###Output CPU times: user 495 ms, sys: 109 ms, total: 604 ms Wall time: 6.78 s ###Markdown We can distribute the computation of the nearest neighbors graph for graph-based clustering ###Code %%time sc.pp.neighbors(adata, n_neighbors=n_neighbors, n_pcs=knn_n_pcs, method='rapids') ###Output CPU times: user 55 s, sys: 11.9 s, total: 1min 6s Wall time: 1min 4s ###Markdown Next, we use the Louvain algorithm for graph-based clustering. ###Code %%time sc.tl.louvain(adata, flavor='rapids') ###Output CPU times: user 2.56 s, sys: 864 ms, total: 3.42 s Wall time: 3.33 s ###Markdown We plot the cells using the UMAP visualization, and using the Louvain clusters as labels. ###Code %%time sc.pl.umap(adata, color=["louvain"]) ###Output _____no_output_____ ###Markdown We can also use the Leiden clustering method in RAPIDS. This method has not been integrated into Scanpy and needs to be called separately. ###Code %%time adata.obs['leiden'] = rapids_scanpy_funcs.leiden(adata) %%time sc.pl.umap(adata, color=["leiden"]) print("Full time: %s" % (time.time() - start)) client.shutdown() cluster.close() ###Output Full time: 299.02070474624634 ###Markdown RAPIDS & Scanpy Single-Cell RNA-seq Workflow on 1.3 Million Cells Copyright (c) 2020, NVIDIA CORPORATION.Licensed under the Apache License, Version 2.0 (the "License") you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
This notebook demonstrates a single-cell RNA analysis workflow that begins with preprocessing a count matrix of size `(n_gene, n_cell)` and results in a visualization of the clustered cells for further analysis. For demonstration purposes, we use a dataset of 1M brain cells with Unified Virtual Memory to oversubscribe GPU memory. See the README for instructions to download this dataset. Change to the notebooks directory Change into the *notebooks* directory. You may need to modify your path depending on where you cloned the repo. ###Code import os os.chdir("/") ###Output _____no_output_____ ###Markdown Import requirements ###Code import numpy as np import scanpy as sc import anndata import dask import time import cudf import cuml import cupy as cp import os, wget from cuml.decomposition import PCA from cuml.manifold import TSNE from cuml.cluster import KMeans from dask_cuda import initialize, LocalCUDACluster from dask.distributed import Client, default_client import rapids_scanpy_funcs import utils import warnings warnings.filterwarnings('ignore', 'Expected ') warnings.simplefilter('ignore') ###Output _____no_output_____ ###Markdown We use the RAPIDS memory manager to enable Unified Virtual Memory management, which allows us to oversubscribe the GPU memory. ###Code import rmm def set_mem(): rmm.reinitialize(managed_memory=True) cp.cuda.set_allocator(rmm.rmm_cupy_allocator) cluster = LocalCUDACluster(CUDA_VISIBLE_DEVICES="0, 1, 2, 3, 4, 5, 6, 7") client = Client(cluster) set_mem() client.run(set_mem) client ###Output WARNING:bokeh.server.util:Host wildcard '*' will allow connections originating from multiple (or possibly all) hostnames or IPs. Use non-wildcard values to restrict access explicitly Start worker at: tcp://127.0.0.1:34195 INFO:distributed.worker: Start worker at: tcp://127.0.0.1:34195 ###Markdown Input data In the cell below, we provide the path to the sparse `.h5ad` file containing the count matrix to analyze.To run this notebook using your own dataset, please see the README for instructions to convert your own count matrix into this format. Then, replace the path in the cell below with the path to your generated `.h5ad` file. 
###Code input_file = "../data/1M_brain_cells_10X.sparse.h5ad" if not os.path.exists(input_file): print('Downloading import file...') os.makedirs('../data', exist_ok=True) wget.download('https://rapids-single-cell-examples.s3.us-east-2.amazonaws.com/1M_brain_cells_10X.sparse.h5ad', input_file) ###Output _____no_output_____ ###Markdown Set parameters ###Code # marker genes MITO_GENE_PREFIX = "mt-" # Prefix for mitochondrial genes to regress out markers = ["Stmn2", "Hes1", "Olig1"] # Marker genes for visualization # filtering cells min_genes_per_cell = 200 # Filter out cells with fewer genes than this expressed max_genes_per_cell = 6000 # Filter out cells with more genes than this expressed # filtering genes n_top_genes = 4000 # Number of highly variable genes to retain # PCA n_components = 50 # Number of principal components to compute # Batched PCA pca_train_ratio = 0.35 # Fraction of cells to use for PCA training n_pca_batches = 8 # t-SNE tsne_n_pcs = 20 # Number of principal components to use for t-SNE # k-means k = 35 # Number of clusters for k-means # KNN n_neighbors = 15 # Number of nearest neighbors for KNN graph knn_n_pcs = 50 # Number of principal components to use for finding nearest neighbors # UMAP umap_min_dist = 0.3 umap_spread = 1.0 start = time.time() ###Output _____no_output_____ ###Markdown Load and Preprocess Data ###Code data_load_preprocess_start = time.time() ###Output _____no_output_____ ###Markdown Below, we load the sparse count matrix from the `.h5ad` file into GPU using a custom function. While reading the dataset, filters are applied on the count matrix to remove cells with an extreme number of genes expressed. Genes will zero expression in all cells are also eliminated. The custom function uses [Dask](https://dask.org) to partition data. The above mentioned filters are applied on individual partitions. Usage of Dask along with cupy provides the following benefits:- Parallelized data loading when multiple GPUs are available- Ability to partition the data allows pre-processing large datasetsFilters are applied on individual batches of cells. Elementwise or cell-level normalization operations are also performed while reading. For this example, the following two operations are performed:- Normalize the count matrix so that the total counts in each cell sum to 1e4.- Log transform the count matrix. ###Code %%time def partial_post_processor(partial_data): partial_data = rapids_scanpy_funcs.normalize_total(partial_data, target_sum=1e4) return partial_data.log1p() dask_sparse_arr, genes, query = rapids_scanpy_funcs.read_with_filter(client, input_file, min_genes_per_cell=min_genes_per_cell, max_genes_per_cell=max_genes_per_cell, partial_post_processor=partial_post_processor) dask_sparse_arr = dask_sparse_arr.persist() ###Output CPU times: user 6.81 s, sys: 1.57 s, total: 8.38 s Wall time: 41.1 s ###Markdown Verify the shape of the resulting sparse matrix: ###Code dask_sparse_arr.shape ###Output _____no_output_____ ###Markdown Select Most Variable Genes Before filtering the count matrix, we save the 'raw' expression values of the marker genes to use for labeling cells afterward. ###Code %%time marker_genes_raw = {} i = 0 for index in genes[genes.isin(markers)].index.to_arrow().to_pylist(): marker_genes_raw[markers[i]] = dask_sparse_arr[:, index].compute().toarray().ravel() i += 1 ###Output CPU times: user 770 ms, sys: 350 ms, total: 1.12 s Wall time: 7.51 s ###Markdown Filter the count matrix to retain only the most variable genes. 
###Code %%time hvg = rapids_scanpy_funcs.highly_variable_genes_filter(client, dask_sparse_arr, genes, n_top_genes=n_top_genes) genes = genes[hvg] dask_sparse_arr = dask_sparse_arr[:, hvg] sparse_gpu_array = dask_sparse_arr.compute() # del dask_sparse_arr del hvg ###Output CPU times: user 2.07 s, sys: 4.92 s, total: 6.99 s Wall time: 8.83 s ###Markdown Regress out confounding factors (number of counts, mitochondrial gene expression) We can now perform regression on the count matrix to correct for confounding factors - for example purposes, we use the number of counts and the expression of mitochondrial genes (named starting with `mt-`). We now calculate the total counts and the percentage of mitochondrial counts for each cell. ###Code %%time sparse_gpu_array = sparse_gpu_array.tocsc() mito_genes = genes.str.startswith(MITO_GENE_PREFIX).values n_counts = sparse_gpu_array.sum(axis=1) percent_mito = (sparse_gpu_array[:,mito_genes].sum(axis=1) / n_counts).ravel() n_counts = cp.array(n_counts).ravel() percent_mito = cp.array(percent_mito).ravel() del sparse_gpu_array ###Output _____no_output_____ ###Markdown And perform regression: ###Code %%time n_rows = dask_sparse_arr.shape[0] n_cols = dask_sparse_arr.shape[1] dask_sparse_arr = dask_sparse_arr.map_blocks(lambda x: x.todense(), dtype="float32", meta=cp.array(cp.zeros((0,)))).T dask_sparse_arr = dask_sparse_arr.rechunk((500, n_rows)).persist() dask_sparse_arr.compute_chunk_sizes() %%time import math dask_sparse_arr = dask_sparse_arr.map_blocks(lambda x: rapids_scanpy_funcs.regress_out(x.T, n_counts, percent_mito).T, dtype="float32", meta=cp.array(cp.zeros(0,))).T dask_sparse_arr = dask_sparse_arr.rechunk((math.ceil(n_rows/8), n_cols)).persist() dask_sparse_arr.compute_chunk_sizes() ###Output CPU times: user 2.17 s, sys: 930 ms, total: 3.1 s Wall time: 35.1 s ###Markdown Scale Finally, we scale the count matrix to obtain a z-score and apply a cutoff value of 10 standard deviations, obtaining the preprocessed count matrix. ###Code %%time mean = dask_sparse_arr.mean(axis=0) dask_sparse_arr -= mean stddev = cp.sqrt(dask_sparse_arr.var(axis=0).compute()) dask_sparse_arr /= stddev dask_sparse_arr = dask.array.clip(dask_sparse_arr, 0, 10).persist() del mean, stddev data_load_preprocess_time = time.time() print("Total data load and preprocessing time: %s" % (data_load_preprocess_time-data_load_preprocess_start)) ###Output Total data load and preprocessing time: 113.3812141418457 ###Markdown Cluster & Visualize Reduce We use PCA to reduce the dimensionality of the matrix to its top 50 principal components.If the number of cells was smaller, we would use the command `adata.obsm["X_pca"] = cuml.dask.decomposition.PCA(n_components=n_components, output_type="numpy").fit_transform(dask_sparse_arr)` to perform PCA on all the cells.However, we cannot perform PCA on the complete dataset using a single GPU. Therefore, we use the batched PCA function in `utils.py`, which uses only a fraction of the total cells to train PCA. ###Code %%time pca = PCA(n_components=50).fit(dask_sparse_arr[:int(0.35*dask_sparse_arr.shape[0])].compute()) pca_data = dask_sparse_arr.map_blocks(lambda x: pca.transform(x), dtype="float32", meta=cp.zeros((0,))).persist() pca_data.compute_chunk_sizes() ###Output CPU times: user 20.3 s, sys: 21.3 s, total: 41.6 s Wall time: 44.2 s ###Markdown We store the preprocessed count matrix as an AnnData object, which is currently in host memory. We also add the expression levels of the marker genes as observations to the annData object. 
###Code %%time local_pca = pca_data.compute() adata = anndata.AnnData(local_pca.get()) pca_data.shape ###Output _____no_output_____ ###Markdown t-SNE + K-means We cluster the cells using k-means on the principal components. For example purposes, we set k=35. ###Code %%time adata.obsm['X_tsne'] = TSNE().fit_transform(adata.X[:,:tsne_n_pcs]) %%time kmeans = KMeans(n_clusters=k, init="k-means++", random_state=0).fit(adata.X) adata.obs['kmeans'] = kmeans.labels_.astype(str) ###Output CPU times: user 1.32 s, sys: 349 ms, total: 1.67 s Wall time: 1.63 s ###Markdown We visualize the cells using t-SNE and label cells by color according to the k-means clustering. ###Code %%time sc.pl.tsne(adata, color=["kmeans"]) ###Output _____no_output_____ ###Markdown UMAP + Graph clustering We can also visualize the cells using the UMAP algorithm in Rapids. Before UMAP, we need to construct a k-nearest neighbors graph in which each cell is connected to its nearest neighbors. This can be done conveniently using rapids functionality already integrated into Scanpy.Note that Scanpy uses an approximation to the nearest neighbors on the CPU while the GPU version performs an exact search. While both methods are known to yield useful results, some differences in the resulting visualization and clusters can be observed. ###Code %%time sc.pp.neighbors(adata, n_neighbors=n_neighbors, n_pcs=knn_n_pcs, method='rapids') ###Output CPU times: user 1min, sys: 13 s, total: 1min 13s Wall time: 1min 11s ###Markdown The UMAP function from Rapids is also integrated into Scanpy. ###Code %%time sc.tl.umap(adata, min_dist=umap_min_dist, spread=umap_spread, method='rapids') ###Output WARNING: .obsp["connectivities"] have not been computed using umap CPU times: user 24 s, sys: 11.8 s, total: 35.8 s Wall time: 35.5 s ###Markdown Next, we use the Louvain algorithm for graph-based clustering. ###Code %%time sc.tl.louvain(adata, flavor='rapids') ###Output CPU times: user 2.27 s, sys: 687 ms, total: 2.95 s Wall time: 2.88 s ###Markdown We plot the cells using the UMAP visualization, and using the Louvain clusters as labels. ###Code %%time sc.pl.umap(adata, color=["louvain"]) ###Output _____no_output_____ ###Markdown We can also use the Leiden clustering method in RAPIDS. This method has not been integrated into Scanpy and needs to be called separately. ###Code %%time adata.obs['leiden'] = rapids_scanpy_funcs.leiden(adata) %%time sc.pl.umap(adata, color=["leiden"]) print("Full time: %s" % (time.time() - start)) client.shutdown() cluster.close() ###Output Full time: 349.65317606925964
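###Markdown If the clustering results are needed later, the annotated `AnnData` object can be summarized and written back to disk. This is an optional step outside the timed benchmark above, and the output path is only an example. ###Code
# cluster sizes for the graph-based clusterings
print(adata.obs['louvain'].value_counts().head())
print(adata.obs['leiden'].value_counts().head())

# persist the embeddings and cluster labels for downstream analysis (example path)
adata.write('../data/1M_brain_cells_clustered.h5ad')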
Chest_X_ray/COVID_X_Ray_dataset_preparation.ipynb
###Markdown Mounting ###Code from google.colab import drive drive.mount('/content/drive') ###Output Mounted at /content/drive ###Markdown Creating Dataset ###Code FLDR="drive/MyDrive/covid_19_images/" import os import cv2 import shutil import pandas as pd fldrs=os.listdir(FLDR) fldrs fldrs.remove('Unpro_2') fldrs.remove('Source_1') fldrs os.mkdir(FLDR+'Processed_file_2') dest_fldr=FLDR+"Processed_file_2" file_a=[] label=[] for f in fldrs: if 'Source' in f: int_f=os.listdir(FLDR+f) for f_2 in int_f: files=os.listdir(FLDR+f+'/'+f_2) for file in files: fle=FLDR+f+'/'+f_2+'/'+file print(fle) shutil.copy(fle,dest_fldr+'/') file_a.append(file) if f_2=='Covid-19': label.append(1) else: label.append(0) else: fldr_t="drive/MyDrive/covid_19_images/"+f+"/images/" print('drive/MyDrive/covid_19_images/'+f+'/metadata.csv') df=pd.read_csv('drive/MyDrive/covid_19_images/'+f+'/metadata.csv',encoding='unicode_escape') i=0 while i<len(df): if 'imagename' in df.columns: file_n=fldr_t+df.iloc[i]['imagename'] file_a.append(df.iloc[i]['imagename']) shutil.copy(file_n,dest_fldr+'/') if df.iloc[i]['finding']=='COVID-19': label.append(1) else: label.append(0) else: try: file_n=fldr_t+df.iloc[i]['patientid']+'.jpg' file_a.append(df.iloc[i]['patientid']+'.jpg') shutil.copy(file_n,dest_fldr+'/') if df.iloc[i]['finding']=='COVID-19': label.append(1) else: label.append(0) except: pass i+=1 print(f+" done") df=pd.DataFrame(list(zip(file_a,label)),columns=['Files','Labels']) len(file_a) df=pd.DataFrame(list(zip(file_a,label)),columns=['Files','Labels']) df.to_csv(FLDR+'Details_2.csv',index=False) ###Output _____no_output_____
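###Markdown A quick sanity check on the generated index: the class balance (1 = COVID-19, 0 = other findings) and whether every listed image was actually copied into the processed folder. The sketch below only reuses `df` and `dest_fldr` defined above. ###Code
# label distribution of the combined dataset
print(df['Labels'].value_counts())

# verify that every file referenced in Details_2.csv exists in the destination folder
missing = [f for f in df['Files'] if not os.path.exists(os.path.join(dest_fldr, f))]
print(f"{len(missing)} missing files out of {len(df)}")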
exercises/1.33.ipynb
###Markdown 練習問題1.33結合する項に対するフィルタ(filter)という概念を導⼊することで、 さらに⼀般的なバージョンのaccumulate(練習問 題1.32)を⼿に⼊れることができる。 指定された条件を満たす範囲内の値から導出される項だけを結合するというものだ。 結果となる filtered-accumulate抽象は、accumulateと同じ引数に加えて、 フィルタを指定する1引数の述語を取る。 filtered-accumulate を⼿続きとして書け。 次のものをfiltered-accumulateを使ってどのように表現するかを⽰せ。 a. aからbの区間の素数の⼆乗の和(すでにprime?述語を書い ているとする) b. $n$と互いに素である$n$未満のすべての正の整数 (つまり、 $gcd(i,n) = 1$となるすべての整数$i < n$)の積 ###Code ; 線形反復プロセス ; 再起呼び出し部分を1つにしようとしたら、変数を更新する処理が必要になった。 (define (filtered-accumulate combiner null-value filter term a next b) (define (iter a result) (if (> a b) result (let ((term-val (term a)) (result-next result)) (if (filter a)(set! result-next (combiner result term-val))) (iter (next a) result-next) ) ) ) (iter a null-value) ) ; 素直な実装 (define (filtered-accumulate combiner null-value filter term a next b) (define (iter a result) (if (> a b) result (if (filter a) (iter (next a) (combiner result (term a))) (iter (next a) result) ) ) ) (iter a null-value) ) ; aからbまでの整数の積 (define (factorial a b) (filtered-accumulate * 1 (lambda (x) #t) (lambda (x) x) a (lambda (n)(+ n 1)) b) ) (display (factorial 1 4)) (newline) ; aからbまでの整数の和 (define (sum a b) (filtered-accumulate + 0 (lambda (x) #t) (lambda (x) x) a (lambda (n)(+ n 1)) b) ) (display (sum 1 100)) (newline) (define (smallest-divisor n) (find-divisor n 2) ) (define (find-divisor n test-divisor) (cond ((> (square test-divisor) n) n) ((divides? test-divisor n) test-divisor) (else (find-divisor n (+ test-divisor 1))) ) ) (define (divides? a b) (= (remainder b a) 0) ) (define (square x) (* x x)) (define (prime? n) (= n (smallest-divisor n)) ) ; 動作確認 (display (prime? 27)) (newline) (display (prime? 109)) (newline) ; aからbまでの素数の2乗和 (define (sum a b) (filtered-accumulate + 0 prime? (lambda (x)(* x x)) a (lambda (n)(+ n 1)) b) ) (display (sum 2 10)) (newline) (define (gcd a b) (if (= b 0) a (begin ;(display "gcd(") ;(display a) ;(display ",") ;(display b) ;(display ")") ;(newline) (gcd b (remainder a b)) ) ) ) (gcd 206 40) ; bと互いに素なる整数の積 (define (product a b) (filtered-accumulate * 1 (lambda (x)(= (gcd x b) 1)) (lambda (x) x) a (lambda (n)(+ n 1)) b) ) (display (product 1 10)) (newline) (display (product 1 20)) (newline) ###Output 8729721
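###Markdown The displayed result can be verified by hand. For $n = 20$ the positive integers below $n$ that are coprime to $n$ are exactly the odd numbers not divisible by 5, so

$$\prod_{\substack{1 \le i < 20 \\ \gcd(i,20)=1}} i = 1 \cdot 3 \cdot 7 \cdot 9 \cdot 11 \cdot 13 \cdot 17 \cdot 19 = 8\,729\,721,$$

which matches the output above; the corresponding product for $n = 10$ is $1 \cdot 3 \cdot 7 \cdot 9 = 189$. Note that although the loop runs up to $b$ itself, $\gcd(b,b) = b \ne 1$ for $b > 1$, so the filter automatically restricts the product to $i < b$ as the exercise requires.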
Section 5/.ipynb_checkpoints/Customer Segmentation Solution-checkpoint.ipynb
###Markdown Performing customer segmentation Import the customers.csvThe dataset is obtained from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Wholesale+customers) ###Code # Import libraries necessary for this project import numpy as np import pandas as pd from sklearn.cluster import KMeans from sklearn import metrics from scipy import stats from scipy.spatial.distance import cdist import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Import the data and remove categorical variable ###Code data = pd.read_csv("customers.csv") data.drop(['Region', 'Channel'], axis = 1, inplace = True) # Display a description of the dataset display(data.describe()) # Scatter plot of all the features pd.plotting.scatter_matrix(data, figsize = (16,10)); ###Output _____no_output_____ ###Markdown Since the data is skewed we perform a log transform ###Code # We transform the data using the natural logarithm data_trans = data.apply(lambda x: np.log(x)) # Scatter plot matrix for each pair of transformed features pd.plotting.scatter_matrix(data_trans, figsize = (16,10)); ###Output _____no_output_____ ###Markdown We put the data into a numpy array to use in kmeans ###Code X = data_trans.values # k means determine k distortions = [] K = range(1,7) for k in K: kmeanModel = KMeans(n_clusters=k).fit(X) kmeanModel.fit(X) distortions.append(sum(np.min(cdist(X, kmeanModel.cluster_centers_, 'euclidean'), axis=1)) / X.shape[0]) # Plot the elbow plt.plot(K, distortions, 'bx-') plt.xlabel('k') plt.ylabel('Distortion') plt.title('The Elbow Method showing the optimal k') plt.show() ###Output _____no_output_____ ###Markdown Based on the graph we can choose k=2 or k=4 ###Code kmeans = KMeans(n_clusters=2, random_state=0) kmeans.fit(X) # Scatter plot matrix for each pair of transformed features along the the cluster centers pd.plotting.scatter_matrix(data_trans, figsize = (16,10),c=kmeans.labels_, cmap='rainbow'); centers=kmeans.cluster_centers_ true_centers = np.exp(kmeans.cluster_centers_) true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys()) true_centers ###Output _____no_output_____
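###Markdown The elbow plot above leaves the choice between k=2 and k=4 open. One way to compare the two candidates is the silhouette score; the cell below is a small sketch that reuses the transformed data `X` from this notebook. ###Code
from sklearn.metrics import silhouette_score

# higher silhouette scores indicate better-separated clusters
for k in (2, 4):
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(X)
    print(f"k={k}: silhouette score = {silhouette_score(X, labels):.3f}")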
how-to-use-azureml/training/train-within-notebook/train-within-notebook.ipynb
###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/training/train-within-notebook/train-within-notebook.png) Train and deploy a model_**Create and deploy a model directly from a notebook**_------ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) 1. Viewing run results 1. Simple parameter sweep 1. Viewing experiment results 1. Select the best model1. [Deploy](Deploy) 1. Register the model 1. Create a scoring file 1. Describe your environment 1. Descrice your target compute 1. Deploy your webservice 1. Test your webservice 1. Clean up1. [Next Steps](nextsteps)--- IntroductionAzure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. In this notebook we will* connect to our AML Workspace* create an experiment that contains multiple runs with tracked metrics* choose the best model created across all runs* deploy that model as a serviceIn the end we will have a model deployed as a web service which we can call from an HTTP endpoint --- SetupIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't already to establish your connection to the AzureML Workspace. From the configuration, the important sections are the workspace configuration and ACI regristration.We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```For this notebook we need the Azure ML SDK and access to our workspace. The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace. ###Code import azureml.core from azureml.core import Experiment, Workspace # Check core SDK version number print("This notebook was created using version 1.0.2 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") print("") ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep='\n') ###Output _____no_output_____ ###Markdown --- DataWe will use the diabetes dataset for this experiement, a well-known small dataset that comes with scikit-learn. This cell loads the dataset and splits it into random training and testing sets. ###Code from sklearn.datasets import load_diabetes from sklearn.linear_model import Ridge from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.externals import joblib X, y = load_diabetes(return_X_y = True) columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) data = { "train":{"X": X_train, "y": y_train}, "test":{"X": X_test, "y": y_test} } print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples") ###Output _____no_output_____ ###Markdown --- TrainLet's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. 
An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:* We access an experiment from our AML workspace by name, which will be created if it doesn't exist* We use `start_logging` to create a new run in this experiment* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. We will be able to review and compare these measures in the Azure Portal at a later time.* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.* We use `run.complete()` to indicate that the run is over and results can be captured and finalized ###Code # Get an experiment object from Azure Machine Learning experiment = Experiment(workspace=ws, name="train-within-notebook") # Create a run object in the experiment run = experiment.start_logging() # Log the algorithm parameter alpha to the run run.log('alpha', 0.03) # Create, fit, and test the scikit-learn Ridge regression model regression_model = Ridge(alpha=0.03) regression_model.fit(data['train']['X'], data['train']['y']) preds = regression_model.predict(data['test']['X']) # Output the Mean Squared Error to the notebook and to the run print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds)) run.log('mse', mean_squared_error(data['test']['y'], preds)) # Save the model to the outputs directory for capture model_file_name = 'outputs/model.pkl' joblib.dump(value = regression_model, filename = model_file_name) # upload the model file explicitly into artifacts run.upload_file(name = model_file_name, path_or_stream = model_file_name) # Complete the run run.complete() ###Output _____no_output_____ ###Markdown Viewing run resultsAzure Machine Learning stores all the details about the run in the Azure cloud. Let's access those details by retrieving a link to the run using the default run output. Clicking on the resulting link will take you to an interactive page presenting all run information. ###Code run ###Output _____no_output_____ ###Markdown Simple parameter sweepNow let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy. 
Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.This example also uses the **tqdm** library to provide a thermometer feedback ###Code import numpy as np from tqdm import tqdm # list of numbers from 0 to 1.0 with a 0.05 interval alphas = np.arange(0.0, 1.0, 0.05) # try a bunch of alpha values in a Linear Regression (Ridge) model for alpha in tqdm(alphas): # create a bunch of runs, each train a model with a different alpha value with experiment.start_logging() as run: # Use Ridge algorithm to build a regression model regression_model = Ridge(alpha=alpha) regression_model.fit(X=data["train"]["X"], y=data["train"]["y"]) preds = regression_model.predict(X=data["test"]["X"]) mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds) # log alpha, mean_squared_error and feature names in run history run.log(name="alpha", value=alpha) run.log(name="mse", value=mse) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Viewing experiment resultsSimilar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. This way, we can see how the alpha parameter impacts the quality of the model ###Code # now let's take a look at the experiment in Azure portal. experiment ###Output _____no_output_____ ###Markdown Select the best model Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`. Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case, **mse**. To find the best run, we create a dictionary mapping the run id's to the metrics.Finally, we use the `tag` method to mark the best run to make it easier to find later. ###Code runs = {} run_metrics = {} # Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric for r in tqdm(experiment.get_runs()): metrics = r.get_metrics() if 'mse' in metrics.keys(): runs[r.id] = r run_metrics[r.id] = metrics # Find the run with the best (lowest) mean squared error and display the id and metrics best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse']) best_run = runs[best_run_id] print('Best run is:', best_run_id) print('Metrics:', run_metrics[best_run_id]) # Tag the best run for identification later best_run.tag("Best Run") ###Output _____no_output_____ ###Markdown --- DeployNow that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inference. The process of deploying a model involves* registering a model in your workspace* creating a scoring file containing init and run methods* creating an environment dependency file describing packages necessary for your scoring file* deploying the model and packages as a web service Register a modelWe have already identified which run contains the "best model" by our evaluation criteria. Each run has a file structure associated with it that contains various files collected during the run. 
Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()` ###Code # View the files in the run for f in best_run.get_file_names(): print(f) # Register the model with the workspace model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` lists all models in a workspace, and can be filtered by name, tags, or model properties. ###Code # Find all models called "best_model" and display their version numbers from azureml.core.model import Model models = Model.list(ws, name='best_model') for m in models: print(m.name, m.version) ###Output _____no_output_____ ###Markdown Create a scoring fileSince your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply the model to new data. This script is your 'scoring file'. This scoring file is a python program containing, at a minimum, two methods `init()` and `run()`. The `init()` method is called once when your deployment is started so you can load your model and any other required objects. This method uses the `get_model_path` function to locate the registered model inside the docker container. The `run()` method is called interactively when the web service is called with one or more data samples to predict.The scoring file used for this exercise is [here](score.py). Describe your environmentEach modelling process may require a unique set of packages. Therefore we need to create an environment object describing the dependencies. Next we create an inference configuration using this environment object and the scoring script that we created previously. ###Code from azureml.core.conda_dependencies import CondaDependencies from azureml.core.environment import Environment from azureml.core.model import InferenceConfig env = Environment('deploytocloudenv') env.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'],pip_packages=['azureml-defaults']) inference_config = InferenceConfig(entry_script="score.py", environment=env) ###Output _____no_output_____ ###Markdown Describe your target computeIn addition to the inference configuration, we also need to describe the type of compute we want to allocate for our webservice. In in this example we are using an [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/) which is a good choice for quick and cost-effective dev/test deployment scenarios. ACI instances require the number of cores you want to run and memory you need. 
Tags and descriptions are available for you to identify the instances in AML when viewing the Compute tab in the AML Portal.For production workloads, it is better to use [Azure Kubernentes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Try [this notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. ###Code from azureml.core.webservice import AciWebservice aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, tags={'sample name': 'AML 101'}, description='This is a great example.') ###Output _____no_output_____ ###Markdown Deploy your webserviceThe final step to deploying your webservice is to call `Model.deploy()`. This function uses the deployment and inference configurations created above to perform the following:* Build a docker image* Deploy to the docker image to an Azure Container Instance* Copy your model files to the Azure Container Instance* Call the `init()` function in your scoring file* Provide an HTTP endpoint for scoring callsThe `Model.deploy` method requires the following parameters* `workspace` - the workspace containing the service* `name` - a unique named used to identify the service in the workspace* `models` - an array of models to be deployed into the container* `inference_config` - a configuration object describing the image environment* `deployment_config` - a configuration object describing the compute type **Note:** The web service creation can take several minutes. ###Code %%time from azureml.core.model import Model from azureml.core.webservice import Webservice # Create the webservice using all of the precreated configurations and our best model service = Model.deploy(workspace=ws, name='my-aci-svc', models=[model], inference_config=inference_config, deployment_config=aciconfig) # Wait for the service deployment to complete while displaying log output service.wait_for_deployment(show_output=True) ###Output _____no_output_____ ###Markdown Test your webservice Now that your web service is runing you can send JSON data directly to the service using the `run` method. This cell pulls the first test sample from the original dataset into JSON and then sends it to the service. ###Code import json service = ws.webservices['my-aci-svc'] # scrape the first row from the test set. test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) #score on our service service.run(input_data = test_samples) ###Output _____no_output_____ ###Markdown This cell shows how you can send multiple rows to the webservice at once. It then calculates the residuals - that is, the errors - by subtracting out the actual values from the results. These residuals are used later to show a plotted result. ###Code # score the entire test set. test_samples = json.dumps({'data': X_test.tolist()}) result = service.run(input_data = test_samples) residual = result - y_test ###Output _____no_output_____ ###Markdown This cell shows how you can use the `service.scoring_uri` property to access the HTTP endpoint of the service and call it using standard POST operations. 
###Code import requests # use the first row from the test set again test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) # create the required header headers = {'Content-Type':'application/json'} # post the request to the service and display the result resp = requests.post(service.scoring_uri, test_samples, headers = headers) print(resp.text) ###Output _____no_output_____ ###Markdown Residual graphOne way to understand the behavior of your model is to see how the data performs against data with known results. This cell uses matplotlib to create a histogram of the residual values, or errors, created from scoring the test samples.A good model should have residual values that cluster around 0 - that is, no error. Observing the resulting histogram can also show you if the model is skewed in any particular direction. ###Code %matplotlib inline import matplotlib.pyplot as plt f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0}) f.suptitle('Residual Values', fontsize = 18) f.set_figheight(6) f.set_figwidth(14) a0.plot(residual, 'bo', alpha=0.4) a0.plot([0,90], [0,0], 'r', lw=2) a0.set_ylabel('residue values', fontsize=14) a0.set_xlabel('test data set', fontsize=14) a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step') a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10) a1.set_yticklabels([]) plt.show() ###Output _____no_output_____ ###Markdown Clean up Delete the ACI instance to stop the compute and any associated billing. ###Code %%time service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Train and deploy a model_**Create and deploy a model directly from a notebook**_------ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) 1. Viewing run results 1. Simple parameter sweep 1. Viewing experiment results 1. Select the best model1. [Deploy](Deploy) 1. Register the model 1. Create a scoring file 1. Describe your environment 1. Descrice your target compute 1. Deploy your webservice 1. Test your webservice 1. Clean up1. [Next Steps](Next%20Steps)--- IntroductionAzure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. In this notebook we will* connect to our AML Workspace* create an experiment that contains multiple runs with tracked metrics* choose the best model created across all runs* deploy that model as a serviceIn the end we will have a model deployed as a web service which we can call from an HTTP endpoint --- SetupMake sure you have completed the [Configuration](../../../configuration.ipnyb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met. From the configuration, the important sections are the workspace configuration and ACI regristration.We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```For this notebook we need the Azure ML SDK and access to our workspace. The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace. 
###Code import azureml.core from azureml.core import Experiment, Workspace # Check core SDK version number print("This notebook was created using version 1.0.15 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") print("") ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep='\n') ###Output _____no_output_____ ###Markdown --- DataWe will use the diabetes dataset for this experiement, a well-known small dataset that comes with scikit-learn. This cell loads the dataset and splits it into random training and testing sets. ###Code from sklearn.datasets import load_diabetes from sklearn.linear_model import Ridge from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.externals import joblib X, y = load_diabetes(return_X_y = True) columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) data = { "train":{"X": X_train, "y": y_train}, "test":{"X": X_test, "y": y_test} } print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples") ###Output _____no_output_____ ###Markdown --- TrainLet's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:* We access an experiment from our AML workspace by name, which will be created if it doesn't exist* We use `start_logging` to create a new run in this experiment* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. We will be able to review and compare these measures in the Azure Portal at a later time.* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.* We use `run.take_snapshot()` to capture *this* notebook so we can reproduce this experiment at a later time.* We use `run.complete()` to indicate that the run is over and results can be captured and finalized ###Code # Get an experiment object from Azure Machine Learning experiment = Experiment(workspace=ws, name="train-within-notebook") # Create a run object in the experiment run = experiment.start_logging() # Log the algorithm parameter alpha to the run run.log('alpha', 0.03) # Create, fit, and test the scikit-learn Ridge regression model regression_model = Ridge(alpha=0.03) regression_model.fit(data['train']['X'], data['train']['y']) preds = regression_model.predict(data['test']['X']) # Output the Mean Squared Error to the notebook and to the run print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds)) run.log('mse', mean_squared_error(data['test']['y'], preds)) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') # Take a snapshot of the directory containing this notebook run.take_snapshot('./') # Complete the run run.complete() ###Output _____no_output_____ ###Markdown Viewing run resultsAzure Machine Learning stores all the details about the run in the Azure cloud. Let's access those details by retrieving a link to the run using the default run output. 
Clicking on the resulting link will take you to an interactive page presenting all run information. ###Code run ###Output _____no_output_____ ###Markdown Simple parameter sweepNow let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy. Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.This example also uses the **tqdm** library to provide a thermometer feedback ###Code import numpy as np from tqdm import tqdm model_name = "model.pkl" # list of numbers from 0 to 1.0 with a 0.05 interval alphas = np.arange(0.0, 1.0, 0.05) # try a bunch of alpha values in a Linear Regression (Ridge) model for alpha in tqdm(alphas): # create a bunch of runs, each train a model with a different alpha value with experiment.start_logging() as run: # Use Ridge algorithm to build a regression model regression_model = Ridge(alpha=alpha) regression_model.fit(X=data["train"]["X"], y=data["train"]["y"]) preds = regression_model.predict(X=data["test"]["X"]) mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds) # log alpha, mean_squared_error and feature names in run history run.log(name="alpha", value=alpha) run.log(name="mse", value=mse) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') # Capture this notebook with the run run.take_snapshot('./') ###Output _____no_output_____ ###Markdown Viewing experiment resultsSimilar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. This way, we can see how the alpha parameter impacts the quality of the model ###Code # now let's take a look at the experiment in Azure portal. experiment ###Output _____no_output_____ ###Markdown Select the best model Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`. Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case, **mse**. To find the best run, we create a dictionary mapping the run id's to the metrics.Finally, we use the `tag` method to mark the best run to make it easier to find later. ###Code runs = {} run_metrics = {} # Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric for r in tqdm(experiment.get_runs()): metrics = r.get_metrics() if 'mse' in metrics.keys(): runs[r.id] = r run_metrics[r.id] = metrics # Find the run with the best (lowest) mean squared error and display the id and metrics best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse']) best_run = runs[best_run_id] print('Best run is:', best_run_id) print('Metrics:', run_metrics[best_run_id]) # Tag the best run for identification later best_run.tag("Best Run") ###Output _____no_output_____ ###Markdown --- DeployNow that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inferencing. 
The process of deploying a model involves* registering a model in your workspace* creating a scoring file containing init and run methods* creating an environment dependency file describing packages necessary for your scoring file* creating a docker image containing a properly described environment, your model, and your scoring file* deploying that docker image as a web service Register a modelWe have already identified which run contains the "best model" by our evaluation criteria. Each run has a file structure associated with it that contains various files collected during the run. Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()` ###Code # View the files in the run for f in best_run.get_file_names(): print(f) # Register the model with the workspace model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` lists all models in a workspace, and can be filtered by name, tags, or model properties. ###Code # Find all models called "best_model" and display their version numbers from azureml.core.model import Model models = Model.list(ws, name='best_model') for m in models: print(m.name, m.version) ###Output _____no_output_____ ###Markdown Create a scoring fileSince your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply the model to new data. This script is your 'scoring file'. This scoring file is a python program containing, at a minimum, two methods `init()` and `run()`. The `init()` method is called once when your deployment is started so you can load your model and any other required objects. This method uses the `get_model_path` function to locate the registered model inside the docker container. The `run()` method is called interactively when the web service is called with one or more data samples to predict.The scoring file used for this exercise is [here](score.py). Describe your environmentEach modelling process may require a unique set of packages. Therefore we need to create a dependency file providing instructions to AML on how to contstruct a docker image that can support the models and any other objects required for inferencing. In the following cell, we create a environment dependency file, *myenv.yml* that specifies which libraries are needed by the scoring script. You can create this file manually, or use the `CondaDependencies` class to create it for you.Next we use this environment file to describe the docker container that we need to create in order to deploy our model. This container is created using our environment description and includes our scoring script. 
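Before building that environment, it may help to see what a scoring script along the lines described above could look like. The sketch below is not necessarily the exact contents of the score.py shipped with this sample; it only illustrates the `init()`/`run()` pattern, and it assumes the model was registered under the name 'best_model' as done earlier in this notebook.

```python
import json
import numpy as np
from sklearn.externals import joblib
from azureml.core.model import Model

def init():
    # Called once when the container starts: locate and load the registered model
    global model
    model_path = Model.get_model_path('best_model')
    model = joblib.load(model_path)

def run(raw_data):
    # Called for every scoring request: parse the JSON payload, predict, and return a list
    try:
        data = np.array(json.loads(raw_data)['data'])
        result = model.predict(data)
        return result.tolist()
    except Exception as e:
        return json.dumps({'error': str(e)})
```

The `{"data": [[...]]}` payload shape assumed here matches the JSON built later in the "Test your webservice" cells.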
###Code from azureml.core.conda_dependencies import CondaDependencies from azureml.core.image import ContainerImage # Create an empty conda environment and add the scikit-learn package env = CondaDependencies() env.add_conda_package("scikit-learn") # Display the environment print(env.serialize_to_string()) # Write the environment to disk with open("myenv.yml","w") as f: f.write(env.serialize_to_string()) # Create a configuration object indicating how our deployment container needs to be created image_config = ContainerImage.image_configuration(execution_script="score.py", runtime="python", conda_file="myenv.yml") ###Output _____no_output_____ ###Markdown Describe your target computeIn addition to the container, we also need to describe the type of compute we want to allocate for our webservice. In this example we are using an [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/) which is a good choice for quick and cost-effective dev/test deployment scenarios. ACI instances require the number of cores and the amount of memory you need. Tags and descriptions are available for you to identify the instances in AML when viewing the Compute tab in the AML Portal. For production workloads, it is better to use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Try [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. ###Code from azureml.core.webservice import AciWebservice aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, tags={'sample name': 'AML 101'}, description='This is a great example.') ###Output _____no_output_____ ###Markdown Deploy your webserviceThe final step to deploying your webservice is to call `Webservice.deploy_from_model()`. This function uses the deployment and image configurations created above to perform the following:* Build a docker image* Deploy the docker image to an Azure Container Instance* Copy your model files to the Azure Container Instance* Call the `init()` function in your scoring file* Provide an HTTP endpoint for scoring callsThe `deploy_from_model` method requires the following parameters* `workspace` - the workspace containing the service* `name` - a unique name used to identify the service in the workspace* `models` - an array of models to be deployed into the container* `image_config` - a configuration object describing the image environment* `deployment_config` - a configuration object describing the compute type **Note:** The web service creation can take several minutes. ###Code %%time from azureml.core.webservice import Webservice # Create the webservice using all of the precreated configurations and our best model service = Webservice.deploy_from_model(name='my-aci-svc', deployment_config=aciconfig, models=[model], image_config=image_config, workspace=ws) # Wait for the service deployment to complete while displaying log output service.wait_for_deployment(show_output=True) ###Output _____no_output_____ ###Markdown Test your webservice Now that your web service is running you can send JSON data directly to the service using the `run` method. This cell pulls the first test sample from the original dataset into JSON and then sends it to the service. ###Code import json # scrape the first row from the test set.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) #score on our service service.run(input_data = test_samples) ###Output _____no_output_____ ###Markdown This cell shows how you can send multiple rows to the webservice at once. It then calculates the residuals - that is, the errors - by subtracting out the actual values from the results. These residuals are used later to show a plotted result. ###Code # score the entire test set. test_samples = json.dumps({'data': X_test.tolist()}) result = service.run(input_data = test_samples) residual = result - y_test ###Output _____no_output_____ ###Markdown This cell shows how you can use the `service.scoring_uri` property to access the HTTP endpoint of the service and call it using standard POST operations. ###Code import requests # use the first row from the test set again test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) # create the required header headers = {'Content-Type':'application/json'} # post the request to the service and display the result resp = requests.post(service.scoring_uri, test_samples, headers = headers) print(resp.text) ###Output _____no_output_____ ###Markdown Residual graphOne way to understand the behavior of your model is to see how the data performs against data with known results. This cell uses matplotlib to create a histogram of the residual values, or errors, created from scoring the test samples.A good model should have residual values that cluster around 0 - that is, no error. Observing the resulting histogram can also show you if the model is skewed in any particular direction. ###Code %matplotlib inline import matplotlib.pyplot as plt f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0}) f.suptitle('Residual Values', fontsize = 18) f.set_figheight(6) f.set_figwidth(14) a0.plot(residual, 'bo', alpha=0.4) a0.plot([0,90], [0,0], 'r', lw=2) a0.set_ylabel('residue values', fontsize=14) a0.set_xlabel('test data set', fontsize=14) a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step') a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10) a1.set_yticklabels([]) plt.show() ###Output _____no_output_____ ###Markdown Clean up Delete the ACI instance to stop the compute and any associated billing. ###Code %%time service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/training/train-within-notebook/train-within-notebook.png) Train and deploy a model_**Create and deploy a model directly from a notebook**_------ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) 1. Viewing run results 1. Simple parameter sweep 1. Viewing experiment results 1. Select the best model1. [Deploy](Deploy) 1. Register the model 1. Create a scoring file 1. Describe your environment 1. Descrice your target compute 1. Deploy your webservice 1. Test your webservice 1. Clean up1. [Next Steps](nextsteps)--- IntroductionAzure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. 
In this notebook we will* connect to our AML Workspace* create an experiment that contains multiple runs with tracked metrics* choose the best model created across all runs* deploy that model as a serviceIn the end we will have a model deployed as a web service which we can call from an HTTP endpoint --- SetupIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't already to establish your connection to the AzureML Workspace. From the configuration, the important sections are the workspace configuration and ACI regristration.We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```For this notebook we need the Azure ML SDK and access to our workspace. The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace. ###Code import azureml.core from azureml.core import Experiment, Workspace # Check core SDK version number print("This notebook was created using version 1.0.2 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") print("") ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep='\n') ###Output _____no_output_____ ###Markdown --- DataWe will use the diabetes dataset for this experiement, a well-known small dataset that comes with scikit-learn. This cell loads the dataset and splits it into random training and testing sets. ###Code from sklearn.datasets import load_diabetes from sklearn.linear_model import Ridge from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.externals import joblib X, y = load_diabetes(return_X_y = True) columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) data = { "train":{"X": X_train, "y": y_train}, "test":{"X": X_test, "y": y_test} } print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples") ###Output _____no_output_____ ###Markdown --- TrainLet's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:* We access an experiment from our AML workspace by name, which will be created if it doesn't exist* We use `start_logging` to create a new run in this experiment* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. 
We will be able to review and compare these measures in the Azure Portal at a later time.* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.* We use `run.complete()` to indicate that the run is over and results can be captured and finalized ###Code # Get an experiment object from Azure Machine Learning experiment = Experiment(workspace=ws, name="train-within-notebook") # Create a run object in the experiment run = experiment.start_logging() # Log the algorithm parameter alpha to the run run.log('alpha', 0.03) # Create, fit, and test the scikit-learn Ridge regression model regression_model = Ridge(alpha=0.03) regression_model.fit(data['train']['X'], data['train']['y']) preds = regression_model.predict(data['test']['X']) # Output the Mean Squared Error to the notebook and to the run print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds)) run.log('mse', mean_squared_error(data['test']['y'], preds)) # Save the model to the outputs directory for capture model_file_name = 'outputs/model.pkl' joblib.dump(value = regression_model, filename = model_file_name) # upload the model file explicitly into artifacts run.upload_file(name = model_file_name, path_or_stream = model_file_name) # Complete the run run.complete() ###Output _____no_output_____ ###Markdown Viewing run resultsAzure Machine Learning stores all the details about the run in the Azure cloud. Let's access those details by retrieving a link to the run using the default run output. Clicking on the resulting link will take you to an interactive page presenting all run information. ###Code run ###Output _____no_output_____ ###Markdown Simple parameter sweepNow let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy. Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.This example also uses the **tqdm** library to provide a thermometer feedback ###Code import numpy as np from tqdm import tqdm # list of numbers from 0 to 1.0 with a 0.05 interval alphas = np.arange(0.0, 1.0, 0.05) # try a bunch of alpha values in a Linear Regression (Ridge) model for alpha in tqdm(alphas): # create a bunch of runs, each train a model with a different alpha value with experiment.start_logging() as run: # Use Ridge algorithm to build a regression model regression_model = Ridge(alpha=alpha) regression_model.fit(X=data["train"]["X"], y=data["train"]["y"]) preds = regression_model.predict(X=data["test"]["X"]) mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds) # log alpha, mean_squared_error and feature names in run history run.log(name="alpha", value=alpha) run.log(name="mse", value=mse) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Viewing experiment resultsSimilar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. This way, we can see how the alpha parameter impacts the quality of the model ###Code # now let's take a look at the experiment in Azure portal. 
experiment ###Output _____no_output_____ ###Markdown Select the best model Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`. Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case, **mse**. To find the best run, we create a dictionary mapping the run id's to the metrics.Finally, we use the `tag` method to mark the best run to make it easier to find later. ###Code runs = {} run_metrics = {} # Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric for r in tqdm(experiment.get_runs()): metrics = r.get_metrics() if 'mse' in metrics.keys(): runs[r.id] = r run_metrics[r.id] = metrics # Find the run with the best (lowest) mean squared error and display the id and metrics best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse']) best_run = runs[best_run_id] print('Best run is:', best_run_id) print('Metrics:', run_metrics[best_run_id]) # Tag the best run for identification later best_run.tag("Best Run") ###Output _____no_output_____ ###Markdown --- DeployNow that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inference. The process of deploying a model involves* registering a model in your workspace* creating a scoring file containing init and run methods* creating an environment dependency file describing packages necessary for your scoring file* creating a docker image containing a properly described environment, your model, and your scoring file* deploying that docker image as a web service Register a modelWe have already identified which run contains the "best model" by our evaluation criteria. Each run has a file structure associated with it that contains various files collected during the run. Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()` ###Code # View the files in the run for f in best_run.get_file_names(): print(f) # Register the model with the workspace model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` lists all models in a workspace, and can be filtered by name, tags, or model properties. 
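Before listing the registered versions in the next cell, note that registering from a run is not the only route: a model file already on disk can be registered directly with `Model.register`. The sketch below is illustrative only; it reuses the same outputs/model.pkl path and 'best_model' name from this notebook, which would simply create a new version of that model.

```python
from azureml.core.model import Model

# Register a local model file directly, without going through a run
local_model = Model.register(workspace=ws,
                             model_path='outputs/model.pkl',
                             model_name='best_model',
                             description='Registered directly from a local file')
print(local_model.name, local_model.version)
```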
###Code # Find all models called "best_model" and display their version numbers from azureml.core.model import Model models = Model.list(ws, name='best_model') for m in models: print(m.name, m.version) ###Output _____no_output_____ ###Markdown Create a scoring fileSince your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply the model to new data. This script is your 'scoring file'. This scoring file is a python program containing, at a minimum, two methods `init()` and `run()`. The `init()` method is called once when your deployment is started so you can load your model and any other required objects. This method uses the `get_model_path` function to locate the registered model inside the docker container. The `run()` method is called interactively when the web service is called with one or more data samples to predict.The scoring file used for this exercise is [here](score.py). Describe your environmentEach modelling process may require a unique set of packages. Therefore we need to create a dependency file providing instructions to AML on how to construct a docker image that can support the models and any other objects required for inference. In the following cell, we create a environment dependency file, *myenv.yml* that specifies which libraries are needed by the scoring script. You can create this file manually, or use the `CondaDependencies` class to create it for you.Next we use this environment file to describe the docker container that we need to create in order to deploy our model. This container is created using our environment description and includes our scoring script. ###Code from azureml.core.conda_dependencies import CondaDependencies from azureml.core.image import ContainerImage # Create an empty conda environment and add the scikit-learn package env = CondaDependencies() env.add_conda_package("scikit-learn") # Display the environment print(env.serialize_to_string()) # Write the environment to disk with open("myenv.yml","w") as f: f.write(env.serialize_to_string()) # Create a configuration object indicating how our deployment container needs to be created image_config = ContainerImage.image_configuration(execution_script="score.py", runtime="python", conda_file="myenv.yml") ###Output _____no_output_____ ###Markdown Describe your target computeIn addition to the container, we also need to describe the type of compute we want to allocate for our webservice. In in this example we are using an [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/) which is a good choice for quick and cost-effective dev/test deployment scenarios. ACI instances require the number of cores you want to run and memory you need. Tags and descriptions are available for you to identify the instances in AML when viewing the Compute tab in the AML Portal.For production workloads, it is better to use [Azure Kubernentes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Try [this notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. 
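As a rough illustration of the AKS route mentioned above, a deployment to an already-attached AKS cluster might look like the sketch below. The cluster name 'my-aks-cluster' and service name 'my-aks-svc' are hypothetical, and the sketch reuses the `image_config` and `model` objects created earlier in this notebook; the linked notebook remains the authoritative walkthrough.

```python
from azureml.core.compute import AksCompute
from azureml.core.webservice import AksWebservice, Webservice

# Hypothetical name of an AKS cluster already attached to this workspace
aks_target = AksCompute(ws, 'my-aks-cluster')

# Per-replica resource requests for the AKS-hosted service
aks_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

aks_service = Webservice.deploy_from_model(workspace=ws,
                                           name='my-aks-svc',
                                           models=[model],
                                           image_config=image_config,
                                           deployment_config=aks_config,
                                           deployment_target=aks_target)
aks_service.wait_for_deployment(show_output=True)
```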
###Code from azureml.core.webservice import AciWebservice aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, tags={'sample name': 'AML 101'}, description='This is a great example.') ###Output _____no_output_____ ###Markdown Deploy your webserviceThe final step to deploying your webservice is to call `WebService.deploy_from_model()`. This function uses the deployment and image configurations created above to perform the following:* Build a docker image* Deploy to the docker image to an Azure Container Instance* Copy your model files to the Azure Container Instance* Call the `init()` function in your scoring file* Provide an HTTP endpoint for scoring callsThe `deploy_from_model` method requires the following parameters* `workspace` - the workspace containing the service* `name` - a unique named used to identify the service in the workspace* `models` - an array of models to be deployed into the container* `image_config` - a configuration object describing the image environment* `deployment_config` - a configuration object describing the compute type **Note:** The web service creation can take several minutes. ###Code %%time from azureml.core.webservice import Webservice # Create the webservice using all of the precreated configurations and our best model service = Webservice.deploy_from_model(name='my-aci-svc', deployment_config=aciconfig, models=[model], image_config=image_config, workspace=ws) # Wait for the service deployment to complete while displaying log output service.wait_for_deployment(show_output=True) ###Output _____no_output_____ ###Markdown Test your webservice Now that your web service is runing you can send JSON data directly to the service using the `run` method. This cell pulls the first test sample from the original dataset into JSON and then sends it to the service. ###Code import json service = ws.webservices['my-aci-svc'] # scrape the first row from the test set. test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) #score on our service service.run(input_data = test_samples) ###Output _____no_output_____ ###Markdown This cell shows how you can send multiple rows to the webservice at once. It then calculates the residuals - that is, the errors - by subtracting out the actual values from the results. These residuals are used later to show a plotted result. ###Code # score the entire test set. test_samples = json.dumps({'data': X_test.tolist()}) result = service.run(input_data = test_samples) residual = result - y_test ###Output _____no_output_____ ###Markdown This cell shows how you can use the `service.scoring_uri` property to access the HTTP endpoint of the service and call it using standard POST operations. ###Code import requests # use the first row from the test set again test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) # create the required header headers = {'Content-Type':'application/json'} # post the request to the service and display the result resp = requests.post(service.scoring_uri, test_samples, headers = headers) print(resp.text) ###Output _____no_output_____ ###Markdown Residual graphOne way to understand the behavior of your model is to see how the data performs against data with known results. This cell uses matplotlib to create a histogram of the residual values, or errors, created from scoring the test samples.A good model should have residual values that cluster around 0 - that is, no error. Observing the resulting histogram can also show you if the model is skewed in any particular direction. 
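Alongside the plot produced by the next cell, it can help to reduce the residuals to a few summary numbers. This minimal sketch assumes only the `residual` array computed in the scoring cell above.

```python
import numpy as np

# Quick numeric summary of the prediction errors on the test set
print('mean residual:', np.mean(residual))
print('RMSE:         ', np.sqrt(np.mean(np.square(residual))))
print('max |error|:  ', np.max(np.abs(residual)))
```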
###Code %matplotlib inline import matplotlib.pyplot as plt f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0}) f.suptitle('Residual Values', fontsize = 18) f.set_figheight(6) f.set_figwidth(14) a0.plot(residual, 'bo', alpha=0.4) a0.plot([0,90], [0,0], 'r', lw=2) a0.set_ylabel('residue values', fontsize=14) a0.set_xlabel('test data set', fontsize=14) a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step') a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10) a1.set_yticklabels([]) plt.show() ###Output _____no_output_____ ###Markdown Clean up Delete the ACI instance to stop the compute and any associated billing. ###Code %%time service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Train and deploy a model_**Create and deploy a model directly from a notebook**_------ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) 1. Viewing run results 1. Simple parameter sweep 1. Viewing experiment results 1. Select the best model1. [Deploy](Deploy) 1. Register the model 1. Create a scoring file 1. Describe your environment 1. Descrice your target compute 1. Deploy your webservice 1. Test your webservice 1. Clean up1. [Next Steps](Next%20Steps)--- IntroductionAzure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. In this notebook we will* connect to our AML Workspace* create an experiment that contains multiple runs with tracked metrics* choose the best model created across all runs* deploy that model as a serviceIn the end we will have a model deployed as a web service which we can call from an HTTP endpoint --- SetupMake sure you have completed the [Configuration](../../../configuration.ipnyb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met. From the configuration, the important sections are the workspace configuration and ACI regristration.We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```For this notebook we need the Azure ML SDK and access to our workspace. The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace. ###Code import azureml.core from azureml.core import Experiment, Workspace # Check core SDK version number print("This notebook was created using version 1.0.2 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") print("") ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep='\n') ###Output _____no_output_____ ###Markdown --- DataWe will use the diabetes dataset for this experiement, a well-known small dataset that comes with scikit-learn. This cell loads the dataset and splits it into random training and testing sets. 
###Code from sklearn.datasets import load_diabetes from sklearn.linear_model import Ridge from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.externals import joblib X, y = load_diabetes(return_X_y = True) columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) data = { "train":{"X": X_train, "y": y_train}, "test":{"X": X_test, "y": y_test} } print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples") ###Output _____no_output_____ ###Markdown --- TrainLet's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:* We access an experiment from our AML workspace by name, which will be created if it doesn't exist* We use `start_logging` to create a new run in this experiment* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. We will be able to review and compare these measures in the Azure Portal at a later time.* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.* We use `run.take_snapshot()` to capture *this* notebook so we can reproduce this experiment at a later time.* We use `run.complete()` to indicate that the run is over and results can be captured and finalized ###Code # Get an experiment object from Azure Machine Learning experiment = Experiment(workspace=ws, name="train-within-notebook") # Create a run object in the experiment run = experiment.start_logging()# Log the algorithm parameter alpha to the run run.log('alpha', 0.03) # Create, fit, and test the scikit-learn Ridge regression model regression_model = Ridge(alpha=0.03) regression_model.fit(data['train']['X'], data['train']['y']) preds = regression_model.predict(data['test']['X']) # Output the Mean Squared Error to the notebook and to the run print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds)) run.log('mse', mean_squared_error(data['test']['y'], preds)) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') # Take a snapshot of the directory containing this notebook run.take_snapshot('./') # Complete the run run.complete() ###Output _____no_output_____ ###Markdown Viewing run resultsAzure Machine Learning stores all the details about the run in the Azure cloud. Let's access those details by retrieving a link to the run using the default run output. Clicking on the resulting link will take you to an interactive page presenting all run information. ###Code run ###Output _____no_output_____ ###Markdown Simple parameter sweepNow let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy. 
Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.This example also uses the **tqdm** library to provide a thermometer feedback ###Code import numpy as np import os from tqdm import tqdm model_name = "model.pkl" # list of numbers from 0 to 1.0 with a 0.05 interval alphas = np.arange(0.0, 1.0, 0.05) # try a bunch of alpha values in a Linear Regression (Ridge) model for alpha in tqdm(alphas): # create a bunch of runs, each train a model with a different alpha value with experiment.start_logging() as run: # Use Ridge algorithm to build a regression model regression_model = Ridge(alpha=alpha) regression_model.fit(X=data["train"]["X"], y=data["train"]["y"]) preds = regression_model.predict(X=data["test"]["X"]) mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds) # log alpha, mean_squared_error and feature names in run history run.log(name="alpha", value=alpha) run.log(name="mse", value=mse) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') # Capture this notebook with the run run.take_snapshot('./') ###Output _____no_output_____ ###Markdown Viewing experiment resultsSimilar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. This way, we can see how the alpha parameter impacts the quality of the model ###Code # now let's take a look at the experiment in Azure portal. experiment ###Output _____no_output_____ ###Markdown Select the best model Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`. Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case, **mse**. To find the best run, we create a dictionary mapping the run id's to the metrics.Finally, we use the `tag` method to mark the best run to make it easier to find later. ###Code runs = {} run_metrics = {} # Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric for r in tqdm(experiment.get_runs()): metrics = r.get_metrics() if 'mse' in metrics.keys(): runs[r.id] = r run_metrics[r.id] = metrics # Find the run with the best (lowest) mean squared error and display the id and metrics best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse']) best_run = runs[best_run_id] print('Best run is:', best_run_id) print('Metrics:', run_metrics[best_run_id]) # Tag the best run for identification later best_run.tag("Best Run") ###Output _____no_output_____ ###Markdown --- DeployNow that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inferencing. The process of deploying a model involves* registering a model in your workspace* creating a scoring file containing init and run methods* creating an environment dependency file describing packages necessary for your scoring file* creating a docker image containing a properly described environment, your model, and your scoring file* deploying that docker image as a web service Register a modelWe have already identified which run contains the "best model" by our evaluation criteria. 
Each run has a file structure associated with it that contains various files collected during the run. Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()` ###Code # View the files in the run for f in best_run.get_file_names(): print(f) # Register the model with the workspace model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` lists all models in a workspace, and can be filtered by name, tags, or model properties. ###Code # Find all models called "best_model" and display their version numbers from azureml.core.model import Model models = Model.list(ws, name='best_model') for m in models: print(m.name, m.version) ###Output _____no_output_____ ###Markdown Create a scoring fileSince your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply the model to new data. This script is your 'scoring file'. This scoring file is a python program containing, at a minimum, two methods `init()` and `run()`. The `init()` method is called once when your deployment is started so you can load your model and any other required objects. This method uses the `get_model_path` function to locate the registered model inside the docker container. The `run()` method is called interactively when the web service is called with one or more data samples to predict.The scoring file used for this exercise is [here](score.py). Describe your environmentEach modelling process may require a unique set of packages. Therefore we need to create a dependency file providing instructions to AML on how to contstruct a docker image that can support the models and any other objects required for inferencing. In the following cell, we create a environment dependency file, *myenv.yml* that specifies which libraries are needed by the scoring script. You can create this file manually, or use the `CondaDependencies` class to create it for you.Next we use this environment file to describe the docker container that we need to create in order to deploy our model. This container is created using our environment description and includes our scoring script. 
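If the scoring script ever needed packages that come from pip rather than conda, the same `CondaDependencies` object can carry those as well. The extra pip package named below is purely illustrative; the cell that follows sticks to the scikit-learn-only environment this notebook actually uses.

```python
from azureml.core.conda_dependencies import CondaDependencies

# Illustrative variant: one conda package plus one (hypothetical) extra pip package
env = CondaDependencies()
env.add_conda_package("scikit-learn")
env.add_pip_package("azureml-defaults")

with open("myenv.yml", "w") as f:
    f.write(env.serialize_to_string())
```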
###Code from azureml.core.conda_dependencies import CondaDependencies from azureml.core.image import ContainerImage # Create an empty conda environment and add the scikit-learn package env = CondaDependencies() env.add_conda_package("scikit-learn") # Display the environment print(env.serialize_to_string()) # Write the environment to disk with open("myenv.yml","w") as f: f.write(env.serialize_to_string()) # Create a configuration object indicating how our deployment container needs to be created image_config = ContainerImage.image_configuration(execution_script="score.py", runtime="python", conda_file="myenv.yml") ###Output _____no_output_____ ###Markdown Describe your target computeIn addition to the container, we also need to describe the type of compute we want to allocate for our webservice. In in this example we are using an [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/) which is a good choice for quick and cost-effective dev/test deployment scenarios. ACI instances require the number of cores you want to run and memory you need. Tags and descriptions are available for you to identify the instances in AML when viewing the Compute tab in the AML Portal.For production workloads, it is better to use [Azure Kubernentes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Try [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. ###Code from azureml.core.webservice import AciWebservice aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, tags={'sample name': 'AML 101'}, description='This is a great example.') ###Output _____no_output_____ ###Markdown Deploy your webserviceThe final step to deploying your webservice is to call `WebService.deploy_from_model()`. This function uses the deployment and image configurations created above to perform the following:* Build a docker image* Deploy to the docker image to an Azure Container Instance* Copy your model files to the Azure Container Instance* Call the `init()` function in your scoring file* Provide an HTTP endpoint for scoring callsThe `deploy_from_model` method requires the following parameters* `workspace` - the workspace containing the service* `name` - a unique named used to identify the service in the workspace* `models` - an array of models to be deployed into the container* `image_config` - a configuration object describing the image environment* `deployment_config` - a configuration object describing the compute type **Note:** The web service creation can take several minutes. ###Code %%time from azureml.core.webservice import Webservice # Create the webservice using all of the precreated configurations and our best model service = Webservice.deploy_from_model(name='my-aci-svc', deployment_config=aciconfig, models=[model], image_config=image_config, workspace=ws) # Wait for the service deployment to complete while displaying log output service.wait_for_deployment(show_output=True) ###Output _____no_output_____ ###Markdown Test your webservice Now that your web service is runing you can send JSON data directly to the service using the `run` method. This cell pulls the first test sample from the original dataset into JSON and then sends it to the service. ###Code import json # scrape the first row from the test set. 
test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) #score on our service service.run(input_data = test_samples) ###Output _____no_output_____ ###Markdown This cell shows how you can send multiple rows to the webservice at once. It then calculates the residuals - that is, the errors - by subtracting out the actual values from the results. These residuals are used later to show a plotted result. ###Code # score the entire test set. test_samples = json.dumps({'data': X_test.tolist()}) result = service.run(input_data = test_samples) residual = result - y_test ###Output _____no_output_____ ###Markdown This cell shows how you can use the `service.scoring_uri` property to access the HTTP endpoint of the service and call it using standard POST operations. ###Code import requests # use the first row from the test set again test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) # create the required header headers = {'Content-Type':'application/json'} # post the request to the service and display the result resp = requests.post(service.scoring_uri, test_samples, headers = headers) print(resp.text) ###Output _____no_output_____ ###Markdown Residual graphOne way to understand the behavior of your model is to see how the data performs against data with known results. This cell uses matplotlib to create a histogram of the residual values, or errors, created from scoring the test samples.A good model should have residual values that cluster around 0 - that is, no error. Observing the resulting histogram can also show you if the model is skewed in any particular direction. ###Code %matplotlib inline import matplotlib.pyplot as plt f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0}) f.suptitle('Residual Values', fontsize = 18) f.set_figheight(6) f.set_figwidth(14) a0.plot(residual, 'bo', alpha=0.4) a0.plot([0,90], [0,0], 'r', lw=2) a0.set_ylabel('residue values', fontsize=14) a0.set_xlabel('test data set', fontsize=14) a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step') a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10) a1.set_yticklabels([]) plt.show() ###Output _____no_output_____ ###Markdown Clean up Delete the ACI instance to stop the compute and any associated billing. ###Code %%time service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Train and deploy a model_**Create and deploy a model directly from a notebook**_------ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) 1. Viewing run results 1. Simple parameter sweep 1. Viewing experiment results 1. Select the best model1. [Deploy](Deploy) 1. Register the model 1. Create a scoring file 1. Describe your environment 1. Descrice your target compute 1. Deploy your webservice 1. Test your webservice 1. Clean up1. [Next Steps](Next%20Steps)--- IntroductionAzure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. 
In this notebook we will* connect to our AML Workspace* create an experiment that contains multiple runs with tracked metrics* choose the best model created across all runs* deploy that model as a serviceIn the end we will have a model deployed as a web service which we can call from an HTTP endpoint --- SetupMake sure you have completed the [Configuration](..\..\configuration.ipnyb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met. From the configuration, the important sections are the workspace configuration and ACI regristration.We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```For this notebook we need the Azure ML SDK and access to our workspace. The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace. ###Code import azureml.core from azureml.core import Experiment, Run, Workspace # Check core SDK version number print("This notebook was created using version 1.0.2 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") print("") ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep='\n') ###Output _____no_output_____ ###Markdown --- DataWe will use the diabetes dataset for this experiement, a well-known small dataset that comes with scikit-learn. This cell loads the dataset and splits it into random training and testing sets. ###Code from sklearn.datasets import load_diabetes from sklearn.linear_model import Ridge from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.externals import joblib X, y = load_diabetes(return_X_y = True) columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) data = { "train":{"X": X_train, "y": y_train}, "test":{"X": X_test, "y": y_test} } print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples") ###Output _____no_output_____ ###Markdown --- TrainLet's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:* We access an experiment from our AML workspace by name, which will be created if it doesn't exist* We use `start_logging` to create a new run in this experiment* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. 
We will be able to review and compare these measures in the Azure Portal at a later time.* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.* We use `run.take_snapshot()` to capture *this* notebook so we can reproduce this experiment at a later time.* We use `run.complete()` to indicate that the run is over and results can be captured and finalized ###Code # Get an experiment object from Azure Machine Learning experiment = Experiment(workspace=ws, name="train-within-notebook") # Create a run object in the experiment run = experiment.start_logging()# Log the algorithm parameter alpha to the run run.log('alpha', 0.03) # Create, fit, and test the scikit-learn Ridge regression model regression_model = Ridge(alpha=0.03) regression_model.fit(data['train']['X'], data['train']['y']) preds = regression_model.predict(data['test']['X']) # Output the Mean Squared Error to the notebook and to the run print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds)) run.log('mse', mean_squared_error(data['test']['y'], preds)) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') # Take a snapshot of the directory containing this notebook run.take_snapshot('./') # Complete the run run.complete() ###Output _____no_output_____ ###Markdown Viewing run resultsAzure Machine Learning stores all the details about the run in the Azure cloud. Let's access those details by retrieving a link to the run using the default run output. Clicking on the resulting link will take you to an interactive page presenting all run information. ###Code run ###Output _____no_output_____ ###Markdown Simple parameter sweepNow let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy. Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.This example also uses the **tqdm** library to provide a thermometer feedback ###Code import numpy as np import os from tqdm import tqdm model_name = "model.pkl" # list of numbers from 0 to 1.0 with a 0.05 interval alphas = np.arange(0.0, 1.0, 0.05) # try a bunch of alpha values in a Linear Regression (Ridge) model for alpha in tqdm(alphas): # create a bunch of runs, each train a model with a different alpha value with experiment.start_logging() as run: # Use Ridge algorithm to build a regression model regression_model = Ridge(alpha=alpha) regression_model.fit(X=data["train"]["X"], y=data["train"]["y"]) preds = regression_model.predict(X=data["test"]["X"]) mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds) # log alpha, mean_squared_error and feature names in run history run.log(name="alpha", value=alpha) run.log(name="mse", value=mse) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') # Capture this notebook with the run run.take_snapshot('./') ###Output _____no_output_____ ###Markdown Viewing experiment resultsSimilar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. 
This way, we can see how the alpha parameter impacts the quality of the model ###Code # now let's take a look at the experiment in Azure portal. experiment ###Output _____no_output_____ ###Markdown Select the best model Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`. Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case, **mse**. To find the best run, we create a dictionary mapping the run id's to the metrics.Finally, we use the `tag` method to mark the best run to make it easier to find later. ###Code runs = {} run_metrics = {} # Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric for r in tqdm(experiment.get_runs()): metrics = r.get_metrics() if 'mse' in metrics.keys(): runs[r.id] = r run_metrics[r.id] = metrics # Find the run with the best (lowest) mean squared error and display the id and metrics best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse']) best_run = runs[best_run_id] print('Best run is:', best_run_id) print('Metrics:', run_metrics[best_run_id]) # Tag the best run for identification later best_run.tag("Best Run") ###Output _____no_output_____ ###Markdown --- DeployNow that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inferencing. The process of deploying a model involves* registering a model in your workspace* creating a scoring file containing init and run methods* creating an environment dependency file describing packages necessary for your scoring file* creating a docker image containing a properly described environment, your model, and your scoring file* deploying that docker image as a web service Register a modelWe have already identified which run contains the "best model" by our evaluation criteria. Each run has a file structure associated with it that contains various files collected during the run. Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()` ###Code # View the files in the run for f in best_run.get_file_names(): print(f) # Register the model with the workspace model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` lists all models in a workspace, and can be filtered by name, tags, or model properties. 
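If you only care about the most recent registration of a given name, one option is to sort what `Model.list()` returns by its `version` property. The cell below is a small optional sketch along those lines; it assumes at least one model named `best_model` has already been registered, as we did above. ###Code
from azureml.core.model import Model

# Retrieve every registered model named 'best_model' and keep the newest version.
# Model.list() returns Model objects that expose .name and .version properties.
candidates = Model.list(ws, name='best_model')
latest = max(candidates, key=lambda m: m.version)
print('Latest registration:', latest.name, 'version', latest.version)
###Output
 _____no_output_____
###Markdown
The next cell lists every registered version of `best_model` so you can see the full history.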
###Code
# Find all models called "best_model" and display their version numbers
from azureml.core.model import Model
models = Model.list(ws, name='best_model')
for m in models:
    print(m.name, m.version)
###Output
 _____no_output_____
###Markdown
Create a scoring fileSince your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply the model to new data. This script is your 'scoring file'. This scoring file is a python program containing, at a minimum, two methods `init()` and `run()`. The `init()` method is called once when your deployment is started so you can load your model and any other required objects. This method uses the `get_model_path` function to locate the registered model inside the docker container. The `run()` method is called interactively when the web service is called with one or more data samples to predict.The scoring file used for this exercise is [here](score.py). Describe your environmentEach modelling process may require a unique set of packages. Therefore we need to create a dependency file providing instructions to AML on how to construct a docker image that can support the models and any other objects required for inferencing. In the following cell, we create an environment dependency file, *myenv.yml* that specifies which libraries are needed by the scoring script. You can create this file manually, or use the `CondaDependencies` class to create it for you.Next we use this environment file to describe the docker container that we need to create in order to deploy our model. This container is created using our environment description and includes our scoring script. ###Code
from azureml.core.conda_dependencies import CondaDependencies 
from azureml.core.image import ContainerImage

# Create an empty conda environment and add the scikit-learn package
env = CondaDependencies()
env.add_conda_package("scikit-learn")

# Display the environment
print(env.serialize_to_string())

# Write the environment to disk
with open("myenv.yml","w") as f:
    f.write(env.serialize_to_string())

# Create a configuration object indicating how our deployment container needs to be created
image_config = ContainerImage.image_configuration(execution_script="score.py", 
                                                  runtime="python", 
                                                  conda_file="myenv.yml")
###Output
 _____no_output_____
###Markdown
Describe your target computeIn addition to the container, we also need to describe the type of compute we want to allocate for our webservice. In this example we are using an [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/) which is a good choice for quick and cost-effective dev/test deployment scenarios. ACI instances require the number of cores you want to run and memory you need. Tags and descriptions are available for you to identify the instances in AML when viewing the Compute tab in the AML Portal.For production workloads, it is better to use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Try [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. ###Code
from azureml.core.webservice import AciWebservice

aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, 
                                               memory_gb=1, 
                                               tags={'sample name': 'AML 101'}, 
                                               description='This is a great example.')
###Output
 _____no_output_____
###Markdown
Deploy your webserviceThe final step to deploying your webservice is to call `Webservice.deploy_from_model()`.
This function uses the deployment and image configurations created above to perform the following:* Build a docker image* Deploy the docker image to an Azure Container Instance* Copy your model files to the Azure Container Instance* Call the `init()` function in your scoring file* Provide an HTTP endpoint for scoring callsThe `deploy_from_model` method requires the following parameters* `workspace` - the workspace containing the service* `name` - a unique name used to identify the service in the workspace* `models` - an array of models to be deployed into the container* `image_config` - a configuration object describing the image environment* `deployment_config` - a configuration object describing the compute type **Note:** The web service creation can take several minutes. ###Code
%%time
from azureml.core.webservice import Webservice

# Create the webservice using all of the precreated configurations and our best model
service = Webservice.deploy_from_model(name='my-aci-svc',
                                       deployment_config=aciconfig,
                                       models=[model],
                                       image_config=image_config,
                                       workspace=ws)

# Wait for the service deployment to complete while displaying log output
service.wait_for_deployment(show_output=True)
###Output
 _____no_output_____
###Markdown
Test your webservice Now that your web service is running you can send JSON data directly to the service using the `run` method. This cell pulls the first test sample from the original dataset into JSON and then sends it to the service. ###Code
import json

# scrape the first row from the test set.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})

#score on our service
service.run(input_data = test_samples)
###Output
 _____no_output_____
###Markdown
This cell shows how you can send multiple rows to the webservice at once. It then calculates the residuals - that is, the errors - by subtracting out the actual values from the results. These residuals are used later to show a plotted result. ###Code
# score the entire test set.
test_samples = json.dumps({'data': X_test.tolist()})

result = service.run(input_data = test_samples)
residual = result - y_test
###Output
 _____no_output_____
###Markdown
This cell shows how you can use the `service.scoring_uri` property to access the HTTP endpoint of the service and call it using standard POST operations. ###Code
import requests
import json

# use the first row from the test set again
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})

# create the required header
headers = {'Content-Type':'application/json'}

# post the request to the service and display the result
resp = requests.post(service.scoring_uri, test_samples, headers = headers)
print(resp.text)
###Output
 _____no_output_____
###Markdown
Residual graphOne way to understand the behavior of your model is to see how the data performs against data with known results. This cell uses matplotlib to create a histogram of the residual values, or errors, created from scoring the test samples.A good model should have residual values that cluster around 0 - that is, no error. Observing the resulting histogram can also show you if the model is skewed in any particular direction.
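Before plotting, it can also help to reduce the residuals to a few summary numbers. The following cell is a minimal optional sketch using numpy; it only assumes that `residual` from the cells above is array-like. ###Code
import numpy as np

# Quick numeric summary of the residuals before we plot them
res = np.asarray(residual, dtype=float)
print('Mean residual:    {:.2f}'.format(res.mean()))
print('Std of residuals: {:.2f}'.format(res.std()))
print('RMSE on test set: {:.2f}'.format(np.sqrt((res ** 2).mean())))
###Output
 _____no_output_____
###Markdown
With those numbers in mind, the next cell plots the same residuals as a scatter plot and a histogram.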
###Code %matplotlib inline import matplotlib import matplotlib.pyplot as plt f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0}) f.suptitle('Residual Values', fontsize = 18) f.set_figheight(6) f.set_figwidth(14) a0.plot(residual, 'bo', alpha=0.4); a0.plot([0,90], [0,0], 'r', lw=2) a0.set_ylabel('residue values', fontsize=14) a0.set_xlabel('test data set', fontsize=14) a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step'); a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10); a1.set_yticklabels([]) plt.show() ###Output _____no_output_____ ###Markdown Clean up Delete the ACI instance to stop the compute and any associated billing. ###Code %%time service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Train and deploy a model_**Create and deploy a model directly from a notebook**_------ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) 1. Viewing run results 1. Simple parameter sweep 1. Viewing experiment results 1. Select the best model1. [Deploy](Deploy) 1. Register the model 1. Create a scoring file 1. Describe your environment 1. Descrice your target compute 1. Deploy your webservice 1. Test your webservice 1. Clean up1. [Next Steps](Next%20Steps)--- IntroductionAzure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. In this notebook we will* connect to our AML Workspace* create an experiment that contains multiple runs with tracked metrics* choose the best model created across all runs* deploy that model as a serviceIn the end we will have a model deployed as a web service which we can call from an HTTP endpoint --- SetupMake sure you have completed the [Configuration](../../../configuration.ipynb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met. From the configuration, the important sections are the workspace configuration and ACI regristration.We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```For this notebook we need the Azure ML SDK and access to our workspace. The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace. ###Code import azureml.core from azureml.core import Experiment, Run, Workspace # Check core SDK version number print("This notebook was created using version 1.0.2 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") print("") ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep='\n') ###Output _____no_output_____ ###Markdown --- DataWe will use the diabetes dataset for this experiement, a well-known small dataset that comes with scikit-learn. This cell loads the dataset and splits it into random training and testing sets. 
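If you want to sanity-check the dataset before splitting it, the short sketch below prints its dimensions; the diabetes dataset ships with scikit-learn, so no download is required. ###Code
from sklearn.datasets import load_diabetes

# Peek at the raw dataset dimensions before creating the train/test split
X_raw, y_raw = load_diabetes(return_X_y=True)
print('Feature matrix shape:', X_raw.shape)   # 442 samples x 10 features
print('Target vector shape: ', y_raw.shape)
###Output
 _____no_output_____
###Markdown
The next cell performs the actual split used throughout the rest of the notebook.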
###Code from sklearn.datasets import load_diabetes from sklearn.linear_model import Ridge from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.externals import joblib X, y = load_diabetes(return_X_y = True) columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) data = { "train":{"X": X_train, "y": y_train}, "test":{"X": X_test, "y": y_test} } print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples") ###Output _____no_output_____ ###Markdown --- TrainLet's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:* We access an experiment from our AML workspace by name, which will be created if it doesn't exist* We use `start_logging` to create a new run in this experiment* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. We will be able to review and compare these measures in the Azure Portal at a later time.* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.* We use `run.take_snapshot()` to capture *this* notebook so we can reproduce this experiment at a later time.* We use `run.complete()` to indicate that the run is over and results can be captured and finalized ###Code # Get an experiment object from Azure Machine Learning experiment = Experiment(workspace=ws, name="train-within-notebook") # Create a run object in the experiment run = experiment.start_logging()# Log the algorithm parameter alpha to the run run.log('alpha', 0.03) # Create, fit, and test the scikit-learn Ridge regression model regression_model = Ridge(alpha=0.03) regression_model.fit(data['train']['X'], data['train']['y']) preds = regression_model.predict(data['test']['X']) # Output the Mean Squared Error to the notebook and to the run print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds)) run.log('mse', mean_squared_error(data['test']['y'], preds)) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') # Take a snapshot of the directory containing this notebook run.take_snapshot('./') # Complete the run run.complete() ###Output _____no_output_____ ###Markdown Viewing run resultsAzure Machine Learning stores all the details about the run in the Azure cloud. Let's access those details by retrieving a link to the run using the default run output. Clicking on the resulting link will take you to an interactive page presenting all run information. ###Code run ###Output _____no_output_____ ###Markdown Simple parameter sweepNow let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy. 
Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.This example also uses the **tqdm** library to provide a thermometer feedback ###Code import numpy as np import os from tqdm import tqdm model_name = "model.pkl" # list of numbers from 0 to 1.0 with a 0.05 interval alphas = np.arange(0.0, 1.0, 0.05) # try a bunch of alpha values in a Linear Regression (Ridge) model for alpha in tqdm(alphas): # create a bunch of runs, each train a model with a different alpha value with experiment.start_logging() as run: # Use Ridge algorithm to build a regression model regression_model = Ridge(alpha=alpha) regression_model.fit(X=data["train"]["X"], y=data["train"]["y"]) preds = regression_model.predict(X=data["test"]["X"]) mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds) # log alpha, mean_squared_error and feature names in run history run.log(name="alpha", value=alpha) run.log(name="mse", value=mse) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') # Capture this notebook with the run run.take_snapshot('./') ###Output _____no_output_____ ###Markdown Viewing experiment resultsSimilar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. This way, we can see how the alpha parameter impacts the quality of the model ###Code # now let's take a look at the experiment in Azure portal. experiment ###Output _____no_output_____ ###Markdown Select the best model Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`. Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case, **mse**. To find the best run, we create a dictionary mapping the run id's to the metrics.Finally, we use the `tag` method to mark the best run to make it easier to find later. ###Code runs = {} run_metrics = {} # Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric for r in tqdm(experiment.get_runs()): metrics = r.get_metrics() if 'mse' in metrics.keys(): runs[r.id] = r run_metrics[r.id] = metrics # Find the run with the best (lowest) mean squared error and display the id and metrics best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse']) best_run = runs[best_run_id] print('Best run is:', best_run_id) print('Metrics:', run_metrics[best_run_id]) # Tag the best run for identification later best_run.tag("Best Run") ###Output _____no_output_____ ###Markdown --- DeployNow that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inferencing. The process of deploying a model involves* registering a model in your workspace* creating a scoring file containing init and run methods* creating an environment dependency file describing packages necessary for your scoring file* creating a docker image containing a properly described environment, your model, and your scoring file* deploying that docker image as a web service Register a modelWe have already identified which run contains the "best model" by our evaluation criteria. 
Each run has a file structure associated with it that contains various files collected during the run. Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()` ###Code # View the files in the run for f in best_run.get_file_names(): print(f) # Register the model with the workspace model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` lists all models in a workspace, and can be filtered by name, tags, or model properties. ###Code # Find all models called "best_model" and display their version numbers from azureml.core.model import Model models = Model.list(ws, name='best_model') for m in models: print(m.name, m.version) ###Output _____no_output_____ ###Markdown Create a scoring fileSince your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply the model to new data. This script is your 'scoring file'. This scoring file is a python program containing, at a minimum, two methods `init()` and `run()`. The `init()` method is called once when your deployment is started so you can load your model and any other required objects. This method uses the `get_model_path` function to locate the registered model inside the docker container. The `run()` method is called interactively when the web service is called with one or more data samples to predict.The scoring file used for this exercise is [here](score.py). Describe your environmentEach modelling process may require a unique set of packages. Therefore we need to create a dependency file providing instructions to AML on how to contstruct a docker image that can support the models and any other objects required for inferencing. In the following cell, we create a environment dependency file, *myenv.yml* that specifies which libraries are needed by the scoring script. You can create this file manually, or use the `CondaDependencies` class to create it for you.Next we use this environment file to describe the docker container that we need to create in order to deploy our model. This container is created using our environment description and includes our scoring script. 
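`CondaDependencies` can also carry pip packages and pinned versions if your scoring environment needs them. The cell below is a small illustrative sketch, not part of the deployment itself; the version pin shown is hypothetical and should match whatever you trained with. ###Code
from azureml.core.conda_dependencies import CondaDependencies

# Sketch: mix conda and pip packages in one dependency object.
# The pinned version below is only an example - use the version you trained with.
sketch_env = CondaDependencies()
sketch_env.add_conda_package("scikit-learn=0.20.3")
sketch_env.add_pip_package("azureml-defaults")  # commonly included for scoring containers
print(sketch_env.serialize_to_string())
###Output
 _____no_output_____
###Markdown
The next cell builds the environment file that is actually used for this deployment and wraps it, together with the scoring script, into a container image configuration.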
###Code from azureml.core.conda_dependencies import CondaDependencies from azureml.core.image import ContainerImage # Create an empty conda environment and add the scikit-learn package env = CondaDependencies() env.add_conda_package("scikit-learn") # Display the environment print(env.serialize_to_string()) # Write the environment to disk with open("myenv.yml","w") as f: f.write(env.serialize_to_string()) # Create a configuration object indicating how our deployment container needs to be created image_config = ContainerImage.image_configuration(execution_script="score.py", runtime="python", conda_file="myenv.yml") ###Output _____no_output_____ ###Markdown Describe your target computeIn addition to the container, we also need to describe the type of compute we want to allocate for our webservice. In in this example we are using an [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/) which is a good choice for quick and cost-effective dev/test deployment scenarios. ACI instances require the number of cores you want to run and memory you need. Tags and descriptions are available for you to identify the instances in AML when viewing the Compute tab in the AML Portal.For production workloads, it is better to use [Azure Kubernentes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Try [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. ###Code from azureml.core.webservice import AciWebservice aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, tags={'sample name': 'AML 101'}, description='This is a great example.') ###Output _____no_output_____ ###Markdown Deploy your webserviceThe final step to deploying your webservice is to call `WebService.deploy_from_model()`. This function uses the deployment and image configurations created above to perform the following:* Build a docker image* Deploy to the docker image to an Azure Container Instance* Copy your model files to the Azure Container Instance* Call the `init()` function in your scoring file* Provide an HTTP endpoint for scoring callsThe `deploy_from_model` method requires the following parameters* `workspace` - the workspace containing the service* `name` - a unique named used to identify the service in the workspace* `models` - an array of models to be deployed into the container* `image_config` - a configuration object describing the image environment* `deployment_config` - a configuration object describing the compute type **Note:** The web service creation can take several minutes. ###Code %%time from azureml.core.webservice import Webservice # Create the webservice using all of the precreated configurations and our best model service = Webservice.deploy_from_model(name='my-aci-svc', deployment_config=aciconfig, models=[model], image_config=image_config, workspace=ws) # Wait for the service deployment to complete while displaying log output service.wait_for_deployment(show_output=True) ###Output _____no_output_____ ###Markdown Test your webservice Now that your web service is runing you can send JSON data directly to the service using the `run` method. This cell pulls the first test sample from the original dataset into JSON and then sends it to the service. ###Code import json # scrape the first row from the test set. 
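# (Optional sketch) peek at the raw feature values first, so you can see exactly
# what will go into the JSON payload built on the next line.
print(X_test[0:1, :])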
test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) #score on our service service.run(input_data = test_samples) ###Output _____no_output_____ ###Markdown This cell shows how you can send multiple rows to the webservice at once. It then calculates the residuals - that is, the errors - by subtracting out the actual values from the results. These residuals are used later to show a plotted result. ###Code # score the entire test set. test_samples = json.dumps({'data': X_test.tolist()}) result = service.run(input_data = test_samples) residual = result - y_test ###Output _____no_output_____ ###Markdown This cell shows how you can use the `service.scoring_uri` property to access the HTTP endpoint of the service and call it using standard POST operations. ###Code import requests import json # use the first row from the test set again test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) # create the required header headers = {'Content-Type':'application/json'} # post the request to the service and display the result resp = requests.post(service.scoring_uri, test_samples, headers = headers) print(resp.text) ###Output _____no_output_____ ###Markdown Residual graphOne way to understand the behavior of your model is to see how the data performs against data with known results. This cell uses matplotlib to create a histogram of the residual values, or errors, created from scoring the test samples.A good model should have residual values that cluster around 0 - that is, no error. Observing the resulting histogram can also show you if the model is skewed in any particular direction. ###Code %matplotlib inline import matplotlib import matplotlib.pyplot as plt f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0}) f.suptitle('Residual Values', fontsize = 18) f.set_figheight(6) f.set_figwidth(14) a0.plot(residual, 'bo', alpha=0.4); a0.plot([0,90], [0,0], 'r', lw=2) a0.set_ylabel('residue values', fontsize=14) a0.set_xlabel('test data set', fontsize=14) a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step'); a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10); a1.set_yticklabels([]) plt.show() ###Output _____no_output_____ ###Markdown Clean up Delete the ACI instance to stop the compute and any associated billing. ###Code %%time service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/training/train-within-notebook/train-within-notebook.png) Train and deploy a model_**Create and deploy a model directly from a notebook**_------ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) 1. Viewing run results 1. Simple parameter sweep 1. Viewing experiment results 1. Select the best model1. [Deploy](Deploy) 1. Register the model 1. Create a scoring file 1. Describe your environment 1. Descrice your target compute 1. Deploy your webservice 1. Test your webservice 1. Clean up1. [Next Steps](Next%20Steps)--- IntroductionAzure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. 
In this notebook we will* connect to our AML Workspace* create an experiment that contains multiple runs with tracked metrics* choose the best model created across all runs* deploy that model as a serviceIn the end we will have a model deployed as a web service which we can call from an HTTP endpoint --- SetupIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you have completed the [Configuration](../../../configuration.ipnyb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met. From the configuration, the important sections are the workspace configuration and ACI regristration.We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```For this notebook we need the Azure ML SDK and access to our workspace. The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace. ###Code import azureml.core from azureml.core import Experiment, Workspace # Check core SDK version number print("This notebook was created using version 1.0.2 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") print("") ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep='\n') ###Output _____no_output_____ ###Markdown --- DataWe will use the diabetes dataset for this experiement, a well-known small dataset that comes with scikit-learn. This cell loads the dataset and splits it into random training and testing sets. ###Code from sklearn.datasets import load_diabetes from sklearn.linear_model import Ridge from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.externals import joblib X, y = load_diabetes(return_X_y = True) columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) data = { "train":{"X": X_train, "y": y_train}, "test":{"X": X_test, "y": y_test} } print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples") ###Output _____no_output_____ ###Markdown --- TrainLet's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:* We access an experiment from our AML workspace by name, which will be created if it doesn't exist* We use `start_logging` to create a new run in this experiment* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. 
We will be able to review and compare these measures in the Azure Portal at a later time.* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.* We use `run.complete()` to indicate that the run is over and results can be captured and finalized ###Code # Get an experiment object from Azure Machine Learning experiment = Experiment(workspace=ws, name="train-within-notebook") # Create a run object in the experiment run = experiment.start_logging() # Log the algorithm parameter alpha to the run run.log('alpha', 0.03) # Create, fit, and test the scikit-learn Ridge regression model regression_model = Ridge(alpha=0.03) regression_model.fit(data['train']['X'], data['train']['y']) preds = regression_model.predict(data['test']['X']) # Output the Mean Squared Error to the notebook and to the run print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds)) run.log('mse', mean_squared_error(data['test']['y'], preds)) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') # Complete the run run.complete() ###Output _____no_output_____ ###Markdown Viewing run resultsAzure Machine Learning stores all the details about the run in the Azure cloud. Let's access those details by retrieving a link to the run using the default run output. Clicking on the resulting link will take you to an interactive page presenting all run information. ###Code run ###Output _____no_output_____ ###Markdown Simple parameter sweepNow let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy. Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.This example also uses the **tqdm** library to provide a thermometer feedback ###Code import numpy as np from tqdm import tqdm model_name = "model.pkl" # list of numbers from 0 to 1.0 with a 0.05 interval alphas = np.arange(0.0, 1.0, 0.05) # try a bunch of alpha values in a Linear Regression (Ridge) model for alpha in tqdm(alphas): # create a bunch of runs, each train a model with a different alpha value with experiment.start_logging() as run: # Use Ridge algorithm to build a regression model regression_model = Ridge(alpha=alpha) regression_model.fit(X=data["train"]["X"], y=data["train"]["y"]) preds = regression_model.predict(X=data["test"]["X"]) mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds) # log alpha, mean_squared_error and feature names in run history run.log(name="alpha", value=alpha) run.log(name="mse", value=mse) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Viewing experiment resultsSimilar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. This way, we can see how the alpha parameter impacts the quality of the model ###Code # now let's take a look at the experiment in Azure portal. experiment ###Output _____no_output_____ ###Markdown Select the best model Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. 
For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`. Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case, **mse**. To find the best run, we create a dictionary mapping the run id's to the metrics.Finally, we use the `tag` method to mark the best run to make it easier to find later. ###Code runs = {} run_metrics = {} # Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric for r in tqdm(experiment.get_runs()): metrics = r.get_metrics() if 'mse' in metrics.keys(): runs[r.id] = r run_metrics[r.id] = metrics # Find the run with the best (lowest) mean squared error and display the id and metrics best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse']) best_run = runs[best_run_id] print('Best run is:', best_run_id) print('Metrics:', run_metrics[best_run_id]) # Tag the best run for identification later best_run.tag("Best Run") ###Output _____no_output_____ ###Markdown --- DeployNow that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inferencing. The process of deploying a model involves* registering a model in your workspace* creating a scoring file containing init and run methods* creating an environment dependency file describing packages necessary for your scoring file* creating a docker image containing a properly described environment, your model, and your scoring file* deploying that docker image as a web service Register a modelWe have already identified which run contains the "best model" by our evaluation criteria. Each run has a file structure associated with it that contains various files collected during the run. Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()` ###Code # View the files in the run for f in best_run.get_file_names(): print(f) # Register the model with the workspace model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` lists all models in a workspace, and can be filtered by name, tags, or model properties. ###Code # Find all models called "best_model" and display their version numbers from azureml.core.model import Model models = Model.list(ws, name='best_model') for m in models: print(m.name, m.version) ###Output _____no_output_____ ###Markdown Create a scoring fileSince your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply the model to new data. This script is your 'scoring file'. This scoring file is a python program containing, at a minimum, two methods `init()` and `run()`. 
The `init()` method is called once when your deployment is started so you can load your model and any other required objects. This method uses the `get_model_path` function to locate the registered model inside the docker container. The `run()` method is called interactively when the web service is called with one or more data samples to predict.The scoring file used for this exercise is [here](score.py). Describe your environmentEach modelling process may require a unique set of packages. Therefore we need to create a dependency file providing instructions to AML on how to contstruct a docker image that can support the models and any other objects required for inferencing. In the following cell, we create a environment dependency file, *myenv.yml* that specifies which libraries are needed by the scoring script. You can create this file manually, or use the `CondaDependencies` class to create it for you.Next we use this environment file to describe the docker container that we need to create in order to deploy our model. This container is created using our environment description and includes our scoring script. ###Code from azureml.core.conda_dependencies import CondaDependencies from azureml.core.image import ContainerImage # Create an empty conda environment and add the scikit-learn package env = CondaDependencies() env.add_conda_package("scikit-learn") # Display the environment print(env.serialize_to_string()) # Write the environment to disk with open("myenv.yml","w") as f: f.write(env.serialize_to_string()) # Create a configuration object indicating how our deployment container needs to be created image_config = ContainerImage.image_configuration(execution_script="score.py", runtime="python", conda_file="myenv.yml") ###Output _____no_output_____ ###Markdown Describe your target computeIn addition to the container, we also need to describe the type of compute we want to allocate for our webservice. In in this example we are using an [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/) which is a good choice for quick and cost-effective dev/test deployment scenarios. ACI instances require the number of cores you want to run and memory you need. Tags and descriptions are available for you to identify the instances in AML when viewing the Compute tab in the AML Portal.For production workloads, it is better to use [Azure Kubernentes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Try [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. ###Code from azureml.core.webservice import AciWebservice aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, tags={'sample name': 'AML 101'}, description='This is a great example.') ###Output _____no_output_____ ###Markdown Deploy your webserviceThe final step to deploying your webservice is to call `WebService.deploy_from_model()`. 
This function uses the deployment and image configurations created above to perform the following:* Build a docker image* Deploy to the docker image to an Azure Container Instance* Copy your model files to the Azure Container Instance* Call the `init()` function in your scoring file* Provide an HTTP endpoint for scoring callsThe `deploy_from_model` method requires the following parameters* `workspace` - the workspace containing the service* `name` - a unique named used to identify the service in the workspace* `models` - an array of models to be deployed into the container* `image_config` - a configuration object describing the image environment* `deployment_config` - a configuration object describing the compute type **Note:** The web service creation can take several minutes. ###Code %%time from azureml.core.webservice import Webservice # Create the webservice using all of the precreated configurations and our best model service = Webservice.deploy_from_model(name='my-aci-svc', deployment_config=aciconfig, models=[model], image_config=image_config, workspace=ws) # Wait for the service deployment to complete while displaying log output service.wait_for_deployment(show_output=True) ###Output _____no_output_____ ###Markdown Test your webservice Now that your web service is runing you can send JSON data directly to the service using the `run` method. This cell pulls the first test sample from the original dataset into JSON and then sends it to the service. ###Code import json # scrape the first row from the test set. test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) #score on our service service.run(input_data = test_samples) ###Output _____no_output_____ ###Markdown This cell shows how you can send multiple rows to the webservice at once. It then calculates the residuals - that is, the errors - by subtracting out the actual values from the results. These residuals are used later to show a plotted result. ###Code # score the entire test set. test_samples = json.dumps({'data': X_test.tolist()}) result = service.run(input_data = test_samples) residual = result - y_test ###Output _____no_output_____ ###Markdown This cell shows how you can use the `service.scoring_uri` property to access the HTTP endpoint of the service and call it using standard POST operations. ###Code import requests # use the first row from the test set again test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) # create the required header headers = {'Content-Type':'application/json'} # post the request to the service and display the result resp = requests.post(service.scoring_uri, test_samples, headers = headers) print(resp.text) ###Output _____no_output_____ ###Markdown Residual graphOne way to understand the behavior of your model is to see how the data performs against data with known results. This cell uses matplotlib to create a histogram of the residual values, or errors, created from scoring the test samples.A good model should have residual values that cluster around 0 - that is, no error. Observing the resulting histogram can also show you if the model is skewed in any particular direction. 
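You can also score the service's predictions with the same metric used during training, which gives a single number to compare against the runs logged earlier. The cell below is a short optional sketch; it assumes `result` and `y_test` from the cells above are still in memory. ###Code
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

# Evaluate the deployed service's predictions against the held-out labels
service_preds = np.asarray(result, dtype=float)
print('Service MSE:', mean_squared_error(y_test, service_preds))
print('Service R^2:', r2_score(y_test, service_preds))
###Output
 _____no_output_____
###Markdown
The next cell visualizes the residuals as a scatter plot and a histogram.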
###Code %matplotlib inline import matplotlib.pyplot as plt f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0}) f.suptitle('Residual Values', fontsize = 18) f.set_figheight(6) f.set_figwidth(14) a0.plot(residual, 'bo', alpha=0.4) a0.plot([0,90], [0,0], 'r', lw=2) a0.set_ylabel('residue values', fontsize=14) a0.set_xlabel('test data set', fontsize=14) a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step') a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10) a1.set_yticklabels([]) plt.show() ###Output _____no_output_____ ###Markdown Clean up Delete the ACI instance to stop the compute and any associated billing. ###Code %%time service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/training/train-within-notebook/train-within-notebook.png) Train and deploy a model_**Create and deploy a model directly from a notebook**_------ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) 1. Viewing run results 1. Simple parameter sweep 1. Viewing experiment results 1. Select the best model1. [Deploy](Deploy) 1. Register the model 1. Create a scoring file 1. Describe your environment 1. Descrice your target compute 1. Deploy your webservice 1. Test your webservice 1. Clean up1. [Next Steps](nextsteps)--- IntroductionAzure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. In this notebook we will* connect to our AML Workspace* create an experiment that contains multiple runs with tracked metrics* choose the best model created across all runs* deploy that model as a serviceIn the end we will have a model deployed as a web service which we can call from an HTTP endpoint --- SetupIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't already to establish your connection to the AzureML Workspace. From the configuration, the important sections are the workspace configuration and ACI regristration.We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```For this notebook we need the Azure ML SDK and access to our workspace. The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace. ###Code import azureml.core from azureml.core import Experiment, Workspace # Check core SDK version number print("This notebook was created using version 1.0.2 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") print("") ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep='\n') ###Output _____no_output_____ ###Markdown --- DataWe will use the diabetes dataset for this experiement, a well-known small dataset that comes with scikit-learn. This cell loads the dataset and splits it into random training and testing sets. 
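As a quick reference, scikit-learn also ships official column names for this dataset; the short sketch below prints them, assuming your scikit-learn version exposes `feature_names` on the returned bunch. They line up with the `columns` list defined in the next cell. ###Code
from sklearn.datasets import load_diabetes

# Print the feature names bundled with the dataset (if available in this sklearn version)
bunch = load_diabetes()
print(getattr(bunch, 'feature_names', 'feature_names not available in this scikit-learn version'))
###Output
 _____no_output_____
###Markdown
The next cell loads the data and performs the train/test split used for the rest of the notebook.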
###Code from sklearn.datasets import load_diabetes from sklearn.linear_model import Ridge from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.externals import joblib X, y = load_diabetes(return_X_y = True) columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) data = { "train":{"X": X_train, "y": y_train}, "test":{"X": X_test, "y": y_test} } print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples") ###Output _____no_output_____ ###Markdown --- TrainLet's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:* We access an experiment from our AML workspace by name, which will be created if it doesn't exist* We use `start_logging` to create a new run in this experiment* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. We will be able to review and compare these measures in the Azure Portal at a later time.* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.* We use `run.complete()` to indicate that the run is over and results can be captured and finalized ###Code # Get an experiment object from Azure Machine Learning experiment = Experiment(workspace=ws, name="train-within-notebook") # Create a run object in the experiment run = experiment.start_logging() # Log the algorithm parameter alpha to the run run.log('alpha', 0.03) # Create, fit, and test the scikit-learn Ridge regression model regression_model = Ridge(alpha=0.03) regression_model.fit(data['train']['X'], data['train']['y']) preds = regression_model.predict(data['test']['X']) # Output the Mean Squared Error to the notebook and to the run print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds)) run.log('mse', mean_squared_error(data['test']['y'], preds)) # Save the model to the outputs directory for capture model_file_name = 'outputs/model.pkl' joblib.dump(value = regression_model, filename = model_file_name) # upload the model file explicitly into artifacts run.upload_file(name = model_file_name, path_or_stream = model_file_name) # Complete the run run.complete() ###Output _____no_output_____ ###Markdown Viewing run resultsAzure Machine Learning stores all the details about the run in the Azure cloud. Let's access those details by retrieving a link to the run using the default run output. Clicking on the resulting link will take you to an interactive page presenting all run information. ###Code run ###Output _____no_output_____ ###Markdown Simple parameter sweepNow let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy. 
Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.This example also uses the **tqdm** library to provide a thermometer feedback ###Code import numpy as np from tqdm import tqdm # list of numbers from 0 to 1.0 with a 0.05 interval alphas = np.arange(0.0, 1.0, 0.05) # try a bunch of alpha values in a Linear Regression (Ridge) model for alpha in tqdm(alphas): # create a bunch of runs, each train a model with a different alpha value with experiment.start_logging() as run: # Use Ridge algorithm to build a regression model regression_model = Ridge(alpha=alpha) regression_model.fit(X=data["train"]["X"], y=data["train"]["y"]) preds = regression_model.predict(X=data["test"]["X"]) mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds) # log alpha, mean_squared_error and feature names in run history run.log(name="alpha", value=alpha) run.log(name="mse", value=mse) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Viewing experiment resultsSimilar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. This way, we can see how the alpha parameter impacts the quality of the model ###Code # now let's take a look at the experiment in Azure portal. experiment ###Output _____no_output_____ ###Markdown Select the best model Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`. Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case, **mse**. To find the best run, we create a dictionary mapping the run id's to the metrics.Finally, we use the `tag` method to mark the best run to make it easier to find later. ###Code runs = {} run_metrics = {} # Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric for r in tqdm(experiment.get_runs()): metrics = r.get_metrics() if 'mse' in metrics.keys(): runs[r.id] = r run_metrics[r.id] = metrics # Find the run with the best (lowest) mean squared error and display the id and metrics best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse']) best_run = runs[best_run_id] print('Best run is:', best_run_id) print('Metrics:', run_metrics[best_run_id]) # Tag the best run for identification later best_run.tag("Best Run") ###Output _____no_output_____ ###Markdown --- DeployNow that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inference. The process of deploying a model involves* registering a model in your workspace* creating a scoring file containing init and run methods* creating an environment dependency file describing packages necessary for your scoring file* deploying the model and packages as a web service Register a modelWe have already identified which run contains the "best model" by our evaluation criteria. Each run has a file structure associated with it that contains various files collected during the run. 
Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()` ###Code # View the files in the run for f in best_run.get_file_names(): print(f) # Register the model with the workspace model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` lists all models in a workspace, and can be filtered by name, tags, or model properties. ###Code # Find all models called "best_model" and display their version numbers from azureml.core.model import Model models = Model.list(ws, name='best_model') for m in models: print(m.name, m.version) ###Output _____no_output_____ ###Markdown Create a scoring fileSince your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply the model to new data. This script is your 'scoring file'. This scoring file is a python program containing, at a minimum, two methods `init()` and `run()`. The `init()` method is called once when your deployment is started so you can load your model and any other required objects. This method uses the `get_model_path` function to locate the registered model inside the docker container. The `run()` method is called interactively when the web service is called with one or more data samples to predict.The scoring file used for this exercise is [here](score.py). Describe your environmentEach modelling process may require a unique set of packages. Therefore we need to create an environment object describing the dependencies. Next we create an inference configuration using this environment object and the scoring script that we created previously. ###Code from azureml.core.conda_dependencies import CondaDependencies from azureml.core.environment import Environment from azureml.core.model import InferenceConfig env = Environment('deploytocloudenv') env.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'],pip_packages=['azureml-defaults']) inference_config = InferenceConfig(entry_script="score.py", environment=env) ###Output _____no_output_____ ###Markdown Describe your target computeIn addition to the inference configuration, we also need to describe the type of compute we want to allocate for our webservice. In in this example we are using an [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/) which is a good choice for quick and cost-effective dev/test deployment scenarios. ACI instances require the number of cores you want to run and memory you need. 
Tags and descriptions are available for you to identify the instances in AML when viewing the Compute tab in the AML Portal. For production workloads, it is better to use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Try [this notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb) to see how that can be done from Azure ML.

###Code

from azureml.core.webservice import AciWebservice

aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
                                               memory_gb=1,
                                               tags={'sample name': 'AML 101'},
                                               description='This is a great example.')

###Output

 _____no_output_____

###Markdown

Deploy your webservice

The final step to deploying your webservice is to call `Model.deploy()`. This function uses the deployment and inference configurations created above to perform the following:
* Build a docker image
* Deploy the docker image to an Azure Container Instance
* Copy your model files to the Azure Container Instance
* Call the `init()` function in your scoring file
* Provide an HTTP endpoint for scoring calls

The `Model.deploy` method requires the following parameters
* `workspace` - the workspace containing the service
* `name` - a unique name used to identify the service in the workspace
* `models` - an array of models to be deployed into the container
* `inference_config` - a configuration object describing the image environment
* `deployment_config` - a configuration object describing the compute type

**Note:** The web service creation can take several minutes.

###Code

%%time
from azureml.core.model import Model
from azureml.core.webservice import Webservice

# Create the webservice using all of the precreated configurations and our best model
service = Model.deploy(workspace=ws,
                       name='my-aci-svc',
                       models=[model],
                       inference_config=inference_config,
                       deployment_config=aciconfig)

# Wait for the service deployment to complete while displaying log output
service.wait_for_deployment(show_output=True)

###Output

 _____no_output_____

###Markdown

Test your webservice

Now that your web service is running you can send JSON data directly to the service using the `run` method. This cell pulls the first test sample from the original dataset into JSON and then sends it to the service.

###Code

import json

service = ws.webservices['my-aci-svc']

# scrape the first row from the test set.
test_samples = json.dumps({"data": X_test[0:1, :].tolist()})

#score on our service
service.run(input_data = test_samples)

###Output

 _____no_output_____

###Markdown

This cell shows how you can send multiple rows to the webservice at once. It then calculates the residuals - that is, the errors - by subtracting out the actual values from the results. These residuals are used later to show a plotted result.

###Code

# score the entire test set.
test_samples = json.dumps({'data': X_test.tolist()})

result = service.run(input_data = test_samples)
residual = result - y_test

###Output

 _____no_output_____

###Markdown

This cell shows how you can use the `service.scoring_uri` property to access the HTTP endpoint of the service and call it using standard POST operations.
###Code import requests # use the first row from the test set again test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) # create the required header headers = {'Content-Type':'application/json'} # post the request to the service and display the result resp = requests.post(service.scoring_uri, test_samples, headers = headers) print(resp.text) ###Output _____no_output_____ ###Markdown Residual graphOne way to understand the behavior of your model is to see how the data performs against data with known results. This cell uses matplotlib to create a histogram of the residual values, or errors, created from scoring the test samples.A good model should have residual values that cluster around 0 - that is, no error. Observing the resulting histogram can also show you if the model is skewed in any particular direction. ###Code %matplotlib inline import matplotlib.pyplot as plt f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0}) f.suptitle('Residual Values', fontsize = 18) f.set_figheight(6) f.set_figwidth(14) a0.plot(residual, 'bo', alpha=0.4) a0.plot([0,90], [0,0], 'r', lw=2) a0.set_ylabel('residue values', fontsize=14) a0.set_xlabel('test data set', fontsize=14) a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step') a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10) a1.set_yticklabels([]) plt.show() ###Output _____no_output_____ ###Markdown Clean up Delete the ACI instance to stop the compute and any associated billing. ###Code %%time service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/training/train-within-notebook/train-within-notebook.png) Train and deploy a model_**Create and deploy a model directly from a notebook**_------ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) 1. Viewing run results 1. Simple parameter sweep 1. Viewing experiment results 1. Select the best model1. [Deploy](Deploy) 1. Register the model 1. Create a scoring file 1. Describe your environment 1. Descrice your target compute 1. Deploy your webservice 1. Test your webservice 1. Clean up1. [Next Steps](nextsteps)--- IntroductionAzure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. In this notebook we will* connect to our AML Workspace* create an experiment that contains multiple runs with tracked metrics* choose the best model created across all runs* deploy that model as a serviceIn the end we will have a model deployed as a web service which we can call from an HTTP endpoint --- SetupIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't already to establish your connection to the AzureML Workspace. From the configuration, the important sections are the workspace configuration and ACI regristration.We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```For this notebook we need the Azure ML SDK and access to our workspace. 
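`Workspace.from_config()` in the next cell reads a local `config.json` describing the workspace. If that file does not exist yet, one way to create it is sketched below; the workspace name, subscription id, and resource group shown are placeholders that you would substitute with your own values.

```python
from azureml.core import Workspace

# Placeholder identifiers -- replace with your own before running
ws = Workspace.get(name='my-workspace',
                   subscription_id='<subscription-id>',
                   resource_group='my-resource-group')

# Persist a config.json so Workspace.from_config() works from now on
ws.write_config()
```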
The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace. ###Code import azureml.core from azureml.core import Experiment, Workspace # Check core SDK version number print("This notebook was created using version 1.0.2 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") print("") ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep='\n') ###Output _____no_output_____ ###Markdown --- DataWe will use the diabetes dataset for this experiement, a well-known small dataset that comes with scikit-learn. This cell loads the dataset and splits it into random training and testing sets. ###Code from sklearn.datasets import load_diabetes from sklearn.linear_model import Ridge from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.externals import joblib X, y = load_diabetes(return_X_y = True) columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) data = { "train":{"X": X_train, "y": y_train}, "test":{"X": X_test, "y": y_test} } print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples") ###Output _____no_output_____ ###Markdown --- TrainLet's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:* We access an experiment from our AML workspace by name, which will be created if it doesn't exist* We use `start_logging` to create a new run in this experiment* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. We will be able to review and compare these measures in the Azure Portal at a later time.* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.* We use `run.complete()` to indicate that the run is over and results can be captured and finalized ###Code # Get an experiment object from Azure Machine Learning experiment = Experiment(workspace=ws, name="train-within-notebook") # Create a run object in the experiment run = experiment.start_logging() # Log the algorithm parameter alpha to the run run.log('alpha', 0.03) # Create, fit, and test the scikit-learn Ridge regression model regression_model = Ridge(alpha=0.03) regression_model.fit(data['train']['X'], data['train']['y']) preds = regression_model.predict(data['test']['X']) # Output the Mean Squared Error to the notebook and to the run print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds)) run.log('mse', mean_squared_error(data['test']['y'], preds)) # Save the model to the outputs directory for capture model_file_name = 'outputs/model.pkl' joblib.dump(value = regression_model, filename = model_file_name) # upload the model file explicitly into artifacts run.upload_file(name = model_file_name, path_or_stream = model_file_name) # Complete the run run.complete() ###Output _____no_output_____ ###Markdown Viewing run resultsAzure Machine Learning stores all the details about the run in the Azure cloud. 
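If you prefer to stay in code rather than click through the portal, the same details are available on the run object itself. A small sketch, assuming the `run` object created in the training cell above:

```python
# Metrics logged with run.log()
print(run.get_metrics())

# Status, properties and other run details as a dictionary
details = run.get_details()
print(details['status'])

# Direct link to this run in the Azure portal
print(run.get_portal_url())
```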
Let's access those details by retrieving a link to the run using the default run output. Clicking on the resulting link will take you to an interactive page presenting all run information. ###Code run ###Output _____no_output_____ ###Markdown Simple parameter sweepNow let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy. Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.This example also uses the **tqdm** library to provide a thermometer feedback ###Code import numpy as np from tqdm import tqdm # list of numbers from 0 to 1.0 with a 0.05 interval alphas = np.arange(0.0, 1.0, 0.05) # try a bunch of alpha values in a Linear Regression (Ridge) model for alpha in tqdm(alphas): # create a bunch of runs, each train a model with a different alpha value with experiment.start_logging() as run: # Use Ridge algorithm to build a regression model regression_model = Ridge(alpha=alpha) regression_model.fit(X=data["train"]["X"], y=data["train"]["y"]) preds = regression_model.predict(X=data["test"]["X"]) mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds) # log alpha, mean_squared_error and feature names in run history run.log(name="alpha", value=alpha) run.log(name="mse", value=mse) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Viewing experiment resultsSimilar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. This way, we can see how the alpha parameter impacts the quality of the model ###Code # now let's take a look at the experiment in Azure portal. experiment ###Output _____no_output_____ ###Markdown Select the best model Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`. Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case, **mse**. To find the best run, we create a dictionary mapping the run id's to the metrics.Finally, we use the `tag` method to mark the best run to make it easier to find later. ###Code runs = {} run_metrics = {} # Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric for r in tqdm(experiment.get_runs()): metrics = r.get_metrics() if 'mse' in metrics.keys(): runs[r.id] = r run_metrics[r.id] = metrics # Find the run with the best (lowest) mean squared error and display the id and metrics best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse']) best_run = runs[best_run_id] print('Best run is:', best_run_id) print('Metrics:', run_metrics[best_run_id]) # Tag the best run for identification later best_run.tag("Best Run") ###Output _____no_output_____ ###Markdown --- DeployNow that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inference. 
The process of deploying a model involves* registering a model in your workspace* creating a scoring file containing init and run methods* creating an environment dependency file describing packages necessary for your scoring file* deploying the model and packages as a web service Register a modelWe have already identified which run contains the "best model" by our evaluation criteria. Each run has a file structure associated with it that contains various files collected during the run. Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()` ###Code # View the files in the run for f in best_run.get_file_names(): print(f) # Register the model with the workspace model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` lists all models in a workspace, and can be filtered by name, tags, or model properties. ###Code # Find all models called "best_model" and display their version numbers from azureml.core.model import Model models = Model.list(ws, name='best_model') for m in models: print(m.name, m.version) ###Output _____no_output_____ ###Markdown Create a scoring fileSince your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply the model to new data. This script is your 'scoring file'. This scoring file is a python program containing, at a minimum, two methods `init()` and `run()`. The `init()` method is called once when your deployment is started so you can load your model and any other required objects. This method uses the `get_model_path` function to locate the registered model inside the docker container. The `run()` method is called interactively when the web service is called with one or more data samples to predict.The scoring file used for this exercise is [here](score.py). Describe your environmentEach modelling process may require a unique set of packages. Therefore we need to create an environment object describing the dependencies. Next we create an inference configuration using this environment object and the scoring script that we created previously. ###Code from azureml.core.conda_dependencies import CondaDependencies from azureml.core.environment import Environment from azureml.core.model import InferenceConfig env = Environment('deploytocloudenv') env.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn']) inference_config = InferenceConfig(entry_script="score.py", environment=env) ###Output _____no_output_____ ###Markdown Describe your target computeIn addition to the inference configuration, we also need to describe the type of compute we want to allocate for our webservice. 
In in this example we are using an [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/) which is a good choice for quick and cost-effective dev/test deployment scenarios. ACI instances require the number of cores you want to run and memory you need. Tags and descriptions are available for you to identify the instances in AML when viewing the Compute tab in the AML Portal.For production workloads, it is better to use [Azure Kubernentes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Try [this notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. ###Code from azureml.core.webservice import AciWebservice aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, tags={'sample name': 'AML 101'}, description='This is a great example.') ###Output _____no_output_____ ###Markdown Deploy your webserviceThe final step to deploying your webservice is to call `Model.deploy()`. This function uses the deployment and inference configurations created above to perform the following:* Build a docker image* Deploy to the docker image to an Azure Container Instance* Copy your model files to the Azure Container Instance* Call the `init()` function in your scoring file* Provide an HTTP endpoint for scoring callsThe `Model.deploy` method requires the following parameters* `workspace` - the workspace containing the service* `name` - a unique named used to identify the service in the workspace* `models` - an array of models to be deployed into the container* `inference_config` - a configuration object describing the image environment* `deployment_config` - a configuration object describing the compute type **Note:** The web service creation can take several minutes. ###Code %%time from azureml.core.model import Model from azureml.core.webservice import Webservice # Create the webservice using all of the precreated configurations and our best model service = Model.deploy(workspace=ws, name='my-aci-svc', models=[model], inference_config=inference_config, deployment_config=aciconfig) # Wait for the service deployment to complete while displaying log output service.wait_for_deployment(show_output=True) ###Output _____no_output_____ ###Markdown Test your webservice Now that your web service is runing you can send JSON data directly to the service using the `run` method. This cell pulls the first test sample from the original dataset into JSON and then sends it to the service. ###Code import json service = ws.webservices['my-aci-svc'] # scrape the first row from the test set. test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) #score on our service service.run(input_data = test_samples) ###Output _____no_output_____ ###Markdown This cell shows how you can send multiple rows to the webservice at once. It then calculates the residuals - that is, the errors - by subtracting out the actual values from the results. These residuals are used later to show a plotted result. ###Code # score the entire test set. test_samples = json.dumps({'data': X_test.tolist()}) result = service.run(input_data = test_samples) residual = result - y_test ###Output _____no_output_____ ###Markdown This cell shows how you can use the `service.scoring_uri` property to access the HTTP endpoint of the service and call it using standard POST operations. 
###Code import requests # use the first row from the test set again test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) # create the required header headers = {'Content-Type':'application/json'} # post the request to the service and display the result resp = requests.post(service.scoring_uri, test_samples, headers = headers) print(resp.text) ###Output _____no_output_____ ###Markdown Residual graphOne way to understand the behavior of your model is to see how the data performs against data with known results. This cell uses matplotlib to create a histogram of the residual values, or errors, created from scoring the test samples.A good model should have residual values that cluster around 0 - that is, no error. Observing the resulting histogram can also show you if the model is skewed in any particular direction. ###Code %matplotlib inline import matplotlib.pyplot as plt f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0}) f.suptitle('Residual Values', fontsize = 18) f.set_figheight(6) f.set_figwidth(14) a0.plot(residual, 'bo', alpha=0.4) a0.plot([0,90], [0,0], 'r', lw=2) a0.set_ylabel('residue values', fontsize=14) a0.set_xlabel('test data set', fontsize=14) a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step') a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10) a1.set_yticklabels([]) plt.show() ###Output _____no_output_____ ###Markdown Clean up Delete the ACI instance to stop the compute and any associated billing. ###Code %%time service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/training/train-within-notebook/train-within-notebook.png) Train and deploy a model_**Create and deploy a model directly from a notebook**_------ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) 1. Viewing run results 1. Simple parameter sweep 1. Viewing experiment results 1. Select the best model1. [Deploy](Deploy) 1. Register the model 1. Create a scoring file 1. Describe your environment 1. Descrice your target compute 1. Deploy your webservice 1. Test your webservice 1. Clean up1. [Next Steps](nextsteps)--- IntroductionAzure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. In this notebook we will* connect to our AML Workspace* create an experiment that contains multiple runs with tracked metrics* choose the best model created across all runs* deploy that model as a serviceIn the end we will have a model deployed as a web service which we can call from an HTTP endpoint --- SetupIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't already to establish your connection to the AzureML Workspace. From the configuration, the important sections are the workspace configuration and ACI regristration.We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```For this notebook we need the Azure ML SDK and access to our workspace. 
The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace. ###Code import azureml.core from azureml.core import Experiment, Workspace # Check core SDK version number print("This notebook was created using version 1.0.2 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") print("") ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep='\n') ###Output _____no_output_____ ###Markdown --- DataWe will use the diabetes dataset for this experiement, a well-known small dataset that comes with scikit-learn. This cell loads the dataset and splits it into random training and testing sets. ###Code from sklearn.datasets import load_diabetes from sklearn.linear_model import Ridge from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.externals import joblib X, y = load_diabetes(return_X_y = True) columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) data = { "train":{"X": X_train, "y": y_train}, "test":{"X": X_test, "y": y_test} } print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples") ###Output _____no_output_____ ###Markdown --- TrainLet's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:* We access an experiment from our AML workspace by name, which will be created if it doesn't exist* We use `start_logging` to create a new run in this experiment* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. We will be able to review and compare these measures in the Azure Portal at a later time.* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.* We use `run.complete()` to indicate that the run is over and results can be captured and finalized ###Code # Get an experiment object from Azure Machine Learning experiment = Experiment(workspace=ws, name="train-within-notebook") # Create a run object in the experiment run = experiment.start_logging() # Log the algorithm parameter alpha to the run run.log('alpha', 0.03) # Create, fit, and test the scikit-learn Ridge regression model regression_model = Ridge(alpha=0.03) regression_model.fit(data['train']['X'], data['train']['y']) preds = regression_model.predict(data['test']['X']) # Output the Mean Squared Error to the notebook and to the run print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds)) run.log('mse', mean_squared_error(data['test']['y'], preds)) # Save the model to the outputs directory for capture model_file_name = 'outputs/model.pkl' joblib.dump(value = regression_model, filename = model_file_name) # upload the model file explicitly into artifacts run.upload_file(name = model_file_name, path_or_stream = model_file_name) # Complete the run run.complete() ###Output _____no_output_____ ###Markdown Viewing run resultsAzure Machine Learning stores all the details about the run in the Azure cloud. 
Let's access those details by retrieving a link to the run using the default run output. Clicking on the resulting link will take you to an interactive page presenting all run information. ###Code run ###Output _____no_output_____ ###Markdown Simple parameter sweepNow let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy. Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.This example also uses the **tqdm** library to provide a thermometer feedback ###Code import numpy as np from tqdm import tqdm # list of numbers from 0 to 1.0 with a 0.05 interval alphas = np.arange(0.0, 1.0, 0.05) # try a bunch of alpha values in a Linear Regression (Ridge) model for alpha in tqdm(alphas): # create a bunch of runs, each train a model with a different alpha value with experiment.start_logging() as run: # Use Ridge algorithm to build a regression model regression_model = Ridge(alpha=alpha) regression_model.fit(X=data["train"]["X"], y=data["train"]["y"]) preds = regression_model.predict(X=data["test"]["X"]) mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds) # log alpha, mean_squared_error and feature names in run history run.log(name="alpha", value=alpha) run.log(name="mse", value=mse) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Viewing experiment resultsSimilar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. This way, we can see how the alpha parameter impacts the quality of the model ###Code # now let's take a look at the experiment in Azure portal. experiment ###Output _____no_output_____ ###Markdown Select the best model Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`. Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case, **mse**. To find the best run, we create a dictionary mapping the run id's to the metrics.Finally, we use the `tag` method to mark the best run to make it easier to find later. ###Code runs = {} run_metrics = {} # Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric for r in tqdm(experiment.get_runs()): metrics = r.get_metrics() if 'mse' in metrics.keys(): runs[r.id] = r run_metrics[r.id] = metrics # Find the run with the best (lowest) mean squared error and display the id and metrics best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse']) best_run = runs[best_run_id] print('Best run is:', best_run_id) print('Metrics:', run_metrics[best_run_id]) # Tag the best run for identification later best_run.tag("Best Run") ###Output _____no_output_____ ###Markdown --- DeployNow that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inferencing. 
The process of deploying a model involves* registering a model in your workspace* creating a scoring file containing init and run methods* creating an environment dependency file describing packages necessary for your scoring file* creating a docker image containing a properly described environment, your model, and your scoring file* deploying that docker image as a web service Register a modelWe have already identified which run contains the "best model" by our evaluation criteria. Each run has a file structure associated with it that contains various files collected during the run. Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()` ###Code # View the files in the run for f in best_run.get_file_names(): print(f) # Register the model with the workspace model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` lists all models in a workspace, and can be filtered by name, tags, or model properties. ###Code # Find all models called "best_model" and display their version numbers from azureml.core.model import Model models = Model.list(ws, name='best_model') for m in models: print(m.name, m.version) ###Output _____no_output_____ ###Markdown Create a scoring fileSince your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply the model to new data. This script is your 'scoring file'. This scoring file is a python program containing, at a minimum, two methods `init()` and `run()`. The `init()` method is called once when your deployment is started so you can load your model and any other required objects. This method uses the `get_model_path` function to locate the registered model inside the docker container. The `run()` method is called interactively when the web service is called with one or more data samples to predict.The scoring file used for this exercise is [here](score.py). Describe your environmentEach modelling process may require a unique set of packages. Therefore we need to create a dependency file providing instructions to AML on how to contstruct a docker image that can support the models and any other objects required for inferencing. In the following cell, we create a environment dependency file, *myenv.yml* that specifies which libraries are needed by the scoring script. You can create this file manually, or use the `CondaDependencies` class to create it for you.Next we use this environment file to describe the docker container that we need to create in order to deploy our model. This container is created using our environment description and includes our scoring script. 
###Code from azureml.core.conda_dependencies import CondaDependencies from azureml.core.image import ContainerImage # Create an empty conda environment and add the scikit-learn package env = CondaDependencies() env.add_conda_package("scikit-learn") # Display the environment print(env.serialize_to_string()) # Write the environment to disk with open("myenv.yml","w") as f: f.write(env.serialize_to_string()) # Create a configuration object indicating how our deployment container needs to be created image_config = ContainerImage.image_configuration(execution_script="score.py", runtime="python", conda_file="myenv.yml") ###Output _____no_output_____ ###Markdown Describe your target computeIn addition to the container, we also need to describe the type of compute we want to allocate for our webservice. In in this example we are using an [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/) which is a good choice for quick and cost-effective dev/test deployment scenarios. ACI instances require the number of cores you want to run and memory you need. Tags and descriptions are available for you to identify the instances in AML when viewing the Compute tab in the AML Portal.For production workloads, it is better to use [Azure Kubernentes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Try [this notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. ###Code from azureml.core.webservice import AciWebservice aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, tags={'sample name': 'AML 101'}, description='This is a great example.') ###Output _____no_output_____ ###Markdown Deploy your webserviceThe final step to deploying your webservice is to call `WebService.deploy_from_model()`. This function uses the deployment and image configurations created above to perform the following:* Build a docker image* Deploy to the docker image to an Azure Container Instance* Copy your model files to the Azure Container Instance* Call the `init()` function in your scoring file* Provide an HTTP endpoint for scoring callsThe `deploy_from_model` method requires the following parameters* `workspace` - the workspace containing the service* `name` - a unique named used to identify the service in the workspace* `models` - an array of models to be deployed into the container* `image_config` - a configuration object describing the image environment* `deployment_config` - a configuration object describing the compute type **Note:** The web service creation can take several minutes. ###Code %%time from azureml.core.webservice import Webservice # Create the webservice using all of the precreated configurations and our best model service = Webservice.deploy_from_model(name='my-aci-svc', deployment_config=aciconfig, models=[model], image_config=image_config, workspace=ws) # Wait for the service deployment to complete while displaying log output service.wait_for_deployment(show_output=True) ###Output _____no_output_____ ###Markdown Test your webservice Now that your web service is runing you can send JSON data directly to the service using the `run` method. This cell pulls the first test sample from the original dataset into JSON and then sends it to the service. ###Code import json service = ws.webservices['my-aci-svc'] # scrape the first row from the test set. 
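# Note: the payload is a JSON string with a "data" key holding a list of rows;
# the service's run() method (see score.py) receives this string and is
# responsible for parsing those rows before predicting.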
test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) #score on our service service.run(input_data = test_samples) ###Output _____no_output_____ ###Markdown This cell shows how you can send multiple rows to the webservice at once. It then calculates the residuals - that is, the errors - by subtracting out the actual values from the results. These residuals are used later to show a plotted result. ###Code # score the entire test set. test_samples = json.dumps({'data': X_test.tolist()}) result = service.run(input_data = test_samples) residual = result - y_test ###Output _____no_output_____ ###Markdown This cell shows how you can use the `service.scoring_uri` property to access the HTTP endpoint of the service and call it using standard POST operations. ###Code import requests # use the first row from the test set again test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) # create the required header headers = {'Content-Type':'application/json'} # post the request to the service and display the result resp = requests.post(service.scoring_uri, test_samples, headers = headers) print(resp.text) ###Output _____no_output_____ ###Markdown Residual graphOne way to understand the behavior of your model is to see how the data performs against data with known results. This cell uses matplotlib to create a histogram of the residual values, or errors, created from scoring the test samples.A good model should have residual values that cluster around 0 - that is, no error. Observing the resulting histogram can also show you if the model is skewed in any particular direction. ###Code %matplotlib inline import matplotlib.pyplot as plt f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0}) f.suptitle('Residual Values', fontsize = 18) f.set_figheight(6) f.set_figwidth(14) a0.plot(residual, 'bo', alpha=0.4) a0.plot([0,90], [0,0], 'r', lw=2) a0.set_ylabel('residue values', fontsize=14) a0.set_xlabel('test data set', fontsize=14) a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step') a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10) a1.set_yticklabels([]) plt.show() ###Output _____no_output_____ ###Markdown Clean up Delete the ACI instance to stop the compute and any associated billing. ###Code %%time service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Train and deploy a model_**Create and deploy a model directly from a notebook**_------ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) 1. Viewing run results 1. Simple parameter sweep 1. Viewing experiment results 1. Select the best model1. [Deploy](Deploy) 1. Register the model 1. Create a scoring file 1. Describe your environment 1. Descrice your target compute 1. Deploy your webservice 1. Test your webservice 1. Clean up1. [Next Steps](Next%20Steps)--- IntroductionAzure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. 
In this notebook we will
* connect to our AML Workspace
* create an experiment that contains multiple runs with tracked metrics
* choose the best model created across all runs
* deploy that model as a service

In the end we will have a model deployed as a web service which we can call from an HTTP endpoint

---

Setup

Make sure you have completed the [Configuration](../../../configuration.ipynb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met. From the configuration, the important sections are the workspace configuration and ACI registration.

We will also need the following libraries installed in our conda environment. If these are not installed, use the following command to do so and restart the notebook.
```shell
(myenv) $ conda install -y matplotlib tqdm scikit-learn
```
For this notebook we need the Azure ML SDK and access to our workspace. The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace.

###Code

import azureml.core
from azureml.core import Experiment, Workspace

# Check core SDK version number
print("This notebook was created using version 1.0.2 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
print("")

ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
      'Azure region: ' + ws.location,
      'Subscription id: ' + ws.subscription_id,
      'Resource group: ' + ws.resource_group, sep='\n')

###Output

 _____no_output_____

###Markdown

---

Data

We will use the diabetes dataset for this experiment, a well-known small dataset that comes with scikit-learn. This cell loads the dataset and splits it into random training and testing sets.

###Code

from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.externals import joblib

X, y = load_diabetes(return_X_y = True)
columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
data = {
    "train":{"X": X_train, "y": y_train},
    "test":{"X": X_test, "y": y_test}
}

print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples")

###Output

 _____no_output_____

###Markdown

---

Train

Let's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:
* We access an experiment from our AML workspace by name, which will be created if it doesn't exist
* We use `start_logging` to create a new run in this experiment
* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run.
We will be able to review and compare these measures in the Azure Portal at a later time.* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.* We use `run.complete()` to indicate that the run is over and results can be captured and finalized ###Code # Get an experiment object from Azure Machine Learning experiment = Experiment(workspace=ws, name="train-within-notebook") # Create a run object in the experiment run = experiment.start_logging() # Log the algorithm parameter alpha to the run run.log('alpha', 0.03) # Create, fit, and test the scikit-learn Ridge regression model regression_model = Ridge(alpha=0.03) regression_model.fit(data['train']['X'], data['train']['y']) preds = regression_model.predict(data['test']['X']) # Output the Mean Squared Error to the notebook and to the run print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds)) run.log('mse', mean_squared_error(data['test']['y'], preds)) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') # Complete the run run.complete() ###Output _____no_output_____ ###Markdown Viewing run resultsAzure Machine Learning stores all the details about the run in the Azure cloud. Let's access those details by retrieving a link to the run using the default run output. Clicking on the resulting link will take you to an interactive page presenting all run information. ###Code run ###Output _____no_output_____ ###Markdown Simple parameter sweepNow let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy. Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.This example also uses the **tqdm** library to provide a thermometer feedback ###Code import numpy as np from tqdm import tqdm model_name = "model.pkl" # list of numbers from 0 to 1.0 with a 0.05 interval alphas = np.arange(0.0, 1.0, 0.05) # try a bunch of alpha values in a Linear Regression (Ridge) model for alpha in tqdm(alphas): # create a bunch of runs, each train a model with a different alpha value with experiment.start_logging() as run: # Use Ridge algorithm to build a regression model regression_model = Ridge(alpha=alpha) regression_model.fit(X=data["train"]["X"], y=data["train"]["y"]) preds = regression_model.predict(X=data["test"]["X"]) mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds) # log alpha, mean_squared_error and feature names in run history run.log(name="alpha", value=alpha) run.log(name="mse", value=mse) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Viewing experiment resultsSimilar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. This way, we can see how the alpha parameter impacts the quality of the model ###Code # now let's take a look at the experiment in Azure portal. experiment ###Output _____no_output_____ ###Markdown Select the best model Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. 
For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`. Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case, **mse**. To find the best run, we create a dictionary mapping the run id's to the metrics.Finally, we use the `tag` method to mark the best run to make it easier to find later. ###Code runs = {} run_metrics = {} # Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric for r in tqdm(experiment.get_runs()): metrics = r.get_metrics() if 'mse' in metrics.keys(): runs[r.id] = r run_metrics[r.id] = metrics # Find the run with the best (lowest) mean squared error and display the id and metrics best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse']) best_run = runs[best_run_id] print('Best run is:', best_run_id) print('Metrics:', run_metrics[best_run_id]) # Tag the best run for identification later best_run.tag("Best Run") ###Output _____no_output_____ ###Markdown --- DeployNow that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inferencing. The process of deploying a model involves* registering a model in your workspace* creating a scoring file containing init and run methods* creating an environment dependency file describing packages necessary for your scoring file* creating a docker image containing a properly described environment, your model, and your scoring file* deploying that docker image as a web service Register a modelWe have already identified which run contains the "best model" by our evaluation criteria. Each run has a file structure associated with it that contains various files collected during the run. Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()` ###Code # View the files in the run for f in best_run.get_file_names(): print(f) # Register the model with the workspace model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` lists all models in a workspace, and can be filtered by name, tags, or model properties. ###Code # Find all models called "best_model" and display their version numbers from azureml.core.model import Model models = Model.list(ws, name='best_model') for m in models: print(m.name, m.version) ###Output _____no_output_____ ###Markdown Create a scoring fileSince your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply the model to new data. This script is your 'scoring file'. This scoring file is a python program containing, at a minimum, two methods `init()` and `run()`. 
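The actual script used for this exercise is linked below; purely as an illustration of the shape such a script usually takes, a hypothetical sketch (not the exact contents of score.py) might look like this:

```python
import json
import numpy as np
from sklearn.externals import joblib
from azureml.core.model import Model

def init():
    # Called once when the container starts: locate and load the registered model
    global model
    model_path = Model.get_model_path('best_model')
    model = joblib.load(model_path)

def run(raw_data):
    # Called per scoring request: parse the JSON payload, predict, return results
    try:
        data = np.array(json.loads(raw_data)['data'])
        return model.predict(data).tolist()
    except Exception as e:
        return str(e)
```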
The `init()` method is called once when your deployment is started so you can load your model and any other required objects. This method uses the `get_model_path` function to locate the registered model inside the docker container. The `run()` method is called interactively when the web service is called with one or more data samples to predict.The scoring file used for this exercise is [here](score.py). Describe your environmentEach modelling process may require a unique set of packages. Therefore we need to create a dependency file providing instructions to AML on how to contstruct a docker image that can support the models and any other objects required for inferencing. In the following cell, we create a environment dependency file, *myenv.yml* that specifies which libraries are needed by the scoring script. You can create this file manually, or use the `CondaDependencies` class to create it for you.Next we use this environment file to describe the docker container that we need to create in order to deploy our model. This container is created using our environment description and includes our scoring script. ###Code from azureml.core.conda_dependencies import CondaDependencies from azureml.core.image import ContainerImage # Create an empty conda environment and add the scikit-learn package env = CondaDependencies() env.add_conda_package("scikit-learn") # Display the environment print(env.serialize_to_string()) # Write the environment to disk with open("myenv.yml","w") as f: f.write(env.serialize_to_string()) # Create a configuration object indicating how our deployment container needs to be created image_config = ContainerImage.image_configuration(execution_script="score.py", runtime="python", conda_file="myenv.yml") ###Output _____no_output_____ ###Markdown Describe your target computeIn addition to the container, we also need to describe the type of compute we want to allocate for our webservice. In in this example we are using an [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/) which is a good choice for quick and cost-effective dev/test deployment scenarios. ACI instances require the number of cores you want to run and memory you need. Tags and descriptions are available for you to identify the instances in AML when viewing the Compute tab in the AML Portal.For production workloads, it is better to use [Azure Kubernentes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Try [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. ###Code from azureml.core.webservice import AciWebservice aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, tags={'sample name': 'AML 101'}, description='This is a great example.') ###Output _____no_output_____ ###Markdown Deploy your webserviceThe final step to deploying your webservice is to call `WebService.deploy_from_model()`. 
This function uses the deployment and image configurations created above to perform the following:* Build a docker image* Deploy to the docker image to an Azure Container Instance* Copy your model files to the Azure Container Instance* Call the `init()` function in your scoring file* Provide an HTTP endpoint for scoring callsThe `deploy_from_model` method requires the following parameters* `workspace` - the workspace containing the service* `name` - a unique named used to identify the service in the workspace* `models` - an array of models to be deployed into the container* `image_config` - a configuration object describing the image environment* `deployment_config` - a configuration object describing the compute type **Note:** The web service creation can take several minutes. ###Code %%time from azureml.core.webservice import Webservice # Create the webservice using all of the precreated configurations and our best model service = Webservice.deploy_from_model(name='my-aci-svc', deployment_config=aciconfig, models=[model], image_config=image_config, workspace=ws) # Wait for the service deployment to complete while displaying log output service.wait_for_deployment(show_output=True) ###Output _____no_output_____ ###Markdown Test your webservice Now that your web service is runing you can send JSON data directly to the service using the `run` method. This cell pulls the first test sample from the original dataset into JSON and then sends it to the service. ###Code import json # scrape the first row from the test set. test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) #score on our service service.run(input_data = test_samples) ###Output _____no_output_____ ###Markdown This cell shows how you can send multiple rows to the webservice at once. It then calculates the residuals - that is, the errors - by subtracting out the actual values from the results. These residuals are used later to show a plotted result. ###Code # score the entire test set. test_samples = json.dumps({'data': X_test.tolist()}) result = service.run(input_data = test_samples) residual = result - y_test ###Output _____no_output_____ ###Markdown This cell shows how you can use the `service.scoring_uri` property to access the HTTP endpoint of the service and call it using standard POST operations. ###Code import requests # use the first row from the test set again test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) # create the required header headers = {'Content-Type':'application/json'} # post the request to the service and display the result resp = requests.post(service.scoring_uri, test_samples, headers = headers) print(resp.text) ###Output _____no_output_____ ###Markdown Residual graphOne way to understand the behavior of your model is to see how the data performs against data with known results. This cell uses matplotlib to create a histogram of the residual values, or errors, created from scoring the test samples.A good model should have residual values that cluster around 0 - that is, no error. Observing the resulting histogram can also show you if the model is skewed in any particular direction. 
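Before plotting, it can also help to summarise the residuals numerically. A small optional sketch, assuming the `residual` array computed a couple of cells above:

```python
import numpy as np

residual = np.asarray(residual)
print('mean residual :', residual.mean())              # close to 0 suggests little systematic bias
print('RMSE          :', np.sqrt((residual ** 2).mean()))
print('max |residual|:', np.abs(residual).max())
```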
###Code %matplotlib inline
import matplotlib.pyplot as plt

f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0})
f.suptitle('Residual Values', fontsize = 18)
f.set_figheight(6)
f.set_figwidth(14)

a0.plot(residual, 'bo', alpha=0.4)
a0.plot([0,90], [0,0], 'r', lw=2)
a0.set_ylabel('residue values', fontsize=14)
a0.set_xlabel('test data set', fontsize=14)

a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step')
a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10)
a1.set_yticklabels([])

plt.show() ###Output _____no_output_____ ###Markdown Clean up Delete the ACI instance to stop the compute and any associated billing. ###Code %%time
service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Train and deploy a model_**Create and deploy a model directly from a notebook**_------ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) 1. Viewing run results 1. Simple parameter sweep 1. Viewing experiment results 1. Select the best model1. [Deploy](Deploy) 1. Register the model 1. Create a scoring file 1. Describe your environment 1. Describe your target compute 1. Deploy your webservice 1. Test your webservice 1. Clean up1. [Next Steps](Next%20Steps)--- IntroductionAzure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. In this notebook we will* connect to our AML Workspace* create an experiment that contains multiple runs with tracked metrics* choose the best model created across all runs* deploy that model as a serviceIn the end we will have a model deployed as a web service which we can call from an HTTP endpoint --- SetupMake sure you have completed the [Configuration](../../../configuration.ipynb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met. From the configuration, the important sections are the workspace configuration and ACI registration.We will also need the following libraries installed in our conda environment. If they are not installed, use the following command to do so and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```For this notebook we need the Azure ML SDK and access to our workspace. The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace. ###Code import azureml.core
from azureml.core import Experiment, Workspace

# Check core SDK version number
print("This notebook was created using version 1.0.10 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
print("")

ws = Workspace.from_config()
print('Workspace name: ' + ws.name, 
      'Azure region: ' + ws.location, 
      'Subscription id: ' + ws.subscription_id, 
      'Resource group: ' + ws.resource_group, sep='\n') ###Output _____no_output_____ ###Markdown --- DataWe will use the diabetes dataset for this experiment, a well-known small dataset that comes with scikit-learn. This cell loads the dataset and splits it into random training and testing sets. 
###Code from sklearn.datasets import load_diabetes from sklearn.linear_model import Ridge from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.externals import joblib X, y = load_diabetes(return_X_y = True) columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) data = { "train":{"X": X_train, "y": y_train}, "test":{"X": X_test, "y": y_test} } print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples") ###Output _____no_output_____ ###Markdown --- TrainLet's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:* We access an experiment from our AML workspace by name, which will be created if it doesn't exist* We use `start_logging` to create a new run in this experiment* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. We will be able to review and compare these measures in the Azure Portal at a later time.* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.* We use `run.take_snapshot()` to capture *this* notebook so we can reproduce this experiment at a later time.* We use `run.complete()` to indicate that the run is over and results can be captured and finalized ###Code # Get an experiment object from Azure Machine Learning experiment = Experiment(workspace=ws, name="train-within-notebook") # Create a run object in the experiment run = experiment.start_logging() # Log the algorithm parameter alpha to the run run.log('alpha', 0.03) # Create, fit, and test the scikit-learn Ridge regression model regression_model = Ridge(alpha=0.03) regression_model.fit(data['train']['X'], data['train']['y']) preds = regression_model.predict(data['test']['X']) # Output the Mean Squared Error to the notebook and to the run print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds)) run.log('mse', mean_squared_error(data['test']['y'], preds)) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') # Take a snapshot of the directory containing this notebook run.take_snapshot('./') # Complete the run run.complete() ###Output _____no_output_____ ###Markdown Viewing run resultsAzure Machine Learning stores all the details about the run in the Azure cloud. Let's access those details by retrieving a link to the run using the default run output. Clicking on the resulting link will take you to an interactive page presenting all run information. ###Code run ###Output _____no_output_____ ###Markdown Simple parameter sweepNow let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy. 
Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.This example also uses the **tqdm** library to provide a thermometer feedback ###Code import numpy as np from tqdm import tqdm model_name = "model.pkl" # list of numbers from 0 to 1.0 with a 0.05 interval alphas = np.arange(0.0, 1.0, 0.05) # try a bunch of alpha values in a Linear Regression (Ridge) model for alpha in tqdm(alphas): # create a bunch of runs, each train a model with a different alpha value with experiment.start_logging() as run: # Use Ridge algorithm to build a regression model regression_model = Ridge(alpha=alpha) regression_model.fit(X=data["train"]["X"], y=data["train"]["y"]) preds = regression_model.predict(X=data["test"]["X"]) mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds) # log alpha, mean_squared_error and feature names in run history run.log(name="alpha", value=alpha) run.log(name="mse", value=mse) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') # Capture this notebook with the run run.take_snapshot('./') ###Output _____no_output_____ ###Markdown Viewing experiment resultsSimilar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. This way, we can see how the alpha parameter impacts the quality of the model ###Code # now let's take a look at the experiment in Azure portal. experiment ###Output _____no_output_____ ###Markdown Select the best model Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`. Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case, **mse**. To find the best run, we create a dictionary mapping the run id's to the metrics.Finally, we use the `tag` method to mark the best run to make it easier to find later. ###Code runs = {} run_metrics = {} # Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric for r in tqdm(experiment.get_runs()): metrics = r.get_metrics() if 'mse' in metrics.keys(): runs[r.id] = r run_metrics[r.id] = metrics # Find the run with the best (lowest) mean squared error and display the id and metrics best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse']) best_run = runs[best_run_id] print('Best run is:', best_run_id) print('Metrics:', run_metrics[best_run_id]) # Tag the best run for identification later best_run.tag("Best Run") ###Output _____no_output_____ ###Markdown --- DeployNow that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inferencing. The process of deploying a model involves* registering a model in your workspace* creating a scoring file containing init and run methods* creating an environment dependency file describing packages necessary for your scoring file* creating a docker image containing a properly described environment, your model, and your scoring file* deploying that docker image as a web service Register a modelWe have already identified which run contains the "best model" by our evaluation criteria. 
Each run has a file structure associated with it that contains various files collected during the run. Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()` ###Code # View the files in the run
for f in best_run.get_file_names():
    print(f)

# Register the model with the workspace
model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` method lists all models in a workspace, and can be filtered by name, tags, or model properties. ###Code # Find all models called "best_model" and display their version numbers
from azureml.core.model import Model
models = Model.list(ws, name='best_model')
for m in models:
    print(m.name, m.version) ###Output _____no_output_____ ###Markdown Create a scoring fileSince your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply the model to new data. This script is your 'scoring file'. This scoring file is a python program containing, at a minimum, two methods `init()` and `run()`. The `init()` method is called once when your deployment is started so you can load your model and any other required objects. This method uses the `get_model_path` function to locate the registered model inside the docker container. The `run()` method is called interactively when the web service is called with one or more data samples to predict.The scoring file used for this exercise is [here](score.py); a minimal sketch of such a file is shown below. Describe your environmentEach modelling process may require a unique set of packages. Therefore we need to create a dependency file providing instructions to AML on how to construct a docker image that can support the models and any other objects required for inferencing. Below, we create an environment dependency file, *myenv.yml*, that specifies which libraries are needed by the scoring script. You can create this file manually, or use the `CondaDependencies` class to create it for you.Next we use this environment file to describe the docker container that we need to create in order to deploy our model. This container is created using our environment description and includes our scoring script. 
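For reference, the following is a minimal sketch of what a scoring file of this kind typically contains. It is illustrative only - the actual file used in this exercise is the linked score.py - and it assumes the model was registered under the name 'best_model' as above. ###Code import json
import numpy as np
from sklearn.externals import joblib
from azureml.core.model import Model

# Minimal scoring-file sketch (illustrative; the real score.py is linked above)
def init():
    # Called once when the container starts: locate and load the registered model
    global model
    model_path = Model.get_model_path('best_model')  # assumes the registration name used earlier
    model = joblib.load(model_path)

def run(raw_data):
    # Called per request: parse a JSON payload of the form {"data": [[...], ...]} and return predictions
    data = np.array(json.loads(raw_data)['data'])
    return model.predict(data).tolist() ###Output _____no_output_____ ###Markdown The next cell builds the environment dependency file described above.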
###Code from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.image import ContainerImage

# Create an empty conda environment and add the scikit-learn package
env = CondaDependencies()
env.add_conda_package("scikit-learn")

# Display the environment
print(env.serialize_to_string())

# Write the environment to disk
with open("myenv.yml","w") as f:
    f.write(env.serialize_to_string())

# Create a configuration object indicating how our deployment container needs to be created
image_config = ContainerImage.image_configuration(execution_script="score.py", 
                                                  runtime="python", 
                                                  conda_file="myenv.yml") ###Output _____no_output_____ ###Markdown Describe your target computeIn addition to the container, we also need to describe the type of compute we want to allocate for our webservice. In this example we are using an [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/), which is a good choice for quick and cost-effective dev/test deployment scenarios. ACI instances require the number of cores and the amount of memory you need. Tags and descriptions are available for you to identify the instances in AML when viewing the Compute tab in the AML Portal.For production workloads, it is better to use [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Try [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. ###Code from azureml.core.webservice import AciWebservice

aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, 
                                               memory_gb=1, 
                                               tags={'sample name': 'AML 101'}, 
                                               description='This is a great example.') ###Output _____no_output_____ ###Markdown Deploy your webserviceThe final step to deploying your webservice is to call `Webservice.deploy_from_model()`. This function uses the deployment and image configurations created above to perform the following:* Build a docker image* Deploy the docker image to an Azure Container Instance* Copy your model files to the Azure Container Instance* Call the `init()` function in your scoring file* Provide an HTTP endpoint for scoring callsThe `deploy_from_model` method requires the following parameters* `workspace` - the workspace containing the service* `name` - a unique name used to identify the service in the workspace* `models` - an array of models to be deployed into the container* `image_config` - a configuration object describing the image environment* `deployment_config` - a configuration object describing the compute type **Note:** The web service creation can take several minutes. ###Code %%time
from azureml.core.webservice import Webservice

# Create the webservice using all of the precreated configurations and our best model
service = Webservice.deploy_from_model(name='my-aci-svc',
                                       deployment_config=aciconfig,
                                       models=[model],
                                       image_config=image_config,
                                       workspace=ws)

# Wait for the service deployment to complete while displaying log output
service.wait_for_deployment(show_output=True) ###Output _____no_output_____ ###Markdown Test your webservice Now that your web service is running you can send JSON data directly to the service using the `run` method. This cell pulls the first test sample from the original dataset into JSON and then sends it to the service. ###Code import json

# scrape the first row from the test set. 
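# Build a JSON payload of the form {"data": [[feature values, ...]]} - one inner list per sample -
# which is the format the scoring script's run() method is expected to parse.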
test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) #score on our service service.run(input_data = test_samples) ###Output _____no_output_____ ###Markdown This cell shows how you can send multiple rows to the webservice at once. It then calculates the residuals - that is, the errors - by subtracting out the actual values from the results. These residuals are used later to show a plotted result. ###Code # score the entire test set. test_samples = json.dumps({'data': X_test.tolist()}) result = service.run(input_data = test_samples) residual = result - y_test ###Output _____no_output_____ ###Markdown This cell shows how you can use the `service.scoring_uri` property to access the HTTP endpoint of the service and call it using standard POST operations. ###Code import requests # use the first row from the test set again test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) # create the required header headers = {'Content-Type':'application/json'} # post the request to the service and display the result resp = requests.post(service.scoring_uri, test_samples, headers = headers) print(resp.text) ###Output _____no_output_____ ###Markdown Residual graphOne way to understand the behavior of your model is to see how the data performs against data with known results. This cell uses matplotlib to create a histogram of the residual values, or errors, created from scoring the test samples.A good model should have residual values that cluster around 0 - that is, no error. Observing the resulting histogram can also show you if the model is skewed in any particular direction. ###Code %matplotlib inline import matplotlib.pyplot as plt f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0}) f.suptitle('Residual Values', fontsize = 18) f.set_figheight(6) f.set_figwidth(14) a0.plot(residual, 'bo', alpha=0.4) a0.plot([0,90], [0,0], 'r', lw=2) a0.set_ylabel('residue values', fontsize=14) a0.set_xlabel('test data set', fontsize=14) a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step') a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10) a1.set_yticklabels([]) plt.show() ###Output _____no_output_____ ###Markdown Clean up Delete the ACI instance to stop the compute and any associated billing. ###Code %%time service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/training/train-within-notebook/train-within-notebook.png) Train and deploy a model_**Create and deploy a model directly from a notebook**_------ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) 1. Viewing run results 1. Simple parameter sweep 1. Viewing experiment results 1. Select the best model1. [Deploy](Deploy) 1. Register the model 1. Create a scoring file 1. Describe your environment 1. Descrice your target compute 1. Deploy your webservice 1. Test your webservice 1. Clean up1. [Next Steps](nextsteps)--- IntroductionAzure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. 
In this notebook we will* connect to our AML Workspace* create an experiment that contains multiple runs with tracked metrics* choose the best model created across all runs* deploy that model as a serviceIn the end we will have a model deployed as a web service which we can call from an HTTP endpoint --- SetupIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't already to establish your connection to the AzureML Workspace. From the configuration, the important sections are the workspace configuration and ACI regristration.We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```For this notebook we need the Azure ML SDK and access to our workspace. The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace. ###Code import azureml.core from azureml.core import Experiment, Workspace # Check core SDK version number print("This notebook was created using version 1.0.2 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") print("") ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep='\n') ###Output _____no_output_____ ###Markdown --- DataWe will use the diabetes dataset for this experiement, a well-known small dataset that comes with scikit-learn. This cell loads the dataset and splits it into random training and testing sets. ###Code from sklearn.datasets import load_diabetes from sklearn.linear_model import Ridge from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.externals import joblib X, y = load_diabetes(return_X_y = True) columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) data = { "train":{"X": X_train, "y": y_train}, "test":{"X": X_test, "y": y_test} } print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples") ###Output _____no_output_____ ###Markdown --- TrainLet's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:* We access an experiment from our AML workspace by name, which will be created if it doesn't exist* We use `start_logging` to create a new run in this experiment* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. 
We will be able to review and compare these measures in the Azure Portal at a later time.* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.* We use `run.complete()` to indicate that the run is over and results can be captured and finalized ###Code # Get an experiment object from Azure Machine Learning experiment = Experiment(workspace=ws, name="train-within-notebook") # Create a run object in the experiment run = experiment.start_logging() # Log the algorithm parameter alpha to the run run.log('alpha', 0.03) # Create, fit, and test the scikit-learn Ridge regression model regression_model = Ridge(alpha=0.03) regression_model.fit(data['train']['X'], data['train']['y']) preds = regression_model.predict(data['test']['X']) # Output the Mean Squared Error to the notebook and to the run print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds)) run.log('mse', mean_squared_error(data['test']['y'], preds)) # Save the model to the outputs directory for capture model_file_name = 'outputs/model.pkl' joblib.dump(value = regression_model, filename = model_file_name) # upload the model file explicitly into artifacts run.upload_file(name = model_file_name, path_or_stream = model_file_name) # Complete the run run.complete() ###Output _____no_output_____ ###Markdown Viewing run resultsAzure Machine Learning stores all the details about the run in the Azure cloud. Let's access those details by retrieving a link to the run using the default run output. Clicking on the resulting link will take you to an interactive page presenting all run information. ###Code run ###Output _____no_output_____ ###Markdown Simple parameter sweepNow let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy. Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.This example also uses the **tqdm** library to provide a thermometer feedback ###Code import numpy as np from tqdm import tqdm # list of numbers from 0 to 1.0 with a 0.05 interval alphas = np.arange(0.0, 1.0, 0.05) # try a bunch of alpha values in a Linear Regression (Ridge) model for alpha in tqdm(alphas): # create a bunch of runs, each train a model with a different alpha value with experiment.start_logging() as run: # Use Ridge algorithm to build a regression model regression_model = Ridge(alpha=alpha) regression_model.fit(X=data["train"]["X"], y=data["train"]["y"]) preds = regression_model.predict(X=data["test"]["X"]) mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds) # log alpha, mean_squared_error and feature names in run history run.log(name="alpha", value=alpha) run.log(name="mse", value=mse) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Viewing experiment resultsSimilar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. This way, we can see how the alpha parameter impacts the quality of the model ###Code # now let's take a look at the experiment in Azure portal. 
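# Displaying the experiment object renders a summary with a link to the experiment page in the Azure portal.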
experiment ###Output _____no_output_____ ###Markdown Select the best model Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`. Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case, **mse**. To find the best run, we create a dictionary mapping the run id's to the metrics.Finally, we use the `tag` method to mark the best run to make it easier to find later. ###Code runs = {} run_metrics = {} # Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric for r in tqdm(experiment.get_runs()): metrics = r.get_metrics() if 'mse' in metrics.keys(): runs[r.id] = r run_metrics[r.id] = metrics # Find the run with the best (lowest) mean squared error and display the id and metrics best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse']) best_run = runs[best_run_id] print('Best run is:', best_run_id) print('Metrics:', run_metrics[best_run_id]) # Tag the best run for identification later best_run.tag("Best Run") ###Output _____no_output_____ ###Markdown --- DeployNow that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inference. The process of deploying a model involves* registering a model in your workspace* creating a scoring file containing init and run methods* creating an environment dependency file describing packages necessary for your scoring file* deploying the model and packages as a web service Register a modelWe have already identified which run contains the "best model" by our evaluation criteria. Each run has a file structure associated with it that contains various files collected during the run. Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()` ###Code # View the files in the run for f in best_run.get_file_names(): print(f) # Register the model with the workspace model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` lists all models in a workspace, and can be filtered by name, tags, or model properties. ###Code # Find all models called "best_model" and display their version numbers from azureml.core.model import Model models = Model.list(ws, name='best_model') for m in models: print(m.name, m.version) ###Output _____no_output_____ ###Markdown Create a scoring fileSince your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply the model to new data. This script is your 'scoring file'. 
This scoring file is a python program containing, at a minimum, two methods `init()` and `run()`. The `init()` method is called once when your deployment is started so you can load your model and any other required objects. This method uses the `get_model_path` function to locate the registered model inside the docker container. The `run()` method is called interactively when the web service is called with one or more data samples to predict.The scoring file used for this exercise is [here](score.py). Describe your environmentEach modelling process may require a unique set of packages. Therefore we need to create an environment object describing the dependencies. Next we create an inference configuration using this environment object and the scoring script that we created previously. ###Code from azureml.core.conda_dependencies import CondaDependencies from azureml.core.environment import Environment from azureml.core.model import InferenceConfig env = Environment('deploytocloudenv') env.python.conda_dependencies = CondaDependencies.create(conda_packages=['scikit-learn'],pip_packages=['azureml-defaults']) inference_config = InferenceConfig(entry_script="score.py", environment=env) ###Output _____no_output_____ ###Markdown Describe your target computeIn addition to the inference configuration, we also need to describe the type of compute we want to allocate for our webservice. In in this example we are using an [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/) which is a good choice for quick and cost-effective dev/test deployment scenarios. ACI instances require the number of cores you want to run and memory you need. Tags and descriptions are available for you to identify the instances in AML when viewing the Compute tab in the AML Portal.For production workloads, it is better to use [Azure Kubernentes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Try [this notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/production-deploy-to-aks/production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. ###Code from azureml.core.webservice import AciWebservice aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, tags={'sample name': 'AML 101'}, description='This is a great example.') ###Output _____no_output_____ ###Markdown Deploy your webserviceThe final step to deploying your webservice is to call `Model.deploy()`. This function uses the deployment and inference configurations created above to perform the following:* Build a docker image* Deploy to the docker image to an Azure Container Instance* Copy your model files to the Azure Container Instance* Call the `init()` function in your scoring file* Provide an HTTP endpoint for scoring callsThe `Model.deploy` method requires the following parameters* `workspace` - the workspace containing the service* `name` - a unique named used to identify the service in the workspace* `models` - an array of models to be deployed into the container* `inference_config` - a configuration object describing the image environment* `deployment_config` - a configuration object describing the compute type **Note:** The web service creation can take several minutes. 
###Code %%time from azureml.core.model import Model from azureml.core.webservice import Webservice # Create the webservice using all of the precreated configurations and our best model service = Model.deploy(workspace=ws, name='my-aci-svc', models=[model], inference_config=inference_config, deployment_config=aciconfig) # Wait for the service deployment to complete while displaying log output service.wait_for_deployment(show_output=True) ###Output _____no_output_____ ###Markdown Test your webservice Now that your web service is runing you can send JSON data directly to the service using the `run` method. This cell pulls the first test sample from the original dataset into JSON and then sends it to the service. ###Code import json service = ws.webservices['my-aci-svc'] # scrape the first row from the test set. test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) #score on our service service.run(input_data = test_samples) ###Output _____no_output_____ ###Markdown This cell shows how you can send multiple rows to the webservice at once. It then calculates the residuals - that is, the errors - by subtracting out the actual values from the results. These residuals are used later to show a plotted result. ###Code # score the entire test set. test_samples = json.dumps({'data': X_test.tolist()}) result = service.run(input_data = test_samples) residual = result - y_test ###Output _____no_output_____ ###Markdown This cell shows how you can use the `service.scoring_uri` property to access the HTTP endpoint of the service and call it using standard POST operations. ###Code import requests # use the first row from the test set again test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) # create the required header headers = {'Content-Type':'application/json'} # post the request to the service and display the result resp = requests.post(service.scoring_uri, test_samples, headers = headers) print(resp.text) ###Output _____no_output_____ ###Markdown Residual graphOne way to understand the behavior of your model is to see how the data performs against data with known results. This cell uses matplotlib to create a histogram of the residual values, or errors, created from scoring the test samples.A good model should have residual values that cluster around 0 - that is, no error. Observing the resulting histogram can also show you if the model is skewed in any particular direction. ###Code %matplotlib inline import matplotlib.pyplot as plt f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0}) f.suptitle('Residual Values', fontsize = 18) f.set_figheight(6) f.set_figwidth(14) a0.plot(residual, 'bo', alpha=0.4) a0.plot([0,90], [0,0], 'r', lw=2) a0.set_ylabel('residue values', fontsize=14) a0.set_xlabel('test data set', fontsize=14) a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step') a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10) a1.set_yticklabels([]) plt.show() ###Output _____no_output_____ ###Markdown Clean up Delete the ACI instance to stop the compute and any associated billing. ###Code %%time service.delete() ###Output _____no_output_____ ###Markdown Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Train and deploy a model_**Create and deploy a model directly from a notebook**_------ Contents1. [Introduction](Introduction)1. [Setup](Setup)1. [Data](Data)1. [Train](Train) 1. Viewing run results 1. Simple parameter sweep 1. Viewing experiment results 1. 
Select the best model1. [Deploy](Deploy) 1. Register the model 1. Create a scoring file 1. Describe your environment 1. Descrice your target compute 1. Deploy your webservice 1. Test your webservice 1. Clean up1. [Next Steps](Next%20Steps)--- IntroductionAzure Machine Learning provides capabilities to control all aspects of model training and deployment directly from a notebook using the AML Python SDK. In this notebook we will* connect to our AML Workspace* create an experiment that contains multiple runs with tracked metrics* choose the best model created across all runs* deploy that model as a serviceIn the end we will have a model deployed as a web service which we can call from an HTTP endpoint --- SetupMake sure you have completed the [Configuration](../../../configuration.ipnyb) notebook to set up your Azure Machine Learning workspace and ensure other common prerequisites are met. From the configuration, the important sections are the workspace configuration and ACI regristration.We will also need the following libraries install to our conda environment. If these are not installed, use the following command to do so and restart the notebook.```shell(myenv) $ conda install -y matplotlib tqdm scikit-learn```For this notebook we need the Azure ML SDK and access to our workspace. The following cell imports the SDK, checks the version, and accesses our already configured AzureML workspace. ###Code import azureml.core from azureml.core import Experiment, Workspace # Check core SDK version number print("This notebook was created using version 1.0.2 of the Azure ML SDK") print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK") print("") ws = Workspace.from_config() print('Workspace name: ' + ws.name, 'Azure region: ' + ws.location, 'Subscription id: ' + ws.subscription_id, 'Resource group: ' + ws.resource_group, sep='\n') ###Output _____no_output_____ ###Markdown --- DataWe will use the diabetes dataset for this experiement, a well-known small dataset that comes with scikit-learn. This cell loads the dataset and splits it into random training and testing sets. ###Code from sklearn.datasets import load_diabetes from sklearn.linear_model import Ridge from sklearn.metrics import mean_squared_error from sklearn.model_selection import train_test_split from sklearn.externals import joblib X, y = load_diabetes(return_X_y = True) columns = ['age', 'gender', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) data = { "train":{"X": X_train, "y": y_train}, "test":{"X": X_test, "y": y_test} } print ("Data contains", len(data['train']['X']), "training samples and",len(data['test']['X']), "test samples") ###Output _____no_output_____ ###Markdown --- TrainLet's use scikit-learn to train a simple Ridge regression model. We use AML to record interesting information about the model in an Experiment. An Experiment contains a series of trials called Runs. During this trial we use AML in the following way:* We access an experiment from our AML workspace by name, which will be created if it doesn't exist* We use `start_logging` to create a new run in this experiment* We use `run.log()` to record a parameter, alpha, and an accuracy measure - the Mean Squared Error (MSE) to the run. 
We will be able to review and compare these measures in the Azure Portal at a later time.* We store the resulting model in the **outputs** directory, which is automatically captured by AML when the run is complete.* We use `run.complete()` to indicate that the run is over and results can be captured and finalized ###Code # Get an experiment object from Azure Machine Learning experiment = Experiment(workspace=ws, name="train-within-notebook") # Create a run object in the experiment run = experiment.start_logging() # Log the algorithm parameter alpha to the run run.log('alpha', 0.03) # Create, fit, and test the scikit-learn Ridge regression model regression_model = Ridge(alpha=0.03) regression_model.fit(data['train']['X'], data['train']['y']) preds = regression_model.predict(data['test']['X']) # Output the Mean Squared Error to the notebook and to the run print('Mean Squared Error is', mean_squared_error(data['test']['y'], preds)) run.log('mse', mean_squared_error(data['test']['y'], preds)) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') # Complete the run run.complete() ###Output _____no_output_____ ###Markdown Viewing run resultsAzure Machine Learning stores all the details about the run in the Azure cloud. Let's access those details by retrieving a link to the run using the default run output. Clicking on the resulting link will take you to an interactive page presenting all run information. ###Code run ###Output _____no_output_____ ###Markdown Simple parameter sweepNow let's take the same concept from above and modify the **alpha** parameter. For each value of alpha we will create a run that will store metrics and the resulting model. In the end we can use the captured run history to determine which model was the best for us to deploy. Note that by using `with experiment.start_logging() as run` AML will automatically call `run.complete()` at the end of each loop.This example also uses the **tqdm** library to provide a thermometer feedback ###Code import numpy as np from tqdm import tqdm model_name = "model.pkl" # list of numbers from 0 to 1.0 with a 0.05 interval alphas = np.arange(0.0, 1.0, 0.05) # try a bunch of alpha values in a Linear Regression (Ridge) model for alpha in tqdm(alphas): # create a bunch of runs, each train a model with a different alpha value with experiment.start_logging() as run: # Use Ridge algorithm to build a regression model regression_model = Ridge(alpha=alpha) regression_model.fit(X=data["train"]["X"], y=data["train"]["y"]) preds = regression_model.predict(X=data["test"]["X"]) mse = mean_squared_error(y_true=data["test"]["y"], y_pred=preds) # log alpha, mean_squared_error and feature names in run history run.log(name="alpha", value=alpha) run.log(name="mse", value=mse) # Save the model to the outputs directory for capture joblib.dump(value=regression_model, filename='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Viewing experiment resultsSimilar to viewing the run, we can also view the entire experiment. The experiment report view in the Azure portal lets us view all the runs in a table, and also allows us to customize charts. This way, we can see how the alpha parameter impacts the quality of the model ###Code # now let's take a look at the experiment in Azure portal. experiment ###Output _____no_output_____ ###Markdown Select the best model Now that we've created many runs with different parameters, we need to determine which model is the best for deployment. 
For this, we will iterate over the set of runs. From each run we will take the *run id* using the `id` property, and examine the metrics by calling `run.get_metrics()`. Since each run may be different, we do need to check if the run has the metric that we are looking for, in this case, **mse**. To find the best run, we create a dictionary mapping the run id's to the metrics.Finally, we use the `tag` method to mark the best run to make it easier to find later. ###Code runs = {} run_metrics = {} # Create dictionaries containing the runs and the metrics for all runs containing the 'mse' metric for r in tqdm(experiment.get_runs()): metrics = r.get_metrics() if 'mse' in metrics.keys(): runs[r.id] = r run_metrics[r.id] = metrics # Find the run with the best (lowest) mean squared error and display the id and metrics best_run_id = min(run_metrics, key = lambda k: run_metrics[k]['mse']) best_run = runs[best_run_id] print('Best run is:', best_run_id) print('Metrics:', run_metrics[best_run_id]) # Tag the best run for identification later best_run.tag("Best Run") ###Output _____no_output_____ ###Markdown --- DeployNow that we have trained a set of models and identified the run containing the best model, we want to deploy the model for real time inferencing. The process of deploying a model involves* registering a model in your workspace* creating a scoring file containing init and run methods* creating an environment dependency file describing packages necessary for your scoring file* creating a docker image containing a properly described environment, your model, and your scoring file* deploying that docker image as a web service Register a modelWe have already identified which run contains the "best model" by our evaluation criteria. Each run has a file structure associated with it that contains various files collected during the run. Since a run can have many outputs we need to tell AML which file from those outputs represents the model that we want to use for our deployment. We can use the `run.get_file_names()` method to list the files associated with the run, and then use the `run.register_model()` method to place the model in the workspace's model registry.When using `run.register_model()` we supply a `model_name` that is meaningful for our scenario and the `model_path` of the model relative to the run. In this case, the model path is what is returned from `run.get_file_names()` ###Code # View the files in the run for f in best_run.get_file_names(): print(f) # Register the model with the workspace model = best_run.register_model(model_name='best_model', model_path='outputs/model.pkl') ###Output _____no_output_____ ###Markdown Once a model is registered, it is accessible from the list of models on the AML workspace. If you register models with the same name multiple times, AML keeps a version history of those models for you. The `Model.list()` lists all models in a workspace, and can be filtered by name, tags, or model properties. ###Code # Find all models called "best_model" and display their version numbers from azureml.core.model import Model models = Model.list(ws, name='best_model') for m in models: print(m.name, m.version) ###Output _____no_output_____ ###Markdown Create a scoring fileSince your model file can essentially be anything you want it to be, you need to supply a scoring script that can load your model and then apply the model to new data. This script is your 'scoring file'. This scoring file is a python program containing, at a minimum, two methods `init()` and `run()`. 
The `init()` method is called once when your deployment is started so you can load your model and any other required objects. This method uses the `get_model_path` function to locate the registered model inside the docker container. The `run()` method is called interactively when the web service is called with one or more data samples to predict.The scoring file used for this exercise is [here](score.py). Describe your environmentEach modelling process may require a unique set of packages. Therefore we need to create a dependency file providing instructions to AML on how to contstruct a docker image that can support the models and any other objects required for inferencing. In the following cell, we create a environment dependency file, *myenv.yml* that specifies which libraries are needed by the scoring script. You can create this file manually, or use the `CondaDependencies` class to create it for you.Next we use this environment file to describe the docker container that we need to create in order to deploy our model. This container is created using our environment description and includes our scoring script. ###Code from azureml.core.conda_dependencies import CondaDependencies from azureml.core.image import ContainerImage # Create an empty conda environment and add the scikit-learn package env = CondaDependencies() env.add_conda_package("scikit-learn") # Display the environment print(env.serialize_to_string()) # Write the environment to disk with open("myenv.yml","w") as f: f.write(env.serialize_to_string()) # Create a configuration object indicating how our deployment container needs to be created image_config = ContainerImage.image_configuration(execution_script="score.py", runtime="python", conda_file="myenv.yml") ###Output _____no_output_____ ###Markdown Describe your target computeIn addition to the container, we also need to describe the type of compute we want to allocate for our webservice. In in this example we are using an [Azure Container Instance](https://azure.microsoft.com/en-us/services/container-instances/) which is a good choice for quick and cost-effective dev/test deployment scenarios. ACI instances require the number of cores you want to run and memory you need. Tags and descriptions are available for you to identify the instances in AML when viewing the Compute tab in the AML Portal.For production workloads, it is better to use [Azure Kubernentes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) instead. Try [this notebook](11.production-deploy-to-aks.ipynb) to see how that can be done from Azure ML. ###Code from azureml.core.webservice import AciWebservice aciconfig = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1, tags={'sample name': 'AML 101'}, description='This is a great example.') ###Output _____no_output_____ ###Markdown Deploy your webserviceThe final step to deploying your webservice is to call `WebService.deploy_from_model()`. 
This function uses the deployment and image configurations created above to perform the following:* Build a docker image* Deploy to the docker image to an Azure Container Instance* Copy your model files to the Azure Container Instance* Call the `init()` function in your scoring file* Provide an HTTP endpoint for scoring callsThe `deploy_from_model` method requires the following parameters* `workspace` - the workspace containing the service* `name` - a unique named used to identify the service in the workspace* `models` - an array of models to be deployed into the container* `image_config` - a configuration object describing the image environment* `deployment_config` - a configuration object describing the compute type **Note:** The web service creation can take several minutes. ###Code %%time from azureml.core.webservice import Webservice # Create the webservice using all of the precreated configurations and our best model service = Webservice.deploy_from_model(name='my-aci-svc', deployment_config=aciconfig, models=[model], image_config=image_config, workspace=ws) # Wait for the service deployment to complete while displaying log output service.wait_for_deployment(show_output=True) ###Output _____no_output_____ ###Markdown Test your webservice Now that your web service is runing you can send JSON data directly to the service using the `run` method. This cell pulls the first test sample from the original dataset into JSON and then sends it to the service. ###Code import json # scrape the first row from the test set. test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) #score on our service service.run(input_data = test_samples) ###Output _____no_output_____ ###Markdown This cell shows how you can send multiple rows to the webservice at once. It then calculates the residuals - that is, the errors - by subtracting out the actual values from the results. These residuals are used later to show a plotted result. ###Code # score the entire test set. test_samples = json.dumps({'data': X_test.tolist()}) result = service.run(input_data = test_samples) residual = result - y_test ###Output _____no_output_____ ###Markdown This cell shows how you can use the `service.scoring_uri` property to access the HTTP endpoint of the service and call it using standard POST operations. ###Code import requests # use the first row from the test set again test_samples = json.dumps({"data": X_test[0:1, :].tolist()}) # create the required header headers = {'Content-Type':'application/json'} # post the request to the service and display the result resp = requests.post(service.scoring_uri, test_samples, headers = headers) print(resp.text) ###Output _____no_output_____ ###Markdown Residual graphOne way to understand the behavior of your model is to see how the data performs against data with known results. This cell uses matplotlib to create a histogram of the residual values, or errors, created from scoring the test samples.A good model should have residual values that cluster around 0 - that is, no error. Observing the resulting histogram can also show you if the model is skewed in any particular direction. 
###Code %matplotlib inline import matplotlib.pyplot as plt f, (a0, a1) = plt.subplots(1, 2, gridspec_kw={'width_ratios':[3, 1], 'wspace':0, 'hspace': 0}) f.suptitle('Residual Values', fontsize = 18) f.set_figheight(6) f.set_figwidth(14) a0.plot(residual, 'bo', alpha=0.4) a0.plot([0,90], [0,0], 'r', lw=2) a0.set_ylabel('residue values', fontsize=14) a0.set_xlabel('test data set', fontsize=14) a1.hist(residual, orientation='horizontal', color='blue', bins=10, histtype='step') a1.hist(residual, orientation='horizontal', color='blue', alpha=0.2, bins=10) a1.set_yticklabels([]) plt.show() ###Output _____no_output_____ ###Markdown Clean up Delete the ACI instance to stop the compute and any associated billing. ###Code %%time service.delete() ###Output _____no_output_____
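###Markdown As an optional sanity check - a small sketch using the `ws.webservices` collection seen earlier in this document - you can confirm that the deleted service no longer appears among the webservices registered in the workspace. ###Code # After deletion, 'my-aci-svc' should no longer be listed among the workspace's webservices
print(list(ws.webservices.keys())) ###Output _____no_output_____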
WDModel.ipynb
###Markdown Fitting Complex Dynamics Models with a Bayesian ApproachFor a review of the Bayesian approach I personally recommend this short 5-part blog series: http://jakevdp.github.io/blog/2014/03/11/frequentism-and-bayesianism-a-practical-intro/In short, when we fit a linear model using glm, it uses **maximum likelihood** estimation, *aka* **MLE**.I decided to do the same, but using a **Bayesian** approach. Roughly speaking, it probes the parameter space of the model parameters $\theta$ and calculates the conditional probability of the chosen parameters given all data points:$$L(\theta | D) = \frac{L(D | \theta) f(\theta)}{\int L(D | \theta^\prime)f(\theta^\prime)d\theta^\prime} \ \ \ \ \ \ ,$$where $L(D | \theta)$ is the posterior, $L(D | \theta)$ written as $L(D | \theta)$ is the likelihood (the same quantity used in MLE), and $f(\theta)$ is the prior. More precisely: $L(\theta | D)$ is the posterior, $L(D | \theta)$ is the likelihood, and $f(\theta)$ is the prior.As the number of model parameters grows, it becomes computationally infeasible to probe the entire parameter space for $\theta$ directly. Instead, Markov Chain Monte Carlo (MCMC) methods are used. There are different sampling algorithms, such as Metropolis-Hastings and Gibbs sampling, and more sophisticated ones exist.The integral in the denominator is irrelevant in MCMC, since the sampler only needs the posterior up to a normalizing constant.Another interesting fact is that there is absolutely **no overfitting** in the Bayesian approach... Huh? Interesting, right? This is because there is no optimization in the procedure - it is a pure calculation of likelihoods. ###Code # Load libraries
library(rstan)
library(ggplot2)
library(sn)     # Skew-normal
library(psych)  # describe
#library(MillimanPM)
library(dplyr)
library(rstanarm)  # stan_glm
library(bayesplot) # mcmc_areas

source("./WDUtilityFunctions.R")

print(R.Version()$version.string)
printversion(c("rstan", "ggplot2", "sn", "psych", "dplyr", "rstanarm", "bayesplot")) ###Output [1] "R version 3.3.1 (2016-06-21)" [1] "rstan version 2.12.1" [1] "ggplot2 version 2.2.0" [1] "sn version 1.4.0" [1] "psych version 1.6.12" [1] "dplyr version 0.5.0" [1] "rstanarm version 2.13.1" [1] "bayesplot version 1.0.0" ###Markdown Withdrawal Toy ModelHere, I'm going to build a simulated toy model for some integer outcome, say the number of withdrawals in the next 10 quarters. In the end I'm interested in the estimation of the model parameters and * the PDF of the time of the first withdrawal * the PDF of the number of withdrawals. Let us say we have withdrawal data for 10 consecutive quarters for 100 or 1000 people.Each observation is a **binary** event (1 or 0) - whether a person took a withdrawal in the quarter or not.We will also have an instantaneous *jump* in the withdrawal probability right after the first withdrawal. This is to simulate the fact that once people start withdrawing, they are more likely to withdraw in the next quarter than before the first withdrawal happens.Each person has a *base withdrawal probability* that can differ from person to person.For example, maybe there is another variable (income or credit score) that we do not have data for that affects the overall withdrawal probability.The base withdrawal probability is drawn from a normal PDF with a predefined mean and variance. Once drawn, it stays the same for this person. However, we allow the withdrawal probability to **change with time**. One Person Simulation algorithm 1. Draw the base logit probability: $$ base\_link \sim N(\mu, \sigma)$$ 2. Initialize the withdrawal indicator $$WD_{IND} = 0$$ 3. Loop $q$ over 10 quarters * 3.1 Calculate the quarterly probability: $$ p_q = \frac{1}{ 1 + \exp(-base\_link - C_{WD}WD_{IND} - C_q q)}, $$ where $C_q$ and $C_{WD}$ are predefined constants. 
* 3.2 Draw Bernoulli result - whether the person withdraws in a given quarter: $$ WD_q \sim Bernoulli(p_q) $$ * 3.3 If $WD_q$ == 1, then set $WD_{IND} = 1$ So, effectively there are three model parameters: * $\mu$ or Intercept - base logit probability * $\sigma$ - base logit probability standard deviation * $C_q$ - Quarterly coefficient * $C_{WD}$ - Jump after the first observed withdrawal 1. Simulation and a Little bit of Exploration ###Code # Control parameters for simulation TrueSimulatedPars_Symmetric = list( Intercept = -0.1, # Intercept term (Base Probability of Withdrawal) Intercept_scale = 0.5, # Uncertainty in Intercept Intercept_shape = 0.0, # Skeweness of Intercept Npeople = 2, # Number of people Nq = 10, # Number of Quarters for each person for training Nqtot = 10, # Total number of Quarters for each person for training q_coef = 0.05, # Quarter effect CJump = 0.3 # Jump probability ) source("./WDUtilityFunctions.R") # Simulate Hundred People set.seed(54) df_symmetric = simulate_people(TrueSimulatedPars_Symmetric, 100) # Simulate Thousand People set.seed(54) df_symmetric_1000 = simulate_people(TrueSimulatedPars_Symmetric, 1000) #saveRDS(df_symmetric, "out/df_symmetric_v5.rds", compress=F) #saveRDS(df_symmetric_1000, "out/df_symmetric_1000_v5.rds", compress=F) # First couple of rows of the dataframe head(df_symmetric_1000,2) ###Output _____no_output_____ ###Markdown First 20 simulationsAs one can see below are first 20 simulations. The first person has overall higher withdrawal probability than the second person. Withdrawal probability increases with quarter. In this case it is linear in this simulation. But it is straightforward to add things like shock withdrawals, for example.In this case **Person 1** has a lower base withdrawal probability than **Person 2**. And also Person 2 had more Withdrawal Events (8 vs 4).The first withdrawal for **Person 1** happens in the third quarter. For **Person 2** it happened in the second quarter. Right after that we see an instantaneous jump in the simulated withdrawal probabilities.The withdrawal probability is a hidden variable. It is not allowed to be used in training. It is only for analysis and vizualization. ###Code # DataFrame for the first plot df_plot_1 = df_symmetric_1000[1:20, c("id", "wd_prob", "person_id", "WD")] # Plot #png(filename="./Paper/figs/fig1.png") require(gridExtra) plot1 = ggplot(df_plot_1, aes(id, wd_prob, color=as.factor(person_id))) + geom_line() + geom_point() + xlab("Simulation ID") + ylab("Withdrawal Probability")+ labs(color='Person ID') plot2 = ggplot(df_plot_1, aes(id, WD, color=as.factor(person_id))) + geom_line() + geom_point() + xlab("Simulation ID") + ylab("Withdrawal Event")+ labs(color='Person ID') grid.arrange(plot1, plot2, ncol=1) #dev.off() #write.csv(df_plot_1, "./Paper/data/df_plot_1.csv", row.names=F) ###Output _____no_output_____ ###Markdown In general, we should see that on average we should see less *Withdrawal Events* in the early quarters than on the later quarters, because we simulated those probabilities to be higher. 
Also, for later quarters there is a high chance of already experiencing the instantaneous jump.Here is a Univariate plot of the Withdrawal Event rates as a function quarter: ###Code source("./WDUtilityFunctions.R") options(repr.plot.width=5, repr.plot.height=5) par(mfrow=c(1,1)) # Average Withdrawal rates as a function of quarter in 100 people sample Plot.Data.Response(df_symmetric, "q", "WD_numeric") # Average Withdrawal rates as a function of quarter in 1000 people sample Plot.Data.Response(df_symmetric_1000, "q", "WD_numeric") ###Output Warning message: : Removed 2 rows containing missing values (geom_path).Warning message: : Removed 2 rows containing missing values (geom_point).Warning message: : Removed 2 rows containing missing values (geom_errorbar).Warning message: : Removed 2 rows containing missing values (geom_bar).Warning message: : Removed 2 rows containing missing values (geom_path).Warning message: : Removed 2 rows containing missing values (geom_point).Warning message: : Removed 2 rows containing missing values (geom_errorbar).Warning message: : Removed 2 rows containing missing values (geom_bar). ###Markdown So, we do see an overall higher withdrawal rates in both samples. Withdrawal Probability PDFOverall withdrawal probability pdf is around 0.59 with standard deviation of 0.13 and has a negative skew: ###Code df_plot_2 = df_symmetric_1000$wd_prob source("./WDUtilityFunctions.R") #png(filename="./Paper/figs/fig2.png") options(repr.plot.width=5, repr.plot.height=3) plot_distribution(df_plot_2, xlab = "Withdrawal Probability", scale_text=0.9, legend_position="topleft", y.intersp=3, text.width_scale=1.2) #dev.off() #write.csv(df_plot_2, "./Paper/data/df_plot_2.csv", row.names=F) ###Output Min. 1st Qu. Median Mean 3rd Qu. Max. 0.1412 0.4992 0.5930 0.5821 0.6744 0.9136 vars n mean sd median trimmed mad min max range skew kurtosis se 1 1 10000 0.58 0.13 0.59 0.59 0.13 0.14 0.91 0.77 -0.38 -0.15 0 ###Markdown 2. Bayesian Predictive Modeling using Stan ###Code source("./WDUtilityFunctions.R") # Converting dataframes with simulations to the input data format for Stan df_symmetric_stan = get_stan_input(df_symmetric) df_symmetric_1000_stan = get_stan_input(df_symmetric_1000) # 2.1 Model 100 people simulation stan_wd1 = stan(file="./wd_v5.stan", data=df_symmetric_stan, chains=8, iter=2000, cores=8, seed = 1 ) # Exctract result stan_wd1_result <- extract(stan_wd1, permuted = TRUE) # 2.2 Model 1000 people simulation stan_wd1_1000 = stan(file="./wd_v5.stan", data=df_symmetric_1000_stan, chains=8, iter=2000, cores=8, seed = 1 ) # Exctract result stan_wd1_1000_result <- extract(stan_wd1_1000, permuted = TRUE) #saveRDS(stan_wd1_result, "./out/stan_wd1_result_big_v5.rds") #saveRDS(stan_wd1_1000_result, "./out/stan_wd1_1000_result_big_v5.rds") ###Output _____no_output_____ ###Markdown 3. Analysis of the Predictive Model 3.1 Trace PlotsTrace plots are used to monitor the convergence of MCMC. We should see that the model parameters are converged after the warmup. Warmup in this case is 1000 iterations.As we can see, both models behave well. All parameters are converged after a few iterations. 
###Code #Plot traces for the two models options(repr.plot.width=12, repr.plot.height=3.8) traceplot(stan_wd1, pars=c("coef_intercept", "coef_q", "sigma", "coef_Jump"), inc_warmup=T) traceplot(stan_wd1_1000, pars=c("coef_intercept", "coef_q", "sigma", "coef_Jump"), inc_warmup=T) ###Output _____no_output_____ ###Markdown 3.2 Model Coefficients As we can see below, our Predictive Models are able to reconstruct true model parameters fairly well. In general we can see that with more data, we get closer to the true value and the PDF is narrower since the confidence increases. ###Code #stan_wd1_result <- readRDS("./out/stan_wd1_result.rds") #stan_wd1_1000_result <- readRDS("./out/stan_wd1_1000_result.rds") # Here we plot the Intercept parameter (base logit withdrawal probability) # estimation from the two models (100 and 1000 people) # and the true value that was used in simulation source("./WDUtilityFunctions.R") #png(filename="./Paper/figs/fig3.png") options(repr.plot.width=12, repr.plot.height=8) #par(mfrow=c(2,2)) par(mfrow=c(2,2)) plot_2_histograms(stan_wd1_result$coef_intercept, stan_wd1_1000_result$coef_intercept, breaks=50, trueValue = TrueSimulatedPars_Symmetric$Intercept, legend1="100 people", legend2="1000 people", xlab=expression(mu), main="Base Logit Probability PDF", plot_legend=F, xlim=c(-0.5,0.4) ) plot_2_histograms(stan_wd1_result$sigma, stan_wd1_1000_result$sigma, breaks=50, trueValue = TrueSimulatedPars_Symmetric$Intercept_scale, legend1="100 people", legend2="1000 people", xlab=expression(sigma), main="Base Logit Probability Standard Deviation PDF", #plot_legend=F, xlim=c(0.3,1), y.intersp=1.8, text.width_scale=0.85, #legend_position="topleft" scale_text=0.6 ) plot_2_histograms(stan_wd1_result$coef_q, stan_wd1_1000_result$coef_q, breaks=50, trueValue = TrueSimulatedPars_Symmetric$q_coef, legend1="100 people", legend2="1000 people", xlab=expression("C"["q"]), main="Quarter coefficient PDF", plot_legend=F, xlim=c(-0.04,0.12) ) plot_2_histograms(stan_wd1_result$coef_Jump, stan_wd1_1000_result$coef_Jump, breaks=50, trueValue = TrueSimulatedPars_Symmetric$CJump, legend1="100 people", legend2="1000 people", xlab=expression("C"["WD"]), main="Jump coefficient PDF", plot_legend=F, xlim=c(-0.3,0.8) ) #dev.off() ###Output _____no_output_____ ###Markdown 3.3 Base Logit Withdrawal Probabilities for Individual PeopleUsing Bayesian model we can infer Base Logit Withdrawal Probabilities.Here is a plot of the Base Logit Withdrawal Probabilities for the first two people in the simulation (these are the same people for which we made a above plot of withdrawal probabilities as a function of SimulationID) ###Code source("./WDUtilityFunctions.R") #png(filename="./Paper/figs/fig4.png", width = 880, height = 480) options(repr.plot.width=12, repr.plot.height=5) par(mfrow=c(1,2)) brr = seq(-2,2,0.1) plot_2_histograms(stan_wd1_result$wd_prob_link[,1], stan_wd1_1000_result$wd_prob_link[,1], breaks=brr, trueValue = df_symmetric[df_symmetric$q==1,]$wd_prob_link[1], legend1="100 people", legend2="1000 people", xlab="base_logit", main="Base logit probability of withdrawal 1-st Person", plot_legend=T, xlim=c(-2,2) ) plot_2_histograms(stan_wd1_result$wd_prob_link[,2], stan_wd1_1000_result$wd_prob_link[,2], breaks=brr, trueValue = df_symmetric[df_symmetric$q==1,]$wd_prob_link[2], legend1="100 people", legend2="1000 people", xlab="base_logit", main="Base logit probability of withdrawal 2-nd Person", plot_legend=F, xlim=c(-2,2) ) #dev.off() ###Output _____no_output_____
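###Markdown As a reading aid for section 3.3 (this is a derived quantity, not something the Stan model outputs directly): each posterior draw of `base_link` shown above can be mapped back to a baseline withdrawal probability through the same inverse-logit link used in the simulation, ignoring the quarter and jump terms, $$ p_{base} = \frac{1}{1 + \exp(-base\_link)} $$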
api-book/_build/html/_sources/chapter-3-python/object-oriented-programming.ipynb
###Markdown Classes and objects Basic definitionsTo put it simply, in object oriented programming, the code is structured around **objects**. An **object** is an instance of a class.A **class** is a blueprint for creating objects, defined by its name in the namespace, its attributes and its methods.In computing, a namespace is a set of signs (names) that are used to identify and refer to objects of various kinds. The names are saved as pointers somewhere in computer memory. A namespace ensures that all of a given set of objects have unique names so that they can be easily identified. To put it simply, *a namespace is a mapping from names to objects*.A class in Python is defined with the keyword `class` following with the class name. Each class has a constructor, which is a method that is called when an object of the class is created. The constructor is called automatically when an object is created. The constructor is defined with an internal function \_\_init\_\_().For example, lets create a class that creates an object of class **Employee** (I will explain every detail after the class initialization): ###Code # Name of the class class Employee: # The constructor def __init__(self, name, surname, position): """ In order to create an object of the class Employee, we need to pass: name - name of the employee surname - surname of the employee position - position of the employee """ self.name = name self.surname = surname self.position = position # Calculating the name length of the employee in construction time self.name_length = len(name) # Defining a method for the object def get_full_name(self): """ This method returns the full name of the employee """ return f"{self.name} {self.surname}" ###Output _____no_output_____ ###Markdown One might be wandering, what is the "**self**" argument in the __init__() function? The argument "self" is a reference to the object being created. The "self" argument is used to access the attributes and methods of the object. When creating an object, we skip the argument "self" and pass the other arguments to the constructor. ###Code # Two employees Jane = Employee("Jane", "Doe", "Manager") John = Employee("John", "Doe", "Sales") ###Output _____no_output_____ ###Markdown As you can see above, one blueprint (Employee class) was used to create two objects (Jane and John). This is exactly as a recipe works: you can have a recipe (or blueprint) for a cake and with that recipe make hundreds of cakes. One fact that should always be kept in one's mind is that **EVERYTHING in Python is an object**. All the imported packages, all the defined variables even the functions are objects. This means that everything has a **class** with which the object was created, attributes and methods. Python "magic" methods*Magic* or *dunder* methods in Python are special methods that start and end with the double underscores `____`. Magic methods are not meant to be invoked directly by the user, but the invocation happens internally from the class on a certain action. For example, when you add two numbers using the *+* operator, internally, the __add__() method will be called: ###Code a = 5 b = 4 print(f"Addition results: {a + b}") print(f"Addition using the magic method: {a.__add__(b)}") ###Output Addition results: 9 Addition using the magic method: 9 ###Markdown Every class has a lot of magic methods. 
To list them out, use the `dir()` function and search for the `____` pattern: ###Code # All the magic methods of the class Employee magic_methods = [x for x in dir(Employee) if x.startswith("__")] print(magic_methods) # All the methods of the int class in Python magic_methods = [x for x in dir(int) if x.startswith("__")] print(magic_methods) ###Output ['__abs__', '__add__', '__and__', '__bool__', '__ceil__', '__class__', '__delattr__', '__dir__', '__divmod__', '__doc__', '__eq__', '__float__', '__floor__', '__floordiv__', '__format__', '__ge__', '__getattribute__', '__getnewargs__', '__gt__', '__hash__', '__index__', '__init__', '__init_subclass__', '__int__', '__invert__', '__le__', '__lshift__', '__lt__', '__mod__', '__mul__', '__ne__', '__neg__', '__new__', '__or__', '__pos__', '__pow__', '__radd__', '__rand__', '__rdivmod__', '__reduce__', '__reduce_ex__', '__repr__', '__rfloordiv__', '__rlshift__', '__rmod__', '__rmul__', '__ror__', '__round__', '__rpow__', '__rrshift__', '__rshift__', '__rsub__', '__rtruediv__', '__rxor__', '__setattr__', '__sizeof__', '__str__', '__sub__', '__subclasshook__', '__truediv__', '__trunc__', '__xor__'] ###Markdown Constructors A constructor is a special type of function of a class which initializes objects of a class. In Python, the constructor is automatically called when an object is beeing created. The constructor is a magic function denoted as `__init__()`. ###Code # Equivalent statements Bob = Employee("Bob", "Smith", "Sales") Rob = Employee.__init__(Employee, "Rob", "Smith", "Sales") ###Output _____no_output_____ ###Markdown Object attributes An object attribute is a piece of data that is associated with an object. It cannot be called as a function. To access an object attribute, we use the dot operator (`.`). ###Code print(f"Employee position of Jane: {Jane.position}") print(f"Surname of John: {John.surname}") print(f"Name length of John: {John.name_length}") ###Output Employee position of Jane: Manager Surname of John: Doe Name length of John: 4 ###Markdown We can explicitly set any attribute to an object using the (`.`) operator as well. ###Code John.salary = 1000 print(f"John's salary: {John.salary}") try: print(f"Jane's salary: {Jane.salary}") except AttributeError as e: print(f"Jane does not have a salary yet!\nError: {e}") Jane.salary = 2000 print(f"Jane's salary: {Jane.salary}") ###Output John's salary: 1000 Jane does not have a salary yet! Error: 'Employee' object has no attribute 'salary' Jane's salary: 2000 ###Markdown The objects are completely independent from each other. If we define a new attribute to Jane, it will not affect John and vice versa. Object methods An object method is a callable function that is associated with an object. Each object method only uses the attributes of the object it is called on. For example, to get the full names of the employees, we can use the method `get_full_name()`: ###Code print(f"Jane's full name: {Jane.get_full_name()}") print(f"John's full name: {John.get_full_name()}") ###Output Jane's full name: Jane Doe John's full name: John Doe ###Markdown Object stateAn object state is all the data that is stored in the object. The data includes attributes, methods and other data. The state of an object is dynamic because it can change over time. 
For example, let's create a new class **Human**: ###Code class Human: def __init__(self, name, surname, age): """ Template for human object; Attributes: name - name of the human surname - surname of the human age - age of the person """ self.name = name self.surname = surname self.age = age def increase_age(self, amount): """ Method to increase the age of the human by the given amount """ self.age += amount def get_age(self): """ Method to get the age of the human """ return f"My name is {self.name} and my age is: {self.age}" # Creating a 25 year old John Doe John = Human("John", "Doe", 25) # Initial age value print(f"{John.get_age()}") ###Output My name is John and my age is: 25 ###Markdown The initial state of the age of John Doe is 25. We can change that using the method `increase_age()`: ###Code # Increasting the age John.increase_age(1) # What is the age now? print(f"{John.get_age()}") ###Output My name is John and my age is: 26 ###Markdown The internal state of the object has changed and it effected **only** John. Class inheritance Class inheritence in programming is a mechanism that allows one class to inherit the attributes and methods of another class. In Python, the syntax is very simple:```class DerivedClass(BaseClass): ... ...```All the methods in the `DerivedClass` are inherited from the `BaseClass`. For example, lets a new class called `President` that inherits from the `Human` class: ###Code class President(Human): def __init__(self, name, surname, age, country, years_in_service): """ Template for president object; Attributes: name - name of the president surname - surname of the president age - age of the president country - country of the president years_in_service - years of service of the president """ super().__init__(name, surname, age) self.country = country self.years_in_service = years_in_service def introduce(self): """ Method to introduce the president """ return f"My name is {self.name} {self.surname} and I am a president of {self.country} serving for {self.years_in_service} years" ###Output _____no_output_____ ###Markdown The constructor of `President` has a function called `super()` that calls the constructor of the base class and provides it with all the necessary arguments. ###Code # Lets create the past president of USA Donald = President("Donald", "Trump", 62, "USA", 8) # Introduce yourself print(Donald.introduce()) ###Output My name is Donald Trump and I am a president of USA serving for 8 years ###Markdown Encapsulation Encapsulation is the packing of data and functions that work on that data within a single object. By doing so, you can hide the internal state of the object from the outside. This is done by defining the attributes of an object as either: * Public* Protected* PrivateAll the public variables in a class are accessible from outside the class and do not have any underscores infront of them ``. The members of a class that are declared protected are only accessible to a class derived from it and have 1 underscore in front of them `_`.All the private variables are not accessible from outside the class and have a double underscore infront of them `__`. 
###Code # Lets create an example class of an animal class Animal: def __init__(self, name, species, age): """ Template for animal object; Attributes: name - name of the animal species - species of the animal """ self.name = name self._species = species self.__age = age def increase_age(self, amount): """ Method to increase the age of the animal by the given amount """ self.__age += amount def print_info(self): """ Get the animal information """ return f"My name is {self.name}, I am a {self._species} and my age is: {self.__age}" # Lets create penguin penguin = Animal("Happy Feet", "Penguin", 1) # Trying to access the private variable will result in an error try: print(penguin.__age) except AttributeError as e: print(f"Error: {e}") ###Output Error: 'Animal' object has no attribute '__age' ###Markdown The above error is a bit missleading, because the private variable is not accessible from outside the class but it DOES exist. Only methods of the same class can access it and modify it. ###Code # Initial information print(penguin.print_info()) # Adding one year to the age penguin.increase_age(1) # New information print(penguin.print_info()) ###Output My name is Happy Feet, I am a Penguin and my age is: 1 My name is Happy Feet, I am a Penguin and my age is: 2 ###Markdown Secondly, in Python, there is no existence of **private** instance variables that cannot be accessed except inside an object. We can freely access the private variables of an object (`_species`).However, a convention is being followed by most Python coders that a name prefixed with an underscore should be treated as a non-public part of the API or any Python code, whether it is a function, a method, or a data member.What happens to the inherited public, private and protected members of the base class? Lets extend the above class: ###Code # Lets create a domesticated animal class class DomesticatedAnimal(Animal): def __init__(self, name, species, age, owner): """ Template for domesticated animal object; Attributes: name - name of the animal species - species of the animal age - age of the animal owner - owner of the animal """ super().__init__(name, species, age) self.owner = owner def increase_age(self, amount): return super().increase_age(amount) def print_info(self): """ Prints all the information about the animal """ return f"My name is {self.name}, I am a {self._species} and my age is: {self.__age} and I am owned by {self.owner}" # Lets create an instancte of the class domesticated_penguin = DomesticatedAnimal("Happy Feet", "Penguin", 1, "Old McDonald") # Lets try increasing the age domesticated_penguin.increase_age(1) # Lets try to access the private variable try: print(domesticated_penguin.print_info()) except AttributeError as e: print(f"Error: {e}") ###Output Error: 'DomesticatedAnimal' object has no attribute '_DomesticatedAnimal__age' Error: 'super' object has no attribute '_DomesticatedAnimal__age'
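###Markdown The first error above is a consequence of Python's *name mangling*: inside `class Animal`, the attribute `self.__age` is stored on the instance as `_Animal__age`, whereas `print_info` defined in `DomesticatedAnimal` compiles `self.__age` into `_DomesticatedAnimal__age`, which was never created. A minimal, self-contained sketch of the same mechanism (the `Base`/`Child` classes are illustrative stand-ins, not part of the example above): ###Code
class Base:
    def __init__(self):
        self.__secret = 42          # stored on the instance as _Base__secret

class Child(Base):
    def reveal(self):
        return self.__secret        # compiled as self._Child__secret, which does not exist

obj = Child()
print(obj._Base__secret)            # 42 -> the "private" name is still reachable once mangled
try:
    obj.reveal()
except AttributeError as e:
    print(f"Error: {e}")            # 'Child' object has no attribute '_Child__secret'
###Output _____no_output_____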
KK_A1_MySQL_Local_Shell_Pandas.ipynb
###Markdown ![alt text](https://4.bp.blogspot.com/-gbL5nZDkpFQ/XScFYwoTEII/AAAAAAAAAGY/CcVb_HDLwvs2Brv5T4vSsUcz7O4r2Q79ACK4BGAYYCw/s1600/kk3-header00-beta.png)[Prithwis Mukerjee](http://www.yantrajaal.com) PurposeThis Colab Notebook demonstrates how to 1. Install MySQL inside the VM2. Access MySQL from the mysql client3. Access MySQL from Python with Pandas Last Updated : 20 Aug 2021 Install & Test Connectivity ###Code !apt-get update > null !apt-get -y install mysql-server > null !/etc/init.d/mysql restart !mysql --version !mysql -e 'create database praxisDB' !mysql -e 'show databases' ###Output +--------------------+ | Database | +--------------------+ | information_schema | | mysql | | performance_schema | | praxisDB | | sys | +--------------------+ ###Markdown Create Table & Load Data Test Data dept.csv https://drive.google.com/open?id=1rJu1oUgUsShEG9Hh8IG0-55tAisfYBCcemp.csv https://drive.google.com/open?id=136ZpigRZKG3T-9wEBjD7r6P94y9gCWeU ###Code # The CSV files are stored in the author's Google Drive !gdown https://drive.google.com/uc?id=1rJu1oUgUsShEG9Hh8IG0-55tAisfYBCc !gdown https://drive.google.com/uc?id=136ZpigRZKG3T-9wEBjD7r6P94y9gCWeU !ls !mysql praxisDB -e 'drop table empl' !mysql praxisDB -e 'create table empl (empid char(6),lastname varchar(20), firstname varchar(20), jobdesc varchar(10), joindate date, salary int, comm decimal(3,2), deptid char(2));' !mysql praxisDB -e 'drop table dept' !mysql praxisDB -e 'create table dept (deptid char(2),deptname varchar(20), managerid char(6), location varchar(10));' !mysql praxisDB -e 'desc empl' !mysql praxisDB -e 'desc dept' #quick SQL reference https://gist.github.com/hofmannsven/9164408 !mysql praxisDB -e "LOAD DATA LOCAL INFILE 'emp.csv' INTO TABLE empl FIELDS TERMINATED BY ',' IGNORE 1 LINES;" #!cat dept.csv !mysql praxisDB -e "LOAD DATA LOCAL INFILE 'dept.csv' INTO TABLE dept FIELDS TERMINATED BY ',' LINES TERMINATED BY '\r\n' IGNORE 1 LINES;" ###Output _____no_output_____ ###Markdown Retrieve Data ###Code !mysql praxisDB -e 'select * from empl' !mysql praxisDB -e 'select * from dept' !mysql praxisDB -e 'select lastname, firstname, location from empl, dept where empl.deptid = dept.deptid' ###Output +-------------+-----------+----------+ | lastname | firstname | location | +-------------+-----------+----------+ | Bacchan | Amitabh | Calcutta | | Mukherjee | Rani | Bombay | | Dikshit | Madhuri | Calcutta | | Khan | Shahrukh | Calcutta | | Sehwag | Virender | Calcutta | | Dhoni | Mahender | Bombay | | Dravid | Rahul | Calcutta | | Dalmia | Jagmohan | Calcutta | | Ganguly | Sourav | Bombay | | Ganesan | Rekha | Calcutta | | Karthikeyan | Narayan | Calcutta | | Mirza | Sania | Calcutta | +-------------+-----------+----------+ ###Markdown Python with Pandas Panda Modules ###Code !apt install libmysqlclient-dev !pip install mysqlclient import pandas as pd import MySQLdb DBConn = MySQLdb.connect(db='praxisDB') #df_mysql = pd.read_sql('select * from emp2;', con=con_mysql) df_mysql = pd.read_sql('show tables', con=DBConn) print ('loaded dataframe from MySQL. 
records:', len(df_mysql)) DBConn.close() df_mysql ###Output _____no_output_____ ###Markdown Pandas Functions ###Code #To run any non-SELECT SQL command def runCMD (DDL): DBConn= MySQLdb.connect(db='praxisDB') myCursor = DBConn.cursor() retcode = myCursor.execute(DDL) print (retcode) DBConn.commit() DBConn.close() #To run any SELECT SQL command def runSELECT (CMD): DBConn= MySQLdb.connect(db='praxisDB') df_mysql = pd.read_sql(CMD, con=DBConn) DBConn.close() return df_mysql ###Output _____no_output_____ ###Markdown Operations with Python ###Code runCMD("DROP TABLE IF EXISTS Emp;") runCMD("CREATE TABLE IF NOT EXISTS Emp ( \ EmpID char(6) NOT NULL, \ LastName varchar(50) , \ FirstName varchar(50) , \ JobDesc varchar(50) , \ JoinDate date NOT NULL, \ Salary int(11) , \ Comm float , \ DeptID char(2) \ ) ;") runCMD("INSERT INTO Emp (EmpID, LastName, FirstName, JobDesc, JoinDate, Salary, Comm, DeptID) \ VALUES \ ('742866', 'Bacchan', 'Amitabh', 'Executive', '2003-03-10', 50000, 0.1, '10'), \ ('349870', 'Mukherjee', 'Rani', 'Manager', '2005-05-04', 25000, 0.06, '40'), \ ('865477', 'Dikshit', 'Madhuri', 'Clerk', '2002-04-04', 10000, 0.02, '20'), \ ('239456', 'Khan', 'Shahrukh', 'Manager', '2004-01-03', 30000, 0.07, '20'), \ ('897889', 'Sehwag', 'Virender', 'Cus_Rep', '2005-01-02', 15000, 0.05, '20'), \ ('123980', 'Dhoni', 'Mahender', 'Clerk', '2004-10-09', 9000, 0.02, '40'), \ ('822134', 'Dravid', 'Rahul', 'Sr Manager', '2000-06-04', 40000, 0.08, '30'), \ ('997445', 'Dalmia', 'Jagmohan', 'Clerk', '2001-07-01', 12000, 0.02, '30'), \ ('989007', 'Ganguly', 'Sourav', 'Cus_Rep', '2002-01-01', 20000, 0.03, '40'), \ ('299034', 'Ganesan', 'Rekha', 'Director', '2002-10-10', 60000, 0.11, '10'), \ ('546223', 'Karthikeyan', 'Narayan', 'Secretary', '2005-12-04', 40000, 0.09, '10'), \ ('223112', 'Mirza', 'Sania', 'Cus_Rep', '2001-11-19', 25000, 0.04, '30');" ) runCMD("DROP TABLE IF EXISTS Dept;") runCMD("CREATE TABLE Dept ( \ DeptID char(2) NOT NULL, \ DeptName varchar(50) , \ ManagerID char(6) , \ Location varchar(50) \ );") runCMD("INSERT INTO Dept (DeptID, DeptName, ManagerID, Location) VALUES \ ('10', 'Corporate', '299034', 'Calcutta'), \ ('20', 'Sales', '239456', 'Calcutta'), \ ('30', 'Accounts', '822134', 'Calcutta'), \ ('40', 'Production', '349870', 'Bombay');") runSELECT('Select * from Emp;') runSELECT('Select * from Dept;') DBConn = MySQLdb.connect(db='praxisDB') #df_mysql = pd.read_sql('select * from emp2;', con=con_mysql) pd.read_sql('show tables', con=DBConn) #print ('loaded dataframe from MySQL. records:', len(df_mysql)) #DBConn.close() ###Output _____no_output_____
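###Markdown The employee/department join that was run through the `mysql` shell earlier can also be pulled straight into a pandas DataFrame with the `runSELECT` helper; a short sketch, assuming the `Emp` and `Dept` tables created above: ###Code
# Join the Python-created Emp and Dept tables and return the result as a DataFrame
df_join = runSELECT(
    "SELECT e.LastName, e.FirstName, d.Location "
    "FROM Emp e JOIN Dept d ON e.DeptID = d.DeptID;"
)
print(df_join.head())
###Output _____no_output_____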
ml-h/h4-smoada.ipynb
###Markdown Topics: Classification using SVM and Adaboost Assigned: Wednesday May 9 Due: Sunday May 20---------------------------------------- Report With reference to the code and plots in the detailed report & code below---------------------------------------- 1. SVM Implementation available from In[11] onwards![SVM Boundaries](SVM.png) 2. ADABOOSTImplementation available from In[29] onwards![Weak Learners](weak_1075.png) DISCUSSION : Observations Comparison among logistic regression,SVM, neural network, Adaboost for the current dataset. Which one gives the best results :Based on the probability of error among all the technique mentioned above. The probability of error are as follows for all:1. Logistic Regression : 0.36 2. Neural Network : 0.15 3. SVM : 0.1375 4. Adaboost : 0.1255 OBSERVATIONS According to the probability distribution above. Though Adaboost have the minimum probability of error and the logistic regression has the maximum probabilty of error. After running the Adaboost algorithm for about 12000 times it seems there is lot of overfitting. hence if learned parameters are going to be tested on the test data, here are high chances that the probability of error is going to be too high, hence adaboost is not so good for the current data set we are using. Neural NetworkNeural network has the best performance and minimum probability of error. The weights vectors converges to a constant value faster in case of Neural Network. While on the same data set in case of SVM and Adaboost it takes total of around 25-30 minutes and individual 15 minutes for SVM and 10 minutes for ADdaboost, hence we can conclude Neural Network are much faster and have lower probability of error as compared to logistic regressiion , SVM , ADABoost and Kmeans. Code Section ###Code # -*- coding: utf-8 -*- import tensorflow as tf import numpy as np from math import * import matplotlib.pyplot as plt from matplotlib import cm from mpl_toolkits.mplot3d import Axes3D import random from scipy.stats import norm from IPython.display import Image, display, Math, Latex # Params total_samples = 400 #HyperParameters sigma = 1 # L #Class 0 num_samples_0 = total_samples/2 #mean_0 = np.array([0,0]).T #mean of class 0 mean_0 = (0,0) #mean of class 0 #Eigen Values pair lambda_01 = 2 lambda_02 = 1 theta_0 = 0 # For More info check the http://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/ #Rotation Matrix q_0 = np.array([[np.cos(theta_0), -np.sin(theta_0)],[np.sin(theta_0),np.cos(theta_0)]]) q_0_inv = q_0.T # Scaling factor which is represented in Eigen Values Pair s_0 = np.array([[lambda_01,0],[0,lambda_02]]) #Covariance is computed as the following. cov_0 = q_0*s_0*q_0_inv print(cov_0.shape) #Multivariate Distribution. samples_0 is the num_samples_0*2D array where each row represent X[1] and x[2] component samples_0 = np.random.multivariate_normal(mean_0,cov_0,int(num_samples_0)) # Data points scaled to zero mean and unit variance. 
#scaled_samples_0 = preprocessing.scale(samples_0) labels_0 = np.zeros((int(num_samples_0),1)) # label for class 0 as -1 labels_0[:,0] = -1 # Formation of the whole data set with it's corresponding labels sample_0_data_set = np.concatenate((samples_0, labels_0),axis = 1) #scaled_sample_0_data_set = np.concatenate((scaled_samples_0, labels_0),axis = 1) #Class 1 Gaussian Mixture with two components num_samples_1 = total_samples/2 pi_A = 1/3.0 pi_B = 2/3.0 ###### Start of Component A ## Component A num_samples_1A = np.random.binomial(num_samples_1,pi_A) num_samples_1B = num_samples_1 - num_samples_1A mean_1A = np.array([-2,1]).T #mean of class 1 component A #Eigen Values pair lambda_1A1 = 2 lambda_1A2 = 1/4.0 theta_1A = -3*np.pi/4.0 # For More info check the http://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/ #Rotation Matrix q_1A = np.array([[np.cos(theta_1A), -np.sin(theta_1A)],[np.sin(theta_1A),np.cos(theta_1A)]]) q_1A_inv = q_1A.T # Scaling factor which is represented in Eigen Values Pair s_1A = np.array([[lambda_1A1,0],[0,lambda_1A2]]) #Covariance is computed as the following. cov_1A = q_1A.dot(s_1A).dot(q_1A_inv) #Multivariate Distribution. samples_0 is the num_samples_0*2D array where each row represent X[1] and x[2] component samples_1A = np.random.multivariate_normal(mean_1A,cov_1A,num_samples_1A) # Data points scaled to zero mean and unit variance. #scaled_samples_1A = preprocessing.scale(samples_1A) ###### End of Component A ###### Start of Component B ## Component B mean_1B = np.array([3,2]).T #mean of class 1 component A #Eigen Values pair lambda_1B1 = 3 lambda_1B2 = 1 theta_1B = np.pi/4.0 # For More info check the http://www.visiondummy.com/2014/04/geometric-interpretation-covariance-matrix/ #Rotation Matrix q_1B = np.array([[np.cos(theta_1B), -np.sin(theta_1B)],[np.sin(theta_1B),np.cos(theta_1B)]]) q_1B_inv = q_1B.T # Scaling factor which is represented in Eigen Values Pair s_1B = np.array([[lambda_1B1,0],[0,lambda_1B2]]) #Covariance is computed as the following. cov_1B = q_1B.dot(s_1B).dot(q_1B_inv) #Multivariate Distribution. samples_0 is the num_samples_0*2D array where each row represent X[1] and x[2] component samples_1B = np.random.multivariate_normal(mean_1B,cov_1B,int(num_samples_1B)) # Data points scaled to zero mean and unit variance. #scaled_samples_1B = preprocessing.scale(samples_1B) ###### End of Component B samples_1 = np.concatenate((samples_1A,samples_1B)) #samples_1 = pi_A*samples_1A + pi_B*samples_1B #scaled_samples_1 = pi_A*scaled_samples_1A + pi_B*scaled_samples_1B labels_1 = np.ones((int(num_samples_1),1)) # label for class 1 as +1 #labels_1[:,0] = 1 # Formation of the whole data set with it's corresponding labels sample_1_data_set = np.concatenate((samples_1, labels_1),axis = 1) #scaled_sample_1_data_set = np.concatenate((scaled_samples_1, labels_0),axis = 1) # plot samples from Class 0 (X_0) and Class 1 (X_1) plt.figure(0) #maxX = 7.5 # region to plot #maxY = 7.5 plt.title("Distribution of the Data") plt.scatter(sample_0_data_set[:,0], sample_0_data_set[:,1], s=15, c="blue") plt.scatter(sample_1_data_set[:,0], sample_1_data_set[:,1], s=15, c="red") #plt.axis([-maxX, maxX, -maxY, maxY]) plt.show() data_set = np.concatenate((sample_0_data_set,sample_1_data_set)) np.random.shuffle(data_set) data_set ###Output _____no_output_____ ###Markdown 1. 
SVM: Gaussian kernel k(x, x') = exp (− (||x − x'||^2) / 2 * (L^2) ApproachFor the sake of understanding the concepts behind support vector classification, we will instead implement a version of the Sequential Minimal Optimization (SMO) algorithm as described by John Platt in 1998 [PDF] to solve our optimization problem.SMO works by breaking down the dual form of the SVM optimzation problem into many smaller optimzation problems that are more easily solvable. In a nutshell, the algorithm works like this:- Two multiplier values ( αi and αj ) are selected out and their values are optimized while holding all other α values constant.- Once these two are optimized, another two are chosen and optimized over.- Choosing and optimizing repeats until the convergence, which is determined based on the problem constraints. Heuristics are used to select the two α values to optimize over, helping to speed up convergence. The heuristics are based on error cache that is stored while training the model. What we're looking forWhat we want out of the algorithm is a vector of α values that are mostly zeros, except where the corresponding training example is closest to the decision boundary. These examples are our support vectors and should lie near the decision boundary. We should end up with a few of them once our algorithm has converged. What this implies is that the resultant decision boundary will only depend on the training examples closest to it. If we were to add more examples to our training set that were far from the decision boundary, the support vectors would not change. However, labeled examples closer to the decision boundary can exert greater influence on the solution, subject to the degree of regularization. In other words, non-regularized (hard-margin) SVMs can be sensitive to outliers, while regularized (soft-margin) models are not. 1.1) Plot the decision boundaries, and display the support vectors. ###Code def gaussian_kernel(x, y, sigma=0.9): if np.ndim(x) == 1 and np.ndim(y) == 1: result = np.exp(- np.linalg.norm(x - y) / (2 * sigma ** 2)) elif (np.ndim(x) > 1 and np.ndim(y) == 1) or (np.ndim(x) == 1 and np.ndim(y) > 1): result = np.exp(- np.linalg.norm(x - y, axis=1) / (2 * sigma ** 2)) elif np.ndim(x) > 1 and np.ndim(y) > 1: result = np.exp(- np.linalg.norm(x[:, np.newaxis] - y[np.newaxis, :], axis=2) / (2 * sigma ** 2)) return result #x_len, y_len = 5, 10 #gaussian_kernel(np.random.rand(x_len, 1), np.random.rand(y_len, 1)).shape == (5,10) # Objective function to optimize# Objec def objective_function(alphas,kernel, data): """Returns the SVM objective function based in the input model defined by: `alphas`: vector of Lagrange multipliers `target`: vector of class labels (-1 or 1) for training data `kernel`: kernel function `X_train`: training data for model.""" return np.sum(alphas) - 0.5* np.sum(data[:,2] * data[:,2] * gaussian_kernel(data[:,0:2], data[:,0:2]) * aplhas * aplhas) # Decision function def decision_function(alphas,data,b,X_Test): """Applies the SVM decision function to the input feature vectors in `x_test`.""" # The following is the loop function. 
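    # The loop below evaluates the kernel SVM decision function
    #     f(x) = sum_i alpha_i * y_i * K(x_i, x) + b,
    # where the label y_i is stored in data[i, 2] and K is the gaussian_kernel defined above.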
sum = 0 for i in range(len(data)): sum += alphas[i] * data[i,2] * gaussian_kernel(data[i,0:2],X_Test) f_x = sum + b return f_x def compute_range(alphas,i,j,data,C): if (data[i,2] != data[j,2]): L = max(0,alphas[j]-alphas[i]) H = min(C,C+alphas[j]-alphas[i]) elif (data[i,2] == data[j,2]): L = max(0,alphas[i]+alphas[j]-C) H = min(C,alphas[i]+alphas[j]) return L,H # Intialization alphas = np.zeros(len(data_set)) b = 0.0 passes = 0 max_passes = 300 C = 0.2 tol = 0.001 m = len(data_set) data = data_set while(passes < max_passes): num_changed_alphas = 0 for i in range(m): E_i = decision_function(alphas,data,b,data[i,0:2]) - data[i,2] if(((data[i,2] * E_i) < (-1) * tol and alphas[i] < C) or ((data[i,2]*E_i)>tol and alphas[i]>0 )): j = np.random.randint(0,m) while(j==i): j = np.random.randint(0,m) E_j = decision_function(alphas,data,b,data[j,0:2]) - data[j,2] alpha_i_old = alphas[i] alpha_j_old = alphas[j] L,H = compute_range(alphas,i,j,data,C) if L == H: continue k_ij = gaussian_kernel(data[i,0:2],data[j,0:2]) k_ii = gaussian_kernel(data[i,0:2],data[i,0:2]) k_jj = gaussian_kernel(data[j,0:2],data[j,0:2]) eta = 2 * k_ij - k_ii - k_jj if eta>=0: continue alphas[j] = alphas[j] - (data[j,2] * (E_i - E_j)/eta) if alphas[j] > H: alphas[j] = H elif alphas[j] < L: alphas[j] = L if abs(alphas[j] - alpha_j_old) < 0.00001: continue alphas[i] = alphas[i] + data[i,2] * data[j,2] * (alpha_j_old - alphas[j]) b_1 = b - E_i - data[i,2]*(alphas[i]-alpha_i_old)*k_ii - data[j,2]*(alphas[j]-alpha_j_old)*k_ij b_2 = b - E_j - data[i,2]*(alphas[i]-alpha_i_old)*k_ij - data[j,2]*(alphas[j]-alpha_j_old)*k_ij if alphas[i]<C and alphas[i] > 0: b = b_1 elif alphas[j]<C and alphas[j] > 0: b = b_2 else: b = (b_1+b_2)/2.0 num_changed_alphas += 1 if num_changed_alphas == 0: #print ("Pass {}".format(passes)) passes += 1 else: passes = 0 count =0 for i in range(m): if alphas[i] !=0 : count+=1 print("Completed\n Alphas Count: ", count) ###Output Completed Alphas Count: 230 ###Markdown """Plots the model's decision boundary on the input axes object. Range of decision boundary grid is determined by the training data. 
Returns decision boundary grid and axes object (`grid`, `ax`).""" ###Code def plot_decision_function(alphas,data,b,X_Test): Y = gaussian_kernel(data[:,0:2], X_Test) result = np.dot(alphas * data[:,2] , Y ) + b return result def plot_decision_boundary(b,alphas,ax, resolution=100, colors=('b', 'k', 'r')): xrange = np.linspace(data_set[:,0].min(), data_set[:,0].max(), resolution) yrange = np.linspace(data_set[:,1].min(), data_set[:,1].max(), resolution) grid = [[plot_decision_function(alphas, data_set,b, np.array([xr, yr])) for yr in yrange] for xr in xrange] print (type(grid)) grid = np.array(grid).reshape(len(xrange), len(yrange)) # Plot decision contours using grid and # make a scatter plot of training data ax.contour(xrange, yrange, grid, (-1, 0, 1), linewidths=(1, 1, 1), linestyles=('--', '-', '--'), colors=colors) ax.scatter(data_set[:,0], data_set[:,1], c=data_set[:,2], cmap=plt.cm.viridis, lw=0, alpha=0.5) # Plot support vectors (non-zero alphas) # as circled points (linewidth > 0) mask = alphas != 0.0 ax.scatter(data_set[:,0][mask], data_set[:,1][mask], c=data_set[:,2][mask], cmap=plt.cm.viridis) return grid, ax fig, ax = plt.subplots() grid, ax = plot_decision_boundary(b, alphas,ax) plt.title("SVM - Plot") plt.savefig("SVM.png") plt.show() label_hat = [] correct = 0 for i in range(len(data_set)): result = np.sign(plot_decision_function(alphas, data_set,b,data_set[i,0:2])) if result == data_set[i,2]: correct += 1 error = len(data_set)- correct print ('Misclassification Error is {}'.format(error/float(len(data_set)))) ###Output Misclassification Error is 0.1375 ###Markdown 1.2 Obeservation between misclassification done in case of using kernelized logistic regression and SVM. Prob of Error- Kernelized Logistic Regression : Perror_c0, Perror_c1 : 0.16 0.16- SVM Misclassification Error : 0.1375 Sample to compare with the SKlearn package - The SKlearn RBF kernel uses K(x, x') = exp(-gamma * ||x-x'||^2) - Here gamma = 1/(2 * (L^2)) to make it into a gaussian kernel and Gaussian kernel parameter L = 0.1 ###Code # Using SKLearn package for SVM from sklearn import svm from sklearn.metrics.pairwise import euclidean_distances from sklearn.metrics.pairwise import check_pairwise_arrays X_train = np.concatenate((samples_0,samples_1)) y_train = np.concatenate((labels_0,labels_1)) # Gaussian kernel parameter L L = 0.4 gamma = 1/(2 * (L**2)) clf = svm.SVC(kernel='rbf', gamma=gamma) clf.fit(X_train, y_train.ravel()) # plot the line, the points, and the nearest vectors to the plane fignum = 1 plt.figure(fignum, figsize=(18, 14)) plt.clf() # Plot the support vectors plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=80, facecolors='none', zorder=10, edgecolors='g', label = 'Support Vectors') # Plot the training points plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train[:, 0], zorder=10, cmap=plt.cm.Paired, edgecolors='k') plt.axis('tight') x_min = -5 x_max = 5 y_min = -5 y_max = 5 XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j] Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()]) # Put the result into a color plot Z = Z.reshape(XX.shape) plt.figure(fignum, figsize=(4, 3)) plt.pcolormesh(XX, YY, Z < 0, alpha = 0.9, cmap=plt.cm.Paired) plt.contour(XX, YY, Z, alpha = .5, linestyles=['--', '-', '--'], levels=[-.5, 0, .5], cmap=plt.cm.jet, antialiased=False) plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) plt.xticks(()) plt.yticks(()) plt.title('Decision boundaries, and the support vectors') plt.legend() plt.show() print('X Train Size\t : ',X_train.shape) print('Support Vectors\t 
: ',clf.support_vectors_.shape) print('\nFraction of training data points that are support vectors') print('Fraction \t : ', (clf.support_vectors_.shape[0]/X_train.shape[0]) * 100, '%' ) ###Output X Train Size : (400, 2) Support Vectors : (236, 2) Fraction of training data points that are support vectors Fraction : 59.0 % ###Markdown 2. Adaboost: Write your own code for Adaboost with decision stumps as weak learners. ###Code def stumpClassify(dataMatrix,dimen,threshVal,threshIneq):#just classify the data retArray = np.ones((np.shape(dataMatrix)[0],1)) if threshIneq == 'lt': retArray[dataMatrix[:,dimen] <= threshVal] = -1.0 else: retArray[dataMatrix[:,dimen] > threshVal] = -1.0 return retArray def buildStump(dataArr,classLabels,D): dataMatrix = np.mat(dataArr) labelMat = np.mat(classLabels).T m,n = np.shape(dataMatrix) numSteps = 500.0 bestStump = {} bestClasEst = np.mat(np.zeros((m,1))) minError = np.inf #init error sum, to +infinity for i in range(n):#loop over all dimensions rangeMin = dataMatrix[:,i].min() rangeMax = dataMatrix[:,i].max() stepSize = (rangeMax-rangeMin)/numSteps for j in range(-1,int(numSteps)+1): #loop over all range in current dimension for inequal in ['lt', 'gt']: #go over less than and greater than threshVal = (rangeMin + float(j) * stepSize) predictedVals = stumpClassify(dataMatrix,i,threshVal,inequal)#call stump classify with i, j, lessThan errArr = np.mat(np.ones((m,1))) errArr[predictedVals == labelMat] = 0 #Line where AdaBoost interacts with the classifier. weightedError = D.T*errArr #calc total error multiplied by D #print ("split: dim %d, thresh %.2f, thresh ineqal: %s, the weighted error is %.3f" % (i, threshVal, inequal, weightedError)) if weightedError < minError: minError = weightedError bestClasEst = predictedVals.copy() bestStump['dim'] = i bestStump['thresh'] = threshVal bestStump['ineq'] = inequal return bestStump,minError,bestClasEst def adaBoostTrainDS(dataArr,classLabels,numIt=400): weakClassArr = [] M = 0 #mis the number of datapoints in a dataset m = np.shape(dataArr)[0] #D holds all the weights of each peice of data D = np.mat(np.ones((m,1))/m) #init D to all equal errorRate = 100000 #aggregrate estimate of the class for every data point aggClassEst = np.mat(np.zeros((m,1))) while errorRate > 0.001: #for i in range(numIt): bestStump,error,classEst = buildStump(dataArr,classLabels,D)#build Stump #print("D:",D.T) alpha = float(0.5*np.log((1.0-error)/max(error,1e-2)))#ca2c alpha, throw in max(error,eps) to account for error=0 bestStump['alpha'] = alpha weakClassArr.append(bestStump) #store Stump Params in Array #print ("classEst: ",classEst.T) expon = np.multiply(-1*alpha*np.mat(classLabels).T,classEst) #exponent for D calc, getting messy D = np.multiply(D,np.exp(expon)) #Calc New D for next iteration D = D/D.sum() #calc training error of all classifiers, if this is 0 quit for loop early (use break) aggClassEst += alpha*classEst #print ("aggClassEst: ",aggClassEst.T) aggErrors = np.multiply(np.sign(aggClassEst) != np.mat(classLabels).T,np.ones((m,1))) errorRate = aggErrors.sum()/m if M%50 == 0: print ("total error: ",errorRate) M +=1 if errorRate == 0.0: break if M > numIt: break return weakClassArr ,aggClassEst classifierArray,aggClassEst = adaBoostTrainDS(np.mat(data_set[:,0:2]),np.mat(data_set[:,2]),1000) def adaClassify(datToClass,classifierArr): dataMatrix = np.mat(datToClass)#do stuff similar to last aggClassEst in adaBoostTrainDS m = np.shape(dataMatrix)[0] aggClassEst = np.mat(np.zeros((m,1))) for i in range(len(classifierArr)): classEst 
=stumpClassify(dataMatrix,classifierArr[i]['dim'],classifierArr[i]['thresh'],classifierArr[i]['ineq'])#call stump classify aggClassEst += classifierArr[i]['alpha']*classEst #print (aggClassEst) return np.sign(aggClassEst) ###Output _____no_output_____ ###Markdown 1.1 Plot of the individual decision boundaries for the first five weak learners found ###Code plt.title("Plot - Weak Learners") plt.scatter(sample_0_data_set[:,0], sample_0_data_set[:,1], s=15, c="blue") plt.scatter(sample_1_data_set[:,0], sample_1_data_set[:,1], s=15, c="red") for i in range(len(classifierArray)): if classifierArray[i]['dim'] == 0 : #print classifierArray[i]['thresh'] plt.axvline(x = classifierArray[i]['thresh']) if classifierArray[i]['dim'] == 1: #print classifierArray[i]['thresh'] plt.axhline(y = classifierArray[i]['thresh']) plt.title("Individual Weak Classifiers") plt.savefig("weak_1075.png") #plt.axis([-maxX, maxX, -maxY, maxY]) plt.show() data_list = [] resolution = 100 plot_step = 0.02 colors=('b', 'k', 'r') fig, ax = plt.subplots() x_min, x_max = data_set[:, 0].min() - 1, data_set[:, 0].max() + 1 y_min, y_max = data_set[:, 1].min() - 1, data_set[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step), np.arange(y_min, y_max, plot_step)) Z = adaClassify(np.ndarray.tolist(np.c_[xx.ravel(), yy.ravel()]),classifierArray) Z = Z.reshape(xx.shape) #ax.contour(xx, yy, Z, colors='k') cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired) # Plot decision contours using grid and # make a scatter plot of training data #ax.contour(xrange, yrange, grid, colors='k') plt.scatter(sample_0_data_set[:,0], sample_0_data_set[:,1], s=15, c="blue") plt.scatter(sample_1_data_set[:,0], sample_1_data_set[:,1], s=15, c="red") #ax.scatter(data_set[:,0], data_set[:,1],c=data_set[:,2], cmap=plt.cm.viridis, lw=0, alpha=0.5) plt.title("Adabost Decision Boundary") plt.savefig("adabost_2949.png") plt.show() def plotROC(predStrengths, classLabels): import matplotlib.pyplot as plt %matplotlib inline cur = (1.0,1.0) #cursor ySum = 0.0 #variable to calculate AUC numPosClas = sum(classLabels==1.0) yStep = 1/float(numPosClas) xStep = 1/float(len(classLabels)-numPosClas) sortedIndicies = predStrengths.argsort()#get sorted index, it's reverse fig = plt.figure() fig.clf() ax = plt.subplot(111) #loop through all the values, drawing a line segment at each point for index in sortedIndicies.tolist()[0]: if classLabels[index] == 1.0: delX = 0; delY = yStep; else: delX = xStep; delY = 0; ySum += cur[1] #draw line from cur to (cur[0]-delX,cur[1]-delY) ax.plot([cur[0],cur[0]-delX],[cur[1],cur[1]-delY], c='b') cur = (cur[0]-delX,cur[1]-delY) ax.plot([0,1],[0,1],'b--') plt.xlabel('False positive rate') plt.ylabel('True positive rate') plt.title('ROC curve for AdaBoost horse colic detection system') ax.axis([0,1,0,1]) plt.show() print ("the Area Under the Curve is: ",ySum*xStep) plotROC(aggClassEst.T,data_set[:,2]) numPosClas = sum(np.array(np.mat(data_set[:,2]))==1.0) adaClassify(np.ndarray.tolist(np.c_[xx.ravel(), yy.ravel()]),classifierArray)[399] plot_colors = "br" plot_step = 0.02 class_names = "AB" plt.figure(figsize=(10, 5)) # Plot the decision boundaries plt.subplot(121) x_min, x_max = data_set[:, 0].min() - 1, data_set[:, 0].max() + 1 y_min, y_max = data_set[:, 1].min() - 1, data_set[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step), np.arange(y_min, y_max, plot_step)) Z = adaClassify(np.ndarray.tolist(np.c_[xx.ravel(), yy.ravel()]),classifierArray) Z = Z.reshape(xx.shape) cs = plt.contourf(xx, yy, Z, 
cmap=plt.cm.Paired) plt.axis("tight") plt.scatter(sample_0_data_set[:,0], sample_0_data_set[:,1], s=15, c="blue") plt.scatter(sample_1_data_set[:,0], sample_1_data_set[:,1], s=15, c="red") plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) plt.legend(loc='upper right') plt.xlabel('x') plt.ylabel('y') plt.title('Decision Boundary') plt.show() ###Output No handles with labels found to put in legend.
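###Markdown As a quick cross-check of the hand-rolled booster (analogous to the earlier scikit-learn `SVC` comparison), scikit-learn's `AdaBoostClassifier`, whose default weak learner is a depth-1 decision stump, can be fit on the same `data_set`; this is only a sketch, and its training error will differ from the implementation above because the number of stumps and the stopping rule differ: ###Code
from sklearn.ensemble import AdaBoostClassifier

# Fit sklearn's AdaBoost (decision stumps by default) on the simulated data defined above
X_ab, y_ab = data_set[:, 0:2], data_set[:, 2]
ada = AdaBoostClassifier(n_estimators=50, random_state=0)
ada.fit(X_ab, y_ab)
print("Training error:", 1 - ada.score(X_ab, y_ab))
###Output _____no_output_____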
notebooks/2.3.build.train.eval.lstm.economic.indicators.monthly.ipynb
###Markdown Make a clen copy of data. This allows us to modify freely while we have always the original data for any further reference. ###Code df_original = df.copy() ###Output _____no_output_____ ###Markdown Reindex data frame per the time stamps ###Code df.set_index("DATE", inplace=True) df.head() # det trend_data df_trend = pd.DataFrame() df_detrended = pd.DataFrame() for i in range(df.shape[1]): cycle, trend = sm.tsa.filters.hpfilter(df.iloc[:, i], 1600) df_trend[df.columns[i]] = trend df_detrended[df.columns[i]] = df.iloc[:, i] - trend # plot time series+trend ncols = 2 fig, axs = plt.subplots(nrows=int(np.ceil(df.columns.size/ncols)), ncols=ncols, figsize=(12, 22), sharex=True) fig.tight_layout() for idx, label in enumerate(df.columns): i = idx // ncols j = idx % ncols df_trend[label].plot(title=label, ax=axs[i, j], color="blue") df[label].plot(title=label, ax=axs[i, j], color="red") # plot detrended time series ncols = 2 fig, axs = plt.subplots(nrows=int(np.ceil(df.columns.size/ncols)), ncols=ncols, figsize=(12, 22), sharex=True) fig.tight_layout() for idx, label in enumerate(df.columns): i = idx // ncols j = idx % ncols df_detrended[label].plot(title=label, ax=axs[i, j], color="blue") # rescale data values = df_detrended.values.astype("float32") scaler = MinMaxScaler(feature_range=(0, 1)) values_scaled = scaler.fit_transform(values) n_variables = values.shape[1] # USRECM: NBER based Recession Indicators for the United States from the Peak through the Trough # index_target = NA # GDPC1: Real Gross Domestic Product # index_target = NA # W875RX1: Real personal income excluding current transfer receipts index_target = 13 # PAYEMS: All Employees: Total Nonfarm Payrolls index_target = 0 # INDPRO: Industrial Production Index index_target = 12 # CMRMTSPL: Real Manufacturing and Trade Industries Sales # set model parameters n_lags = 12 n_sequences = 12 n_train = int(values.shape[0] * 0.8) n_units = 20 # set train parameters optimizer = "adam" loss = "mse" n_epochs = 30 sz_batch = 20 verbose = 1 df_reframed = series_to_supervised(values_scaled, n_lags, n_sequences) df_reframed.head() # [print(elem) for elem in df_reframed.columns] # create train/valid data # split into train and test sets train_values = df_reframed.values[:n_train, :] valid_values = df_reframed.values[n_train:, :] print(f"Train Inputs Shape: {train_values.shape}") print(f"Valid Inputs Shape: {valid_values.shape}") # split into input and targets n_train, n_ = train_values.shape n_valid, n_ = valid_values.shape n_features = n_lags * n_variables # split into input and targets n_train, n_ = train_values.shape n_valid, n_ = valid_values.shape n_features = n_lags * n_variables # split into input and targets n_train, n_ = train_values.shape n_valid, n_ = valid_values.shape n_observations = n_lags * n_variables x_train, y_train = train_values[:, :n_observations], train_values[:, n_observations+index_target-1:n_:n_variables] x_valid, y_valid = valid_values[:, :n_observations], valid_values[:, n_observations+index_target-1:n_:n_variables] print(f"Train Inputs Shape: {x_train.shape}, Train Targets Shape: {y_train.shape}") print(f"Valid Inputs Shape: {x_valid.shape}, Valid Targets Shape: {y_valid.shape}") # reshape data as required by ltsm x_train = x_train.reshape((n_train, n_lags, n_variables)) x_valid = x_valid.reshape((n_valid, n_lags, n_variables)) print(f"Train Inputs Shape: {x_train.shape}, Train Targets Shape: {y_train.shape}") print(f"Valid Inputs Shape: {x_valid.shape}, Valid Targets Shape: {y_valid.shape}") # build model 
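# Note: the network maps n_lags timesteps of all n_variables input features to an
# n_sequences-step-ahead forecast of the single target column selected by index_target.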
model = Sequential() model.add(LSTM(n_units, input_shape=(n_lags, n_variables))) model.add(Dense(n_sequences)) model.compile(loss=loss, optimizer=optimizer) # train model history = model.fit(x_train, y_train, epochs=n_epochs, batch_size=sz_batch, validation_data=(x_valid, y_valid), verbose=verbose, shuffle=False) # plot history figsize = (12, 7) titlefontsize = 20 xtickfontsize = 15 ytickfontsize = 15 labelfontsize = 19 legendfontsize = 19 linewidth = 3 fig = plt.figure(figsize=figsize) ax = fig.subplots(1, 1) ax.plot(np.arange(1, n_epochs+1), history.history['loss'], "-", linewidth=linewidth, label='Train Loss') ax.plot(np.arange(1, n_epochs+1), history.history['val_loss'], "-", linewidth=linewidth, label='Valid Loss') ax.set_xlabel("Epoch #", fontsize=labelfontsize) ax.set_ylabel("Loss - " + loss.upper(), fontsize=labelfontsize) ax.tick_params( axis='x', which='both', labelsize=xtickfontsize) ax.tick_params( axis='y', labelsize=ytickfontsize) ax.set_title("Train Loss " + f"({loss})".upper() + " vs Epoch", fontsize=titlefontsize, fontweight="bold" ) ax.legend(loc="upper right", fontsize=legendfontsize, framealpha=0.8, fancybox=True, frameon=True, shadow=False, edgecolor="k") ax.set_xlim([0, n_epochs+1]) plt.tight_layout() fname = f"loss-plot-valid.png" # fig.savefig(os.path.join(ROOT_DIR, "reports", "figures", fname), transparent=False, dpi=dpi) plt.show() # make a prediction yhat_valid = model.predict(x_valid) temp = x_valid.reshape((n_valid, n_lags*n_variables)) temp[-n_sequences*n_variables:][:, index_target:n_sequences*n_variables:n_variables] = \ y_valid.reshape((n_valid, n_sequences)) temp = temp.reshape((-1, n_variables)) y_valid = scaler.inverse_transform(temp)[:, index_target] temp = x_valid.reshape((n_valid, n_lags*n_variables)) temp[-n_sequences*n_variables:][:, index_target:n_sequences*n_variables:n_variables] = \ yhat_valid.reshape((n_valid, n_sequences)) temp = temp.reshape((-1, n_variables)) yhat_valid = scaler.inverse_transform(temp)[:, index_target] n = 12 ncols = 3 fig, axs = plt.subplots(nrows=int(np.ceil(n/ncols)), ncols=ncols, figsize=(12, 22), sharex=True) for k in range(n): i = k // ncols j = k % ncols axs[i, j].plot(np.arange(1,n_sequences+1), y_valid[i*n_sequences:i*n_sequences+n_sequences], color="blue", label="True") axs[i, j].plot(np.arange(1,n_sequences+1), yhat_valid[i*n_sequences:i*n_sequences+n_sequences], color="black", label="Predicted") axs[i, j].legend(loc="best", fontsize=legendfontsize, framealpha=0.8, fancybox=True, frameon=True, shadow=False, edgecolor="k") # axs[i, j].set_ylim([0, 1.1*max(y_valid)]) # print(y_valid[i*n_sequences:i*n_sequences+n_sequences].shape) ###Output _____no_output_____ ###Markdown Perform forecatsting ###Code x_scaled = values_scaled[-n_lags:, :].reshape((1, -1)).reshape((-1, n_sequences, n_variables)) yhat_scaled = model.predict(x_scaled) temp = x_scaled.reshape((1, n_lags*n_variables)) temp[-n_sequences*n_variables:][:, index_target:n_sequences*n_variables:n_variables] = \ yhat_scaled.reshape((1, n_sequences)) temp = temp.reshape((-1, n_variables)) yhat = scaler.inverse_transform(temp)[:, index_target] start_date = df_detrended.index[-1] mrange = month_range(start_date, n_sequences+1) data = [] trace = go.Scatter( x=list(df_detrended.index.astype(str).values), y=df_detrended.iloc[:, index_target], name="Original", # line = dict(color=colors[i]), # colorscale='Viridis', opacity = 0.9, mode="lines" ) trace_predict = go.Scatter( x=[d for d in mrange], y=list(df_detrended.values[-1:, index_target]) + list(yhat), name="Predict", 
# line = dict(color=colors[i]), # colorscale='Viridis', opacity = 0.9, mode="lines" ) data = [trace, trace_predict] layout = dict( title=df.columns[index_target], yaxis= dict(title = df_detrended.columns[index_target]), xaxis=dict( title="Time", rangeselector=dict( buttons=list([ dict(count=1, label='1m', step='month', stepmode='backward'), dict(count=6, label='6m', step='month', stepmode='backward'), dict(step='all') ]) ), rangeslider=dict( visible = True ), type='date' ) ) fig = dict(data=data, layout=layout) py.iplot(fig, filename = df.columns[index_target]) ###Output _____no_output_____
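###Markdown The cells above call a helper `series_to_supervised` that is not defined in this excerpt. For reference, the sketch below shows the lag/lead framing such a helper is commonly assumed to perform (all variables at t-n_in, ..., t-1 as inputs, followed by all variables at t, ..., t+n_out-1 as outputs), which is broadly consistent with the column slicing used above; treat it as an illustrative sketch rather than the project's actual implementation. ###Code
import pandas as pd

def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    """Illustrative sketch (assumed behaviour): frame a multivariate series as a
    supervised-learning table of lagged inputs and future outputs."""
    df = pd.DataFrame(data)
    n_vars = df.shape[1]
    cols, names = [], []
    # input columns: every variable at t-n_in, ..., t-1
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [f"var{j+1}(t-{i})" for j in range(n_vars)]
    # output columns: every variable at t, t+1, ..., t+n_out-1
    for i in range(n_out):
        cols.append(df.shift(-i))
        names += [f"var{j+1}(t)" if i == 0 else f"var{j+1}(t+{i})" for j in range(n_vars)]
    framed = pd.concat(cols, axis=1)
    framed.columns = names
    if dropnan:
        framed.dropna(inplace=True)  # edge rows lack full lags/leads
    return framed ###Output _____no_output_____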
notebooks/R-experiment-evaluation/evaluation-part1.ipynb
###Markdown Evaluation of SubSVDD against benchmark data set ###Code path <- "../../data/output/evaluation-part1.csv" data <- read_csv(path, col_types = cols(id=col_character(), c_start_quality=col_skip(), c_end_quality=col_skip(), c_maximum=col_skip(), c_ramp_up=col_skip(), c_quality_range=col_skip(), c_total_quality_range=col_skip(), c_average_end_quality=col_skip(), c_average_quality_change=col_skip(), c_average_gain=col_skip(), c_average_loss=col_skip(), c_learning_stability=col_skip(), c_ratio_of_outlier_queries=col_skip(), auc_start_quality=col_skip(), auc_end_quality=col_skip(), auc_maximum=col_skip(), auc_ramp_up=col_skip(), auc_quality_range=col_skip(), auc_total_quality_range=col_skip(), auc_average_end_quality=col_skip(), auc_average_quality_change=col_skip(), auc_average_gain=col_skip(), auc_average_loss=col_skip(), auc_learning_stability=col_skip(), auc_ratio_of_outlier_queries=col_skip(), pauc_start_quality=col_skip(), pauc_end_quality=col_skip(), pauc_maximum=col_skip(), pauc_ramp_up=col_skip(), pauc_quality_range=col_skip(), pauc_total_quality_range=col_skip(), pauc_average_end_quality=col_skip(), pauc_average_quality_change=col_skip(), pauc_average_gain=col_skip(), pauc_average_loss=col_skip(), pauc_learning_stability=col_skip(), pauc_ratio_of_outlier_queries=col_skip())) data <- data %>% filter(scenario != "01-subsvdd-smallsubspaces") %>% mutate(data_set = recode_factor(data_set, "Annthyroid" = "Thyroid", "Cardiotocography" = "Cardio", "HeartDisease" = "Heart", "PageBlocks" = "Page", "SpamBase" = "Spam")) data %>% group_by(exit_code, data_set) %>% count() data %>% group_by(data_set) %>% summarize(D = max(num_dimensions), N = max(num_points)) %>% rename(Data = data_set) ###Output _____no_output_____ ###Markdown Extract experimental run for indepth analysis ###Code data %>% filter(max_size_subspaces == 2, num_subspaces == 10) %>% group_by(data_set) %>% filter(m_end_quality == max(m_end_quality)) %>% select(data_set, m_end_quality, id) ###Output _____no_output_____ ###Markdown Benchmark Comparison ###Code plotVar <- data %>% mutate(file_version = str_extract(file_name, "v\\w{2}")) %>% filter(init_strategy_C == "FixedCStrategy(0.45)" | (init_strategy_C=="BoundedTaxErrorEstimate") & model != "SubSVDD") %>% filter(exit_code == "success", num_al_iterations==50) %>% # !!! Do not filter for min_size_subspaces, as this variable holds the acutal sizes, and not the config limits !!! 
select(data_set, file_version, num_subspaces, max_size_subspaces, model, qs, qs_combination_fct, weight_update_strategy, m_average_end_quality, m_quality_range, m_end_quality, m_ramp_up, m_ratio_of_outlier_queries, initial_pool_resample_version) options(repr.plot.width=9, repr.plot.height=4) pv <- plotVar %>% mutate(model = case_when(model == "SubSVDD" ~ paste0(model, "(", num_subspaces, ")"), model == "SSAD_0.1" ~ "SSAD", TRUE ~ model)) %>% filter( qs != "RandomOutlierPQs", # change this for results with RandomOutlierPQs is.na(max_size_subspaces) | max_size_subspaces > 4, is.na(qs_combination_fct) | qs_combination_fct == "sum", is.na(weight_update_strategy) | weight_update_strategy == "out_0.01-in_10.0") pv %>%ggplot(aes(y=m_average_end_quality, x=data_set, color = model)) + geom_boxplot() + geom_point(position = position_jitterdodge()) + labs(title="Model Comparison", y="Average End Quality (MCC)", x = "") + scale_colour_brewer(palette="Paired") + plot_theme + theme(legend.title = element_blank()) ###Output _____no_output_____ ###Markdown Nummber of experiments per data set and classifier ###Code pv %>% group_by(data_set, model) %>% summarize(n_exp = n()) %>% spread(data_set, n_exp) quality_table <- pv %>% group_by(data_set, model) %>% summarize(median_average_end_quality = median(m_average_end_quality)) %>% rename(" " = data_set) %>% spread(model, median_average_end_quality) quality_table print(xtable(quality_table, caption = "Comparison of Median AEQ after 50 iterations; 5-dim to 8-dim subspaces.", latex.environments = "center", label="tab:aeq-large-subspaces", align = "llcccc"), booktabs = TRUE, hline.after = c(0, 7), include.rownames = FALSE, floating = FALSE, file="../tables/quality_table.tex" ) roq_table <- pv %>% group_by(data_set, model) %>% summarize(median_ratio_of_outlier_queries = median(m_ratio_of_outlier_queries)) %>% # rename(" " = data_set) %>% spread(model, median_ratio_of_outlier_queries) %>% ungroup() %>% select(-data_set) roq_table print(xtable(roq_table, caption = "Ration of outlier queries after 50 iterations; 5-dim to 8-dim subspaces.", latex.environments = "center", label="tab:roq-large-subspaces", align = "llccc"), booktabs = TRUE, include.rownames = FALSE, hline.after = c(0, 7), floating = FALSE, file="../tables/roq_table.tex" ) ###Output _____no_output_____ ###Markdown Extract experimental run ###Code library(RColorBrewer) custom_pal <- brewer.pal(5, "Set1")[c(2,3,4)] options(repr.plot.width=9, repr.plot.height=3) plotVar %>% filter(model=="SubSVDD") %>% # subspace size is random, so there are a few instances where the maximum size per set is smaller than the limit filter(max_size_subspaces != 3, max_size_subspaces !=7) %>% ggplot(aes(x=data_set, y=m_average_end_quality, color=factor(max_size_subspaces))) + labs(y="Avg End Quality (MCC)", x = "", color = "Max Subspace Size") + geom_boxplot() + geom_point(position = position_jitterdodge(), alpha=0.2) + scale_colour_manual(values=custom_pal) + plot_theme + theme(legend.position = "right") ggsave("../plots/subsvdd_subspace_size_comparison.pdf", width = 8, height = 2, plot = last_plot(), device = "pdf") options(repr.plot.width=9, repr.plot.height=3) plotVar %>% # filter(model=="SubSVDD") %>% ggplot(aes(x=data_set, y=m_end_quality, color=factor(qs))) + labs(title="Comparison of Query Strategy", y="End Quality (MCC)", x = "", color = "QS") + geom_boxplot() + scale_colour_brewer(palette="Paired") + facet_grid(row=vars(model))+ plot_theme plotVar %>% filter(model=="SubSVDD") %>% ggplot(aes(x=data_set, y=m_end_quality, 
color=factor(num_subspaces))) + # geom_point(position = position_jitterdodge()) + labs(y="End Quality (MCC)", x = "", color = "#Subspaces") + ggtitle("Comparison of #Subspaces") + geom_boxplot() + scale_colour_brewer(palette="Paired") + plot_theme ###Output _____no_output_____
quizzes/quiz2/Q2_Unid.ipynb
###Markdown Quiz-2: Jove part General Notes* **USE Jupyter notebook -- not Jupyter lab** -- for this quiz, because Jupyter widgets don't work correctly under the latter* NOTE THAT since the quizzes are given to you each week and there are roughly 150 of you, we can't provide deep comments on your work. These Jove quizzes are mainly for your "self-study". If you have questions, please let us know through Canvas or during office hours.* VERY IMPORTANT: If a cell's numbering "In [ ]" remains stuck as follows```In [*]```It means that this cell is infinitely looping. Remove the infinite loop if you can. If you can't debug the situation, contact us via Canvas, sending us the Jupyter (Jove) notebook. Goals of Quiz-2* Learn about language operations* Read lots of Jove code * RUN and then READ ALL THE CODE IN THIS NOTEBOOK)!! We won't test you on all your reading, but still, reading is to your advantage* Watch two videos, one on languages and another on DFA* Learn how to extend these Jove modules* Learn how to define and test DFA How to answer* Each section starting with "QN: " is a question. An answer is expected under that section.* Run various commands and observe the results. __Wherever I have placed the string --answer-- , an answer is expected in either a code cell or a markdown cell, placed below the --answer-- string.__ Video on Alphabet, Languages, etc.__Unfortunately the recording volume was not high. Please wear a head-set__ ###Code # This video corresponds to the Jupyter file # Module2_LanguageOps.ipynb that you can find under "notebook/driver" # of the Jove github from IPython.display import YouTubeVideo YouTubeVideo('TAEYvJn5eGc') ###Output _____no_output_____ ###Markdown QN: Provide a summary of the above video Under the "--answer--" line -- just 5 bullets of one sentence each. Just pick out some highlights of Jove. There is no "best answer". A reasonable effort is what we are looking for.--answer-- * S1* S2* S3* S4* S5 Code to define language operations We first define the zero or phi or empty language ###Code # The theory of languages : Primitive languages and language builders def lphi(): """In : None. Out: Zero language, i.e. set({}). """ return set({}) # {} could be dict; so we put set(..) ###Output _____no_output_____ ###Markdown Now let us define the Unit language ("1" for languages with respect to concatenation viewed as multiplication).Let us also define language concatenation.> $L1 \; L2 \;\; =\;\; \{x y \; \mid \; x\in L1 \;\wedge\; y\in L2\}$ ###Code def lunit(): """In : None. Out: {""} (a language : a set). """ return {""} # Set with epsilon def lcat(L1,L2): """In : L1 (language : a set), L2 (language : a set). Out: L1 concat L2 (language : a set). 
Example: L1 = {'ab', 'bc'} L2 = {'11', 'ab', '22'} lcat(L1,L2) -> {'abab', 'bc22', 'ab11', 'ab22', 'bcab', 'bc11'} """ return {x+y for x in L1 for y in L2} ###Output _____no_output_____ ###Markdown Examples of language operations ###Code L = {'a','bc'} print( "lcat(lphi(), L) = ", lcat(lphi(), L) ) print( "lcat(lunit(), L) = ", lcat(lunit(), L) ) ###Output _____no_output_____ ###Markdown Let us define another language through set comprehension, and exercise many different applications of concatenation.* Consider the language > M = $\{ 0^m 1^n \; \mid \; 0 \leq m,n \leq 3 \;\wedge\; m < n \}$ ###Code M = {"0"*m + "1"*n for m in range(3) for n in range(4) if m < n } print(M) print("lcat(L,M) = ", lcat(L,M)) print("lcat(M,lphi()) = ", lcat(M,lphi())) print("lcat(M,lunit()) = ", lcat(M,lunit())) ###Output _____no_output_____ ###Markdown QN: Show that you understand how lcat works--answer--* lcat(M,lunit()) == {'1', '11', '00111', '111', '0111', '011'} because: ...one sentence... With concatenation and Unit under our belt, we can define exponentiation recursively. Exponentiation is repeated multiplication (which for us is concatenation).> $L^n = L L^{n-1}$> $L^0 = Unit$We must have $L^0 = lunit()$; that is the only logical choice. (If you defined $L^0 = lphi()$, bad things will happen! Know what those bad things are!!)The code below simulates the aforesaid recursion. ###Code def lexp(L,n): """In : L (language : a set), n (exponent : a nat). Out: L^n (language : a set). Example: L = {'ab', 'bc'} n = 2 lexp(A,2) -> {'abab', 'bcab', 'bcbc', 'abbc'} """ return lunit() if n == 0 else lcat(L, lexp(L, n-1)) L = {'a','bc'} M = {"0"*m + "1"*n for m in range(3) for n in range(4) if m < n } print('M = ', M) print('lexp(M,2) = ') lexp(M,2) L = {'a','bc'} M = {"0"*m + "1"*n for m in range(3) for n in range(4) if m < n } lexp(lcat(L,M),1) ###Output _____no_output_____ ###Markdown With lexp under our belt, we can define lunion and lstar. We will define "star up to n" and then set n to infinity.> $L^{*n} = L^n \; \cup \; L^{*(n-1)}$> $L^{*0} = Unit$And thus the classical $L^* = L^{*n}\;\; {\rm for}\;\; n=\infty$, which we won't bother to "run" in Python :-). We will only run $L^{*n}$ in Python.We also take care to test that lstar works correctly for lphi and Unit. ###Code def lunion(L1,L2): """In : L1 (language : a set), L2 (language : a set). Out: L1 union L2 (language : a set). """ return L1 | L2 def lstar(L,n): """In : L (language : a set), n (bound for lstar : a nat). Out: L*_n (language : a set) Example: L = {'ab','bc'} n = 2 lstar(L,2) -> {'abab', 'bcbc', 'ab', 'abbc', '', 'bc', 'bcab'} """ return lunit() if n == 0 else lunion(lexp(L,n), lstar(L,n-1)) ###Output _____no_output_____ ###Markdown QN: Recursive DefinitionsIn the code so far, we have lstar recursively defined in terms of lexp, and lexp defined recursively in terms of lcat. What are the basis cases in these recursive definitions?(under the "--answer--" line -- just 2 bullets of one sentence each--answer-- * Basis case for lstar: * Basis case for lexp: ###Code L1 = {'a','bc'} lstar(L1,2) L2 = {'ab','bc'} lstar(L2,2) L2 = {'ab','bc'} lstar(L2,3) ###Output _____no_output_____ ###Markdown RUN ALL CODE IN THIS NOTEBOOK USING JUPYTER NOTEBOOKJupyter lab does not like Jupyter widgets. So even if the code ran so far under jupyter lab, switch to Jupyter notebooks and rerun. Interactive depiction of star using widgetsRun the code below and show that you can make menu selections to pull-down select L1 and L2. 
###Code import ipywidgets as wdg L1 = {'a','bc'} L2 = {'ab','bc'} M = {'011', '111', '11', '0111', '00111', '1'} wdg.interact(lstar, L={'L1': L1, 'L2':L2, 'M': M, 'lphi': lphi(), 'lunit' : lunit()}, n=(0,7)) import ipywidgets as wdg L1 = {'a','bc'} L2 = {'ab','bc'} # L3 = ...define L3 here... M = {'011', '111', '11', '0111', '00111', '1'} wdg.interact(lstar, L={ # Add the case for'L3': L3, ..here.. 'L1': L1, 'L2':L2, 'M': M, 'lphi': lphi(), 'lunit' : lunit()}, n=(0,7)) ###Output _____no_output_____ ###Markdown QN: The star of lunit and lphiArgue that the code for the star of lunit() and lphi() is correct. Write one sentence answer.--answer--* lunit()'s star appears correct because: ..one sentence here..* lphi()'s star appears correct because: ..one sentence here.. QN: Show that you can extend the Jove codeYou are required to modify the code in Section ``*Interactive depiction of star using widgets*''Copy the entire code to the cell below ("copy here" below) and make these changes:1) In the code```wdg.interact```add another menu item by adding```'L3': L3 ```Make sure you have defined ```L3 = {'0','1','2'}```right underneath L2's definition.Then show by running the cell below and show that you can obtain the lstar of L3 at size 6.That is, run ```lstar(L3, 6)```by using the menu selection.--answer--- Your answer will be in the next code cell, below. QN: Copy Here ###Code # YOUR CODE COPIED FROM ABOVE AND MODIFLED! # When this cell is run, you must be able to select L3 and produce lstar(L3, 6) # Your code copy-pasted and modified should be below this line. #--answer-- ###Output _____no_output_____ ###Markdown QN: My L3's star appears correct--answer--Write one sentence here saying why your definition of L3's star appears correct* My definition of L3's star appears correct because: ...one sentence... Reversal and homomorphism now ###Code # In Python, there isn't direct support for reversing a string. # The backward selection method implemented by S[::-1] is what # many recommend. This leaves the start and stride empty, and # specifies the direction to be going backwards. # Another method is "".join(reversed(s)) to reverse s def srev(S): """In : S (string) Out: reverse of S (string) Example: srev('ab') -> 'ba' """ return S[::-1] def lrev(L): """In : L (language : a set) Out: reverse of L (language : a set) Example: lrev({'ab', 'bc'}) -> {'cb', 'ba'} """ return set(map(lambda x: srev(x), L)) def shomo(S,f): """In : S (string) f (fun ction from char to char) Out: String homomorphism of S wrt f. Example: S = "abcd" f = lambda x: chr( (ord(x)+1) % 256 ) shomo("abcd",f) -> 'bcde' """ return "".join(map(f,S)) def lhomo(L,f): """In : L (language : set of strings) f (function from char to char) Out: Lang. homomorphism of L wrt f (language : set of str) Example: L = {"Hello there", "a", "A"} f = rot13 = lambda x: chr( (ord(x)+13) % 256 ) lhomo(L, rot13) -> {'N', 'Uryy|-\x81ur\x7fr', 'n'} """ return set(map(lambda S: shomo(S,f), L)) L={'ab', '007'} # modulo-rotate all chars by one. rot1 = lambda x: chr( (ord(x)+1) % 256 ) # Don't be baffled if the sets print in a different order! # Sets don't have a required positional presentation order # Watch for the CONTENTS of the set reversing !! print('lrev(L) = ', lrev(L)) print('lhomo(L, rot1) = ', lhomo(L, rot1)) print('lrev(lhomo(L), rot1) = ', lrev(lhomo(L, rot1))) ###Output _____no_output_____ ###Markdown QN: The answer is correct* Argue why the following assertion is true: ```lrev(lhomo(L), rot1) == {'811', 'cb'}```* .. your answer in one sentence .. 
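As an extra illustration (separate from the quiz question above), the cell below checks a general property that the definitions above satisfy: reversing a concatenation reverses the order of the factors, i.e. lrev(lcat(A, B)) == lcat(lrev(B), lrev(A)). The languages A and B are throwaway examples introduced only for this check. ###Code
# Reversal anti-distributes over concatenation: (A B)^R = B^R A^R
A = {'ab', '0'}
B = {'c', '12'}
assert lrev(lcat(A, B)) == lcat(lrev(B), lrev(A))
print(lrev(lcat(A, B))) ###Output _____no_output_____ ###Markdown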
Let us now introduce powersets We now define the powerset of a set S. We work with lists, as sets cannot contain other sets (not hashable, etc). But barring all that, here is the recursive definition being used.> Let $PowSminusX$ = $powset(S \setminus x)$> Then, given $x \in S$, we have $powset(S)$ = $PowSminusX \cup$ { $y\cup x$ $\mid$ $y\in PowSminusX$ } That is,* Take out some $x\in S$* Recursively compute $PowSminusX$* Now, $powset(S)$ has all the sets in $PowSminusX$ plus all the sets in $PowSminusX$ with $x$ added back, as well.Here is that code now. __Below, in a new markdown cell, write a clear description in about 3 sentences of how the mathematical definition above is captured in the code below. Ideal answer: Call out the above three bullets and under each of theabove bullets, write the code line that realizes these bullets.__ ###Code def powset(S): """In : S (set) Out: List of lists representing powerset. Since sets/lists are unhashable, we convert the set to a list,perform the powerset operations, leaving the result as a list (can't convert back to a set). Example: S = {'ab', 'bc'} powset(S) -> [['ab', 'bc'], ['bc'], ['ab'], []] """ L=list(S) if L==[]: return([[]]) else: pow_rest0 = powset(L[1:]) pow_rest1 = list(map(lambda Ls: [L[0]] + Ls, pow_rest0)) return(pow_rest0 + pow_rest1) ###Output _____no_output_____ ###Markdown QN: Testing powsetBelow, explain the results produced briefly.--answer--* powset of {a,b,c} appears to be correct, because: ...one sentence... ###Code powset({'a','b','c'}) ###Output _____no_output_____ ###Markdown Finally, we have a whole list of familiar language-theoretic operations:* lunion - language union* lint - language intersection* lsymdiff - language symmetric difference* lminus - language subtraction* lissubset - language subset test* lissuperset - language superset test* lcomplem - language complement with respect to "star upto m" of the alphabet (not the full alphabet star, mind you)* product - cartesian productWe do not provide too many tests for these rather familiar functions. But please make sure you understand language complements well! ###Code # Define lunion (as before) def lunion(L1,L2): """In : L1 (language : set of strings) L2 (language : set of strings) Out: L1 union L2 (sets of strings) """ return L1 | L2 def lint(L1,L2): """In : L1 (language : set of strings) L2 (language : set of strings) Out: L1 intersection L2 (sets of strings) """ return L1 & L2 def lsymdiff(L1,L2): """In : L1 (language : set of strings) L2 (language : set of strings) Out: (L1 \ L2) union (L2 \ L1) (sets of strings) Example: lsymdiff({'ab', 'bc'}, {'11', 'ab', '22'}) -> {'11', '22', 'bc'} """ return L1 ^ L2 def lminus(L1,L2): """Language subtraction of two languages (sets of strings) Can do it as L1.difference(L2) also. 
""" return L1 - L2 def lissubset(L1,L2): """In : L1 (language : set of strings) L2 (language : set of strings) Out: L1 is subset or equal to L2 (True/False) """ return L1 <= L2 def lissuperset(L1,L2): """In : L1 (language : set of strings) L2 (language : set of strings) Out: L1 is superset or equal to L2 (True/False) """ return L1 >= L2 def lcomplem(L,sigma,n): """In : L (language : set of strings) sigma (alphabet : set of strings) n (finite limit for lstar : int) Out : sigma*_n - L (language : set of strings) Example: L = {'0', '10', '010'} sigma = {'0', '1'} n = 3 lcomplem(L4,{'0','1'}, 3) -> {'', '000', '101', '011', '00', '1', '001', '110', '111', '100', '01', '11'} """ return lstar(sigma,n) - L def product(S1,S2): """In : S1 (set) S2 (set) Out: Cartesian product of S1 and S2 (set of pairs) """ return { (x,y) for x in S1 for y in S2 } #--end L1 = {'0101'} L2 = lstar({'0','1'}, 2) # Python variable L2L1 denotes concat of L2 and L1 L2L1 = lcat(L2,L1) L2L1 L3 = lcat(L1, lunion(lunit(), L2L1)) L3 ###Output _____no_output_____ ###Markdown QN: Show you can define the symmetric difference of two sets in Jove without using "^" The code for lsymdiff is written above using Python's```^```operator. Show that you don''t need to use this operator (you can define it using lminus, lunion, etc.)Call this new function new_lsymmdiff.Test it as follows:```new_lsymmdiff(lstar({'0','1'}, 2), lstar({'0','1'}, 3))``` Your answer code is in the next code cell below--answer--- ###Code # Write your new_lsymmdiff code here and test it in this very cell on the above test # ...code... ###Output _____no_output_____ ###Markdown Numeric Order ###Code from math import floor, log, pow def nthnumeric(N, Sigma={'a','b'}): """Assume Sigma is a 2-sized list/set of chars (default {'a','b'}). Produce the Nth string in numeric order, where N >= 0. Idea : Given N, get b = floor(log_2(N+1)) - need that many places; what to fill in the places is the binary code for N - (2^b - 1) with 0 as Sigma[0] and 1 as Sigma[1]. """ if (type(Sigma)==set): S = list(Sigma) else: assert(type(Sigma)==list ), "Expected to be given set/list for arg2 of nthnumeric." S = Sigma assert(len(Sigma)==2 ),"Expected to be given a Sigma of length 2." if(N==0): return '' else: width = floor(log(N+1, 2)) tofill = int(N - pow(2, width) + 1) relevant_binstr = bin(tofill)[2::] # strip the 0b # in the leading string len_to_makeup = width - len(relevant_binstr) return (S[0]*len_to_makeup + shomo(relevant_binstr, lambda x: S[1] if x=='1' else S[0])) ###Output _____no_output_____ ###Markdown Testing Numeric Order ###Code nthnumeric(7,['0','1']) ###Output _____no_output_____ ###Markdown QN: Justify that the above answer is correct--answer--* nthnumeric(7, ['0', '1']) produces '000' because: ...one sentence... ###Code # This is an excellent recipe for generating test inputs to machines tests = [ nthnumeric(i, ['0','1']) for i in range(18) ] for inp in tests: print("Test input =", inp) # Below, explain the results produced briefly. #--answer-- ###Output _____no_output_____ ###Markdown Introducing DFAThe video below corresponds to Drive_DFA_Unit1.ipynb that is foundin the "notebooks/driver" link of the Jove github ###Code # This Youtube video walks through this notebook # Watch and enjoy. No specific work yet. # You can put in 2x speed and quickly watch, slowing down when # interesting parts come! 
from IPython.display import YouTubeVideo YouTubeVideo('Bdr926TeQyQ') ###Output _____no_output_____ ###Markdown Basics of DFAThis unit is going to introduce you to the basics of Deterministic Finite Automata and Regular Languages.We have recorded a Youtube video that will explain this notebook plus a few related things! This will serve as material for Lecture 4 and perhaps also later lectures. Regular languagesRegular languages are one family (or set) of languages. (Both words "family" and "set" mean the same.)A regular language is specified by drawing a DFA. Once you finish drawing a DFA, you would have defined a regular language. (We will soon tell you why you might want to take the trouble of drawing DFA and obtain regular languages. For now, we will finish defining terms.) Ultimately the aim is to not produce drawings. We aim to define a simple type of machine that represents goto based programs. we shall define the notion of a transition system and introduce a simple text-based markdown language that helps specify transition systems. Once the transition system is defined, a drawing can be automatically produced using utilities we provide. However we shall continue to say "draw a DFA" to mean "specify a transition system." There are an infinite number of DFAs that can denote the same regular languageMuch like 1+1 and 3-1 both denote number 2, and there are an infinite number of arithmetic expressions that denote 2, there are an infinite number of transition systems that denote the same regular language. But usually we don't write 364-362 in order to convey "2" (e.g., you seldom order (364-362) pancakes in a restaurant.) The same way, you try to specify the simplest possible DFA to denote a particular regular language -- not an artificially bloated one.However, with numbers, we know that "2" is simpler than (364-362). With DFA, don't worry: even if you did not draw the simplest DFA, there is an automated tool that we shall give you that creates the simplest DFA. Yes, there is a unique simplest DFA called the minimal DFA for each regular language. There are an infinite number of regular languagesMuch like there are an infinite number of natural numbers, there are also an infinite number of regular languages. So let us get the two ideas of infinity introduced so far straight:* Each natural number can be written in an infinite number of variants. E.g., 1 = 2-1 = 3-2 = 4-3 = ... - Similarly, each regular language can be denoted by an infinite number of DFA * There are an infinite number of natural numbers, e.g., 0,1,2, ... - Similarly, there are an infinite number of regular languages JFLAP: a tool for interactive study of DFAI will be introducing JFLAP in class. Please take notes then. Using Jove for DFAWe shall be using Jove's markdown notation to specify DFA. ###Code from jove.DotBashers import * from jove.Def_md2mc import * from jove.Def_DFA import * ###Output _____no_output_____ ###Markdown Now we begin our work building DFA.As we go along, we will also be teaching you how to "think DFA" so that you can code them up "straight from your head" Jove's markdownJove's markdown is designed to cover four machine types:1. DFA2. NFA3. PDA4. TM There are only these four basic machine types one studies in most automata theory courses. Markdown syntax for DFAThe markdown syntax for DFA is quite simple. 
To understand what we are about to say below, kindly refer to Def_DFA.ipynb and unGiven that a DFA consists of a set of states, an initial state, a possibly empty set of final states, and a transition function, we want to have an arrangement by which we require the user to specify the least and infer everything else.Thus we settle on a notation that specifies the transition function. We will add a few details that allows us to infer the initial and final states. Specifically, to describe a DFA:* Specify a state with name beginning with I that will be the initial state (lower-case i is OK too)* If the DFA in question has an initial state that is also a final state, let the state name begin with IF (lowercase if is OK)* For final states, use a name that begins with F or f* Then just specify-away transitions. Example DFAWe will now specify the DFA whose language is the set of strings that have the same number of 01 and 10 transitions. We will specify the transitions below in markdown within triple quotes initially, and then present the same in a code cell.We decide to include $\varepsilon$ as well as single occurrences of $0$ and $1$. These strings all trivially contain an equal number of 01 and 10 transitions.Let us design this DFA bit by bit, this being our first example. We will show the final result under "putting it all together," below. Initial state and the first few movesWe begin in state IF, meaning that it is initial and final. This is how we admit $\varepsilon$ into the machine's language. Now from IF, upon 0 or upon 1, we must still go to a final state, as the machine must accept a $0$ or a $1$ because a single $0$ or $1$ has an equal number of $01$ and $10$ changes -- meaning $0$ such!We can even plot these partial DFA as we move along. Just don't run them -- that is all! ** NOTE ** : When you present your solutions, present only the final product, and not every intermediate DFA ###Code dotObj_dfa(md2mc(''' DFA IF : 0 -> F0 !! a single 0 does not change the number of 01 or 10 transitions IF : 1 -> F1 !! so, go to an accepting state ''')) ###Output _____no_output_____ ###Markdown Fully decode at every state, transitioning to appropriate states We now fill all other moves, decoding upon a 0 or a 1 at every state, keeping the overall semantics in mind. ###Code # Pick up from before, adding more lines to the DFA description dotObj_dfa(md2mc(''' DFA IF : 0 -> F0 !! a single 0 does not change the number of 01 or 10 transitions IF : 1 -> F1 !! so, go to an accepting state F0 : 0 -> F0 F1 : 1 -> F1 F0 : 1 -> S01 !! There is a 01 transition but no 10 transition. So go to non-accepting state F1 : 0 -> S10 !! ditto. It has introduced a 10 transition without a 01 transition ''')) ###Output _____no_output_____ ###Markdown Finish the DFANow that we have made incremental progress and have our thoughts flowing, let's go ahead andfinish the DFA. Plus we also name the DFA object and hold onto it, and then also plot wrt that name.See the details below. ###Code # Pick up from before, adding more lines to the DFA description EqChangeDFA = md2mc(''' DFA !!-- IF : 0 -> F0 !! a single 0 does not change the number of 01 or 10 transitions IF : 1 -> F1 !! so, go to an accepting state F0 : 0 -> F0 F1 : 1 -> F1 F0 : 1 -> S01 !! There is a 01 transition but no 10 transition. So go to non-accepting state F1 : 0 -> S10 !! ditto. It has introduced a 10 transition without a 01 transition S01: 1 -> S01 !! Remain in S01 as the 01 vs 10 balance has not been restored S10: 0 -> S10 !! 
Similar reasoning as above S01: 0 -> F0 !! Balance restored now! S10: 1 -> F1 !! Balance restored now! !!--- !! this finishes the construction, as we have accounted for all transitions ''') # Let us view the internal Python representation of # DFA as an n-tuple (Q, Sigma, Delta, q0, F) EqChangeDFA # Now let us view the DFA as a graph dotObj_dfa(EqChangeDFA) ###Output _____no_output_____ ###Markdown Running a constructed DFAWe run a DFA by generating a collection of strings and generating the status of run (feeding it to accepts_dfa)The full language of the DFA is infinitary, and so we won't present all of it (obviously) but only enough of itto believe that we have built the correct DFA. Later we can check properties and conclude that the machine hasall the required moves. ###Code from math import floor, log, pow def nthnumeric(N, Sigma={'a','b'}): """Assume Sigma is a 2-sized list/set of chars (default {'a','b'}). Produce the Nth string in numeric order, where N >= 0. Idea : Given N, get b = floor(log_2(N+1)) - need that many places; what to fill in the places is the binary code for N - (2^b - 1) with 0 as Sigma[0] and 1 as Sigma[1]. """ if (type(Sigma)==set): S = list(Sigma) else: assert(type(Sigma)==list ), "Expected to be given set/list for arg2 of nthnumeric." S = Sigma assert(len(Sigma)==2 ),"Expected to be given a Sigma of length 2." if(N==0): return '' else: width = floor(log(N+1, 2)) tofill = int(N - pow(2, width) + 1) relevant_binstr = bin(tofill)[2::] # strip the 0b # in the leading string len_to_makeup = width - len(relevant_binstr) return (S[0]*len_to_makeup + shomo(relevant_binstr, lambda x: S[1] if x=='1' else S[0])) tests = [ nthnumeric(i, ['0','1']) for i in range(19) ] for t in tests: if accepts_dfa(EqChangeDFA, t): print("This DFA accepts ", t) else: print("This DFA rejects ", t) ###Output _____no_output_____ ###Markdown Quiz-2: Jove part General Notes* **USE Jupyter notebook -- not Jupyter lab** -- for this quiz, because Jupyter widgets don't work correctly under the latter* NOTE THAT since the quizzes are given to you each week and there are roughly 150 of you, we can't provide deep comments on your work. These Jove quizzes are mainly for your "self-study". If you have questions, please let us know through Canvas or during office hours.* VERY IMPORTANT: If a cell's numbering "In [ ]" remains stuck as follows```In [*]```It means that this cell is infinitely looping. Remove the infinite loop if you can. If you can't debug the situation, contact us via Canvas, sending us the Jupyter (Jove) notebook. Goals of Quiz-2* Learn about language operations* Read lots of Jove code * RUN and then READ ALL THE CODE IN THIS NOTEBOOK)!! We won't test you on all your reading, but still, reading is to your advantage* Watch two videos, one on languages and another on DFA* Learn how to extend these Jove modules* Learn how to define and test DFA How to answer* Each section starting with "QN: " is a question. An answer is expected under that section.* Run various commands and observe the results. __Wherever I have placed the string --answer-- , an answer is expected in either a code cell or a markdown cell, placed below the --answer-- string.__ Video on Alphabet, Languages, etc.__Unfortunately the recording volume was not high. 
Please wear a head-set__ ###Code # This video corresponds to the Jupyter file # Module2_LanguageOps.ipynb that you can find under "notebook/driver" # of the Jove github from IPython.display import YouTubeVideo YouTubeVideo('TAEYvJn5eGc') ###Output _____no_output_____ ###Markdown QN: Provide a summary of the above video Under the "--answer--" line -- just 5 bullets of one sentence each. Just pick out some highlights of Jove. There is no "best answer". A reasonable effort is what we are looking for.--answer-- * Language is a set of sequences (strings) of symbols.* lphi() returns the zero language. * L0 should return the Unit Language* lstar() uses a bound to be able to compute a star language* There's a lot functions that help with powersets, lunions, intersections, ect. Code to define language operations We first define the zero or phi or empty language ###Code # The theory of languages : Primitive languages and language builders def lphi(): """In : None. Out: Zero language, i.e. set({}). """ return set({}) # {} could be dict; so we put set(..) ###Output _____no_output_____ ###Markdown Now let us define the Unit language ("1" for languages with respect to concatenation viewed as multiplication).Let us also define language concatenation.> $L1 \; L2 \;\; =\;\; \{x y \; \mid \; x\in L1 \;\wedge\; y\in L2\}$ ###Code def lunit(): """In : None. Out: {""} (a language : a set). """ return {""} # Set with epsilon def lcat(L1,L2): """In : L1 (language : a set), L2 (language : a set). Out: L1 concat L2 (language : a set). Example: L1 = {'ab', 'bc'} L2 = {'11', 'ab', '22'} lcat(L1,L2) -> {'abab', 'bc22', 'ab11', 'ab22', 'bcab', 'bc11'} """ return {x+y for x in L1 for y in L2} ###Output _____no_output_____ ###Markdown Examples of language operations ###Code L = {'a','bc'} print( "lcat(lphi(), L) = ", lcat(lphi(), L) ) print( "lcat(lunit(), L) = ", lcat(lunit(), L) ) ###Output lcat(lphi(), L) = set() lcat(lunit(), L) = {'bc', 'a'} ###Markdown Let us define another language through set comprehension, and exercise many different applications of concatenation.* Consider the language > M = $\{ 0^m 1^n \; \mid \; 0 \leq m,n \leq 3 \;\wedge\; m < n \}$ ###Code M = {"0"*m + "1"*n for m in range(3) for n in range(4) if m < n } print(M) print("lcat(L,M) = ", lcat(L,M)) print("lcat(M,lphi()) = ", lcat(M,lphi())) print("lcat(M,lunit()) = ", lcat(M,lunit())) ###Output {'0111', '1', '011', '00111', '111', '11'} lcat(L,M) = {'bc1', 'bc11', 'bc011', 'bc00111', 'bc0111', 'a11', 'bc111', 'a00111', 'a1', 'a0111', 'a111', 'a011'} lcat(M,lphi()) = set() lcat(M,lunit()) = {'0111', '1', '011', '00111', '111', '11'} ###Markdown QN: Show that you understand how lcat works--answer--* lcat(M,lunit()) == {'1', '11', '00111', '111', '0111', '011'} because: lcat is finding something similiar to the cartesian product between M and the unit language and since any language concatened with the unit language just returns the first language, then we just get back M. With concatenation and Unit under our belt, we can define exponentiation recursively. Exponentiation is repeated multiplication (which for us is concatenation).> $L^n = L L^{n-1}$> $L^0 = Unit$We must have $L^0 = lunit()$; that is the only logical choice. (If you defined $L^0 = lphi()$, bad things will happen! Know what those bad things are!!)The code below simulates the aforesaid recursion. ###Code def lexp(L,n): """In : L (language : a set), n (exponent : a nat). Out: L^n (language : a set). 
Example: L = {'ab', 'bc'} n = 2 lexp(A,2) -> {'abab', 'bcab', 'bcbc', 'abbc'} """ return lunit() if n == 0 else lcat(L, lexp(L, n-1)) L = {'a','bc'} M = {"0"*m + "1"*n for m in range(3) for n in range(4) if m < n } print('M = ', M) print('lexp(M,2) = ') sorted(lexp(M,2), key= lambda l: len(l)) L = {'a','bc'} M = {"0"*m + "1"*n for m in range(3) for n in range(4) if m < n } sorted(lexp(lcat(L,M),1), key= lambda l: len(l)) ###Output _____no_output_____ ###Markdown With lexp under our belt, we can define lunion and lstar. We will define "star up to n" and then set n to infinity.> $L^{*n} = L^n \; \cup \; L^{*(n-1)}$> $L^{*0} = Unit$And thus the classical $L^* = L^{*n}\;\; {\rm for}\;\; n=\infty$, which we won't bother to "run" in Python :-). We will only run $L^{*n}$ in Python.We also take care to test that lstar works correctly for lphi and Unit. ###Code def lunion(L1,L2): """In : L1 (language : a set), L2 (language : a set). Out: L1 union L2 (language : a set). """ return L1 | L2 def lstar(L,n): """In : L (language : a set), n (bound for lstar : a nat). Out: L*_n (language : a set) Example: L = {'ab','bc'} n = 2 lstar(L,2) -> {'abab', 'bcbc', 'ab', 'abbc', '', 'bc', 'bcab'} """ return lunit() if n == 0 else lunion(lexp(L,n), lstar(L,n-1)) ###Output _____no_output_____ ###Markdown QN: Recursive DefinitionsIn the code so far, we have lstar recursively defined in terms of lexp, and lexp defined recursively in terms of lcat. What are the basis cases in these recursive definitions?(under the "--answer--" line -- just 2 bullets of one sentence each--answer-- * Basis case for lstar: We check 'if n == 0: return lunit()' to see whether we concatened the n langugages and then we return unit language as the last one* Basis case for lexp: We check 'if n == 0: return lunit()' to see whether we concatened the langugage n times with itself and then we return unit language ###Code L1 = {'a','bc'} lstar(L1,2) L2 = {'ab','bc'} lstar(L2,2) L2 = {'ab','bc'} lstar(L2,3) ###Output _____no_output_____ ###Markdown RUN ALL CODE IN THIS NOTEBOOK USING JUPYTER NOTEBOOKJupyter lab does not like Jupyter widgets. So even if the code ran so far under jupyter lab, switch to Jupyter notebooks and rerun. Interactive depiction of star using widgetsRun the code below and show that you can make menu selections to pull-down select L1 and L2. ###Code import ipywidgets as wdg L1 = {'a','bc'} L2 = {'ab','bc'} M = {'011', '111', '11', '0111', '00111', '1'} wdg.interact(lstar, L={'L1': L1, 'L2':L2, 'M': M, 'lphi': lphi(), 'lunit' : lunit()}, n=(0,7)) import ipywidgets as wdg L1 = {'a','bc'} L2 = {'ab','bc'} # L3 = ...define L3 here... M = {'011', '111', '11', '0111', '00111', '1'} wdg.interact(lstar, L={ # Add the case for'L3': L3, ..here.. 'L1': L1, 'L2':L2, 'M': M, 'lphi': lphi(), 'lunit' : lunit()}, n=(0,7)) ###Output _____no_output_____ ###Markdown QN: The star of lunit and lphiArgue that the code for the star of lunit() and lphi() is correct. Write one sentence answer.--answer--* lunit()'s star appears correct because: This is the same as multiplying 1 by 1 an infinite amount of times, you will always get 1 back. * lphi()'s star appears correct because: By convention we decided that L0 will return the unit language and since we are computing the star of a language, we are simple computuing the union of an infinite unit languages. 
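Both claims can be checked mechanically with the definitions already loaded above; the illustrative cell below asserts that the bounded star of the unit language and of the zero language stays equal to the unit language for several bounds. ###Code
# Sanity check of the two answers above, using lphi, lunit and lstar defined earlier
for n in range(5):
    assert lstar(lunit(), n) == {''}, "unit* should remain the unit language"
    assert lstar(lphi(), n) == {''}, "phi* should collapse to the unit language"
print("lstar(lunit(), n) == lstar(lphi(), n) == {''} for n = 0..4") ###Output _____no_output_____ ###Markdown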
QN: Show that you can extend the Jove codeYou are required to modify the code in Section ``*Interactive depiction of star using widgets*''Copy the entire code to the cell below ("copy here" below) and make these changes:1) In the code```wdg.interact```add another menu item by adding```'L3': L3 ```Make sure you have defined ```L3 = {'0','1','2'}```right underneath L2's definition.Then show by running the cell below and show that you can obtain the lstar of L3 at size 6.That is, run ```lstar(L3, 6)```by using the menu selection.--answer--- Your answer will be in the next code cell, below. QN: Copy Here ###Code # YOUR CODE COPIED FROM ABOVE AND MODIFLED! # When this cell is run, you must be able to select L3 and produce lstar(L3, 6) # Your code copy-pasted and modified should be below this line. #--answer-- import ipywidgets as wdg L1 = {'a','bc'} L2 = {'ab','bc'} L3 = {'0','1','2'} M = {'011', '111', '11', '0111', '00111', '1'} wdg.interact(lstar, L={ 'L3': L3, 'L1': L1, 'L2':L2, 'M': M, 'lphi': lphi(), 'lunit' : lunit()}, n=(6,6)) ###Output _____no_output_____ ###Markdown QN: My L3's star appears correct--answer--Write one sentence here saying why your definition of L3's star appears correct* My definition of L3's star appears correct because: It appears correct because it is bounded by length 6 and the words contain only symbols from the alphabet L3. Reversal and homomorphism now ###Code # In Python, there isn't direct support for reversing a string. # The backward selection method implemented by S[::-1] is what # many recommend. This leaves the start and stride empty, and # specifies the direction to be going backwards. # Another method is "".join(reversed(s)) to reverse s def srev(S): """In : S (string) Out: reverse of S (string) Example: srev('ab') -> 'ba' """ return S[::-1] def lrev(L): """In : L (language : a set) Out: reverse of L (language : a set) Example: lrev({'ab', 'bc'}) -> {'cb', 'ba'} """ return set(map(lambda x: srev(x), L)) def shomo(S,f): """In : S (string) f (fun ction from char to char) Out: String homomorphism of S wrt f. Example: S = "abcd" f = lambda x: chr( (ord(x)+1) % 256 ) shomo("abcd",f) -> 'bcde' """ return "".join(map(f,S)) def lhomo(L,f): """In : L (language : set of strings) f (function from char to char) Out: Lang. homomorphism of L wrt f (language : set of str) Example: L = {"Hello there", "a", "A"} f = rot13 = lambda x: chr( (ord(x)+13) % 256 ) lhomo(L, rot13) -> {'N', 'Uryy|-\x81ur\x7fr', 'n'} """ return set(map(lambda S: shomo(S,f), L)) L={'ab', '007'} # modulo-rotate all chars by one. rot1 = lambda x: chr( (ord(x)+1) % 256 ) # Don't be baffled if the sets print in a different order! # Sets don't have a required positional presentation order # Watch for the CONTENTS of the set reversing !! print('lrev(L) = ', lrev(L)) print('lhomo(L, rot1) = ', lhomo(L, rot1)) print('lrev(lhomo(L), rot1) = ', lrev(lhomo(L, rot1))) ###Output lrev(L) = {'ba', '700'} lhomo(L, rot1) = {'bc', '118'} lrev(lhomo(L), rot1) = {'811', 'cb'} ###Markdown QN: The answer is correct* Argue why the following assertion is true: ```lrev(lhomo(L), rot1) == {'811', 'cb'}```* This method is applying a homomorphism that in this case is just increasing the ascii value of the letter by one for every letter in every word of the language and in the end is reversing all the words, so if we start with 'ab' we get 'bc' then 'cb' while with '007' we get '118' then '811'. Let us now introduce powersets We now define the powerset of a set S. 
We work with lists, as sets cannot contain other sets (not hashable, etc). But barring all that, here is the recursive definition being used.> Let $PowSminusX$ = $powset(S \setminus x)$> Then, given $x \in S$, we have $powset(S)$ = $PowSminusX \cup$ { $y\cup x$ $\mid$ $y\in PowSminusX$ } That is,* Take out some $x\in S$* Recursively compute $PowSminusX$* Now, $powset(S)$ has all the sets in $PowSminusX$ plus all the sets in $PowSminusX$ with $x$ added back, as well.Here is that code now. __Below, in a new markdown cell, write a clear description in about 3 sentences of how the mathematical definition above is captured in the code below. Ideal answer: Call out the above three bullets and under each of theabove bullets, write the code line that realizes these bullets.__ * Take out some $x\in S$ pow_rest0 = powset(L[1:]) * Recursively compute $PowSminusX$ -pow_rest1 = list(map(lambda Ls: [L[0]] + Ls, pow_rest0))* Now, $powset(S)$ has all the sets in $PowSminusX$ plus all the sets in $PowSminusX$ with $x$ added back, as well. return(pow_rest0 + pow_rest1) ###Code def powset(S): """In : S (set) Out: List of lists representing powerset. Since sets/lists are unhashable, we convert the set to a list,perform the powerset operations, leaving the result as a list (can't convert back to a set). Example: S = {'ab', 'bc'} powset(S) -> [['ab', 'bc'], ['bc'], ['ab'], []] """ L=list(S) if L==[]: return([[]]) else: pow_rest0 = powset(L[1:]) pow_rest1 = list(map(lambda Ls: [L[0]] + Ls, pow_rest0)) return(pow_rest0 + pow_rest1) ###Output _____no_output_____ ###Markdown QN: Testing powsetBelow, explain the results produced briefly.--answer--* powset of {a,b,c} appears to be correct, because: It contains all the possible subsets of the {a,b,c} set. ###Code powset({'a','b','c'}) ###Output _____no_output_____ ###Markdown Finally, we have a whole list of familiar language-theoretic operations:* lunion - language union* lint - language intersection* lsymdiff - language symmetric difference* lminus - language subtraction* lissubset - language subset test* lissuperset - language superset test* lcomplem - language complement with respect to "star upto m" of the alphabet (not the full alphabet star, mind you)* product - cartesian productWe do not provide too many tests for these rather familiar functions. But please make sure you understand language complements well! ###Code # Define lunion (as before) def lunion(L1,L2): """In : L1 (language : set of strings) L2 (language : set of strings) Out: L1 union L2 (sets of strings) """ return L1 | L2 def lint(L1,L2): """In : L1 (language : set of strings) L2 (language : set of strings) Out: L1 intersection L2 (sets of strings) """ return L1 & L2 def lsymdiff(L1,L2): """In : L1 (language : set of strings) L2 (language : set of strings) Out: (L1 \ L2) union (L2 \ L1) (sets of strings) Example: lsymdiff({'ab', 'bc'}, {'11', 'ab', '22'}) -> {'11', '22', 'bc'} """ return L1 ^ L2 def lminus(L1,L2): """Language subtraction of two languages (sets of strings) Can do it as L1.difference(L2) also. 
""" return L1 - L2 def lissubset(L1,L2): """In : L1 (language : set of strings) L2 (language : set of strings) Out: L1 is subset or equal to L2 (True/False) """ return L1 <= L2 def lissuperset(L1,L2): """In : L1 (language : set of strings) L2 (language : set of strings) Out: L1 is superset or equal to L2 (True/False) """ return L1 >= L2 def lcomplem(L,sigma,n): """In : L (language : set of strings) sigma (alphabet : set of strings) n (finite limit for lstar : int) Out : sigma*_n - L (language : set of strings) Example: L = {'0', '10', '010'} sigma = {'0', '1'} n = 3 lcomplem(L4,{'0','1'}, 3) -> {'', '000', '101', '011', '00', '1', '001', '110', '111', '100', '01', '11'} """ return lstar(sigma,n) - L def product(S1,S2): """In : S1 (set) S2 (set) Out: Cartesian product of S1 and S2 (set of pairs) """ return { (x,y) for x in S1 for y in S2 } #--end L1 = {'0101'} L2 = lstar({'0','1'}, 2) # Python variable L2L1 denotes concat of L2 and L1 L2L1 = lcat(L2,L1) L2L1 L3 = lcat(L1, lunion(lunit(), L2L1)) L3 ###Output _____no_output_____ ###Markdown QN: Show you can define the symmetric difference of two sets in Jove without using "^" The code for lsymdiff is written above using Python's```^```operator. Show that you don''t need to use this operator (you can define it using lminus, lunion, etc.)Call this new function new_lsymmdiff.Test it as follows:```new_lsymmdiff(lstar({'0','1'}, 2), lstar({'0','1'}, 3))``` Your answer code is in the next code cell below--answer--- ###Code # Write your new_lsymmdiff code here and test it in this very cell on the above test # ...code... def new_lsymmdiff(L1, L2): return lunion(lminus(L1, L2), lminus(L2,L1)) new_lsymmdiff(lstar({'0','1'}, 2), lstar({'0','1'}, 3)) lsymdiff(lstar({'0','1'}, 2), lstar({'0','1'}, 3)) ###Output _____no_output_____ ###Markdown Numeric Order ###Code from math import floor, log, pow def nthnumeric(N, Sigma={'a','b'}): """Assume Sigma is a 2-sized list/set of chars (default {'a','b'}). Produce the Nth string in numeric order, where N >= 0. Idea : Given N, get b = floor(log_2(N+1)) - need that many places; what to fill in the places is the binary code for N - (2^b - 1) with 0 as Sigma[0] and 1 as Sigma[1]. """ if (type(Sigma)==set): S = list(Sigma) else: assert(type(Sigma)==list ), "Expected to be given set/list for arg2 of nthnumeric." S = Sigma assert(len(Sigma)==2 ),"Expected to be given a Sigma of length 2." if(N==0): return '' else: width = floor(log(N+1, 2)) tofill = int(N - pow(2, width) + 1) relevant_binstr = bin(tofill)[2::] # strip the 0b # in the leading string len_to_makeup = width - len(relevant_binstr) return (S[0]*len_to_makeup + shomo(relevant_binstr, lambda x: S[1] if x=='1' else S[0])) ###Output _____no_output_____ ###Markdown Testing Numeric Order ###Code nthnumeric(7,['0','1']) ###Output _____no_output_____ ###Markdown QN: Justify that the above answer is correct--answer--* nthnumeric(7, ['0', '1']) produces '000' because: If we enumerate based by length we have: '', 0, 1, 00, 01, 10, 11, 000,..., so 000 is the element in index 7. ###Code # This is an excellent recipe for generating test inputs to machines tests = [ nthnumeric(i, ['0','1']) for i in range(18) ] for inp in tests: print("Test input =", inp) # Below, explain the results produced briefly. 
#--answer-- ###Output Test input = Test input = 0 Test input = 1 Test input = 00 Test input = 01 Test input = 10 Test input = 11 Test input = 000 Test input = 001 Test input = 010 Test input = 011 Test input = 100 Test input = 101 Test input = 110 Test input = 111 Test input = 0000 Test input = 0001 Test input = 0010 ###Markdown Introducing DFAThe video below corresponds to Drive_DFA_Unit1.ipynb that is foundin the "notebooks/driver" link of the Jove github ###Code # This Youtube video walks through this notebook # Watch and enjoy. No specific work yet. # You can put in 2x speed and quickly watch, slowing down when # interesting parts come! from IPython.display import YouTubeVideo YouTubeVideo('Bdr926TeQyQ') ###Output _____no_output_____ ###Markdown Basics of DFAThis unit is going to introduce you to the basics of Deterministic Finite Automata and Regular Languages.We have recorded a Youtube video that will explain this notebook plus a few related things! This will serve as material for Lecture 4 and perhaps also later lectures. Regular languagesRegular languages are one family (or set) of languages. (Both words "family" and "set" mean the same.)A regular language is specified by drawing a DFA. Once you finish drawing a DFA, you would have defined a regular language. (We will soon tell you why you might want to take the trouble of drawing DFA and obtain regular languages. For now, we will finish defining terms.) Ultimately the aim is to not produce drawings. We aim to define a simple type of machine that represents goto based programs. we shall define the notion of a transition system and introduce a simple text-based markdown language that helps specify transition systems. Once the transition system is defined, a drawing can be automatically produced using utilities we provide. However we shall continue to say "draw a DFA" to mean "specify a transition system." There are an infinite number of DFAs that can denote the same regular languageMuch like 1+1 and 3-1 both denote number 2, and there are an infinite number of arithmetic expressions that denote 2, there are an infinite number of transition systems that denote the same regular language. But usually we don't write 364-362 in order to convey "2" (e.g., you seldom order (364-362) pancakes in a restaurant.) The same way, you try to specify the simplest possible DFA to denote a particular regular language -- not an artificially bloated one.However, with numbers, we know that "2" is simpler than (364-362). With DFA, don't worry: even if you did not draw the simplest DFA, there is an automated tool that we shall give you that creates the simplest DFA. Yes, there is a unique simplest DFA called the minimal DFA for each regular language. There are an infinite number of regular languagesMuch like there are an infinite number of natural numbers, there are also an infinite number of regular languages. So let us get the two ideas of infinity introduced so far straight:* Each natural number can be written in an infinite number of variants. E.g., 1 = 2-1 = 3-2 = 4-3 = ... - Similarly, each regular language can be denoted by an infinite number of DFA * There are an infinite number of natural numbers, e.g., 0,1,2, ... - Similarly, there are an infinite number of regular languages JFLAP: a tool for interactive study of DFAI will be introducing JFLAP in class. Please take notes then. Using Jove for DFAWe shall be using Jove's markdown notation to specify DFA. 
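As a small preview of that notation (using the state-naming conventions explained below, where a name beginning with I marks the initial state and a name beginning with F marks a final state), a two-state DFA over {0, 1} that accepts exactly the strings ending in 1 could be written as ```I : 0 -> I I : 1 -> F F : 1 -> F F : 0 -> I``` — the larger worked example that follows builds a DFA for equal numbers of 01 and 10 changes using the same syntax.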
###Code from jove.DotBashers import * from jove.Def_md2mc import * from jove.Def_DFA import * ###Output You may use any of these help commands: help(ResetStNum) help(NxtStateStr) You may use any of these help commands: help(md2mc) .. and if you want to dig more, then .. help(default_line_attr) help(length_ok_input_items) help(union_line_attr_list_fld) help(extend_rsltdict) help(form_delta) help(get_machine_components) You may use any of these help commands: help(mkp_dfa) help(mk_dfa) help(totalize_dfa) help(addtosigma_delta) help(step_dfa) help(run_dfa) help(accepts_dfa) help(comp_dfa) help(union_dfa) help(intersect_dfa) help(pruneUnreach) help(iso_dfa) help(langeq_dfa) help(same_status) help(h_langeq_dfa) help(fixptDist) help(min_dfa) help(pairFR) help(state_combos) help(sepFinNonFin) help(bash_eql_classes) help(listminus) help(bash_1) help(mk_rep_eqc) help(F_of) help(rep_of_s) help(q0_of) help(Delta_of) help(mk_state_eqc_name) ###Markdown Now we begin our work building DFA.As we go along, we will also be teaching you how to "think DFA" so that you can code them up "straight from your head" Jove's markdownJove's markdown is designed to cover four machine types:1. DFA2. NFA3. PDA4. TM There are only these four basic machine types one studies in most automata theory courses. Markdown syntax for DFAThe markdown syntax for DFA is quite simple. To understand what we are about to say below, kindly refer to Def_DFA.ipynb and unGiven that a DFA consists of a set of states, an initial state, a possibly empty set of final states, and a transition function, we want to have an arrangement by which we require the user to specify the least and infer everything else.Thus we settle on a notation that specifies the transition function. We will add a few details that allows us to infer the initial and final states. Specifically, to describe a DFA:* Specify a state with name beginning with I that will be the initial state (lower-case i is OK too)* If the DFA in question has an initial state that is also a final state, let the state name begin with IF (lowercase if is OK)* For final states, use a name that begins with F or f* Then just specify-away transitions. Example DFAWe will now specify the DFA whose language is the set of strings that have the same number of 01 and 10 transitions. We will specify the transitions below in markdown within triple quotes initially, and then present the same in a code cell.We decide to include $\varepsilon$ as well as single occurrences of $0$ and $1$. These strings all trivially contain an equal number of 01 and 10 transitions.Let us design this DFA bit by bit, this being our first example. We will show the final result under "putting it all together," below. Initial state and the first few movesWe begin in state IF, meaning that it is initial and final. This is how we admit $\varepsilon$ into the machine's language. Now from IF, upon 0 or upon 1, we must still go to a final state, as the machine must accept a $0$ or a $1$ because a single $0$ or $1$ has an equal number of $01$ and $10$ changes -- meaning $0$ such!We can even plot these partial DFA as we move along. Just don't run them -- that is all! ** NOTE ** : When you present your solutions, present only the final product, and not every intermediate DFA ###Code dotObj_dfa(md2mc(''' DFA IF : 0 -> F0 !! a single 0 does not change the number of 01 or 10 transitions IF : 1 -> F1 !! 
so, go to an accepting state ''')) ###Output Generating LALR tables ###Markdown Fully decode at every state, transitioning to appropriate states We now fill all other moves, decoding upon a 0 or a 1 at every state, keeping the overall semantics in mind. ###Code # Pick up from before, adding more lines to the DFA description dotObj_dfa(md2mc(''' DFA IF : 0 -> F0 !! a single 0 does not change the number of 01 or 10 transitions IF : 1 -> F1 !! so, go to an accepting state F0 : 0 -> F0 F1 : 1 -> F1 F0 : 1 -> S01 !! There is a 01 transition but no 10 transition. So go to non-accepting state F1 : 0 -> S10 !! ditto. It has introduced a 10 transition without a 01 transition ''')) ###Output _____no_output_____ ###Markdown Finish the DFANow that we have made incremental progress and have our thoughts flowing, let's go ahead andfinish the DFA. Plus we also name the DFA object and hold onto it, and then also plot wrt that name.See the details below. ###Code # Pick up from before, adding more lines to the DFA description EqChangeDFA = md2mc(''' DFA !!-- IF : 0 -> F0 !! a single 0 does not change the number of 01 or 10 transitions IF : 1 -> F1 !! so, go to an accepting state F0 : 0 -> F0 F1 : 1 -> F1 F0 : 1 -> S01 !! There is a 01 transition but no 10 transition. So go to non-accepting state F1 : 0 -> S10 !! ditto. It has introduced a 10 transition without a 01 transition S01: 1 -> S01 !! Remain in S01 as the 01 vs 10 balance has not been restored S10: 0 -> S10 !! Similar reasoning as above S01: 0 -> F0 !! Balance restored now! S10: 1 -> F1 !! Balance restored now! !!--- !! this finishes the construction, as we have accounted for all transitions ''') # Let us view the internal Python representation of # DFA as an n-tuple (Q, Sigma, Delta, q0, F) EqChangeDFA # Now let us view the DFA as a graph dotObj_dfa(EqChangeDFA) ###Output _____no_output_____ ###Markdown Running a constructed DFAWe run a DFA by generating a collection of strings and generating the status of run (feeding it to accepts_dfa)The full language of the DFA is infinitary, and so we won't present all of it (obviously) but only enough of itto believe that we have built the correct DFA. Later we can check properties and conclude that the machine hasall the required moves. ###Code from math import floor, log, pow def nthnumeric(N, Sigma={'a','b'}): """Assume Sigma is a 2-sized list/set of chars (default {'a','b'}). Produce the Nth string in numeric order, where N >= 0. Idea : Given N, get b = floor(log_2(N+1)) - need that many places; what to fill in the places is the binary code for N - (2^b - 1) with 0 as Sigma[0] and 1 as Sigma[1]. """ if (type(Sigma)==set): S = list(Sigma) else: assert(type(Sigma)==list ), "Expected to be given set/list for arg2 of nthnumeric." S = Sigma assert(len(Sigma)==2 ),"Expected to be given a Sigma of length 2." 
if(N==0): return '' else: width = floor(log(N+1, 2)) tofill = int(N - pow(2, width) + 1) relevant_binstr = bin(tofill)[2::] # strip the 0b # in the leading string len_to_makeup = width - len(relevant_binstr) return (S[0]*len_to_makeup + shomo(relevant_binstr, lambda x: S[1] if x=='1' else S[0])) tests = [ nthnumeric(i, ['0','1']) for i in range(19) ] for t in tests: if accepts_dfa(EqChangeDFA, t): print("This DFA accepts ", t) else: print("This DFA rejects ", t) ###Output This DFA accepts This DFA accepts 0 This DFA accepts 1 This DFA accepts 00 This DFA rejects 01 This DFA rejects 10 This DFA accepts 11 This DFA accepts 000 This DFA rejects 001 This DFA accepts 010 This DFA rejects 011 This DFA rejects 100 This DFA accepts 101 This DFA rejects 110 This DFA accepts 111 This DFA accepts 0000 This DFA rejects 0001 This DFA accepts 0010 This DFA rejects 0011
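###Markdown
The text above notes that we can later "check properties" of the machine rather than only eyeballing a few runs. As a small additional sketch (the helper `equal_01_10` below is not part of Jove; it simply restates the intended semantics in plain Python), we can cross-check the DFA's verdict on every generated test string against a direct count of `01` and `10` substrings.

###Code
# Illustrative cross-check (assumes EqChangeDFA's language really is
# "equal number of 01 and 10 substrings", as stated above).
def equal_01_10(s):
    # slide a window of width 2 over s and count the two substring patterns
    n01 = sum(1 for i in range(len(s) - 1) if s[i:i+2] == "01")
    n10 = sum(1 for i in range(len(s) - 1) if s[i:i+2] == "10")
    return n01 == n10

for t in tests:
    assert accepts_dfa(EqChangeDFA, t) == equal_01_10(t), t
print("DFA verdicts agree with the direct 01/10 count on all", len(tests), "test strings")

###Output
_____no_output_____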
Examples/CyclicVoltammetry/CyclicVoltammetry.ipynb
###Markdown Cyclic VoltammetryThis example shows how the Thales CV software package can be controlled with Python.The [CV manual](http://zahner.de/pdf/CV.pdf) provides further explanation of this method and explains the settings. ###Code import sys from thales_remote.connection import ThalesRemoteConnection from thales_remote.script_wrapper import ThalesRemoteScriptWrapper from jupyter_utils import executionInNotebook, notebookCodeToPython ###Output _____no_output_____ ###Markdown Establish and initialize The Term software must be started before the script is executed to be able to connect. ###Code if __name__ == "__main__": zenniumConnection = ThalesRemoteConnection() connectionSuccessful = zenniumConnection.connectToTerm("localhost", "ScriptRemote") if connectionSuccessful: print("connection successfull") else: print("connection not possible") sys.exit() zahnerZennium = ThalesRemoteScriptWrapper(zenniumConnection) zahnerZennium.forceThalesIntoRemoteScript() ###Output connection successfull ###Markdown CV output file setupThe first step is to set where the measurement data is to be saved. The path must exist otherwise you will get an error. ###Code zahnerZennium.setCVOutputPath(r"C:\THALES\temp\cv") ###Output _____no_output_____ ###Markdown Then it is set that the measurements should be numbered and the numbering starts with 1. The basic file name "cv_series" is then extended with a number. ###Code zahnerZennium.setCVOutputFileName("cv_series") zahnerZennium.setCVNaming("counter") zahnerZennium.setCVCounter(1) ###Output _____no_output_____ ###Markdown CV measurement parametersIn the next step, the actual parameters for the measurement method are set. Alternatively, a rule file could be loaded which sets the parameters for the measurement.The methods are named after the parameters they set. Additional information can be found in the [API documentation](http://zahner.de/documentation/thales_remote/script_wrapper.html). ###Code zahnerZennium.setCVStartPotential(1) zahnerZennium.setCVUpperReversingPotential(2) zahnerZennium.setCVLowerReversingPotential(0) zahnerZennium.setCVEndPotential(1) zahnerZennium.setCVStartHoldTime(2) zahnerZennium.setCVEndHoldTime(2) zahnerZennium.setCVCycles(1.5) zahnerZennium.setCVSamplesPerCycle(400) zahnerZennium.setCVScanRate(0.5) zahnerZennium.setCVMaximumCurrent(0.03) zahnerZennium.setCVMinimumCurrent(-0.03) zahnerZennium.setCVOhmicDrop(0) zahnerZennium.disableCVAutoRestartAtCurrentOverflow() zahnerZennium.disableCVAutoRestartAtCurrentUnderflow() zahnerZennium.disableCVAnalogFunctionGenerator() ###Output _____no_output_____ ###Markdown Execute the measurementAfter checking whether the parameters have been set correctly, the measurement is started. ###Code zahnerZennium.checkCVSetup() print(zahnerZennium.readCVSetup()) zahnerZennium.measureCV() ###Output OK;CVSETUP;CV_Pstart=1.0000e+00;CV_Tstart=2;CV_Pupper=2.0000e+00;CV_Plower=0.0000e+00;CV_Pend=1.0000e+00;CV_Tend=2;CV_Srate=5.0000e-01;CV_Periods=2;CV_PpPer=400;CV_Imi=-3.0000e-02;CV_Ima=3.0000e-02;CV_Odrop=0.0000e+00;CV_Sstart=0.0000e+00;CV_Send=2.0000e+01;CV_AutoReStart=0;CV_AutoScale=0;CV_AFGena=0;ENDSETUP ###Markdown Changing the potentiostatBy default the main potentiostat with the number 0 is selected. 
1 corresponds to the external potentiostat connected to EPC channel 1.Zahner offers various [External Potentiostats](http://zahner.de/products/external-potentiostats.html) or [Electronic Loads](http://zahner.de/products/electronic-loads.html) with higher power, voltage and current which can be controlled like the internal potentiostat. ###Code zahnerZennium.selectPotentiostat(1) ###Output _____no_output_____ ###Markdown Configuration of the next output dataFor each of the following CV measurements an individual filename is generated, which includes the scan rate of the measurement. ###Code zahnerZennium.setCVNaming("individual") zahnerZennium.setCVOutputPath(r"C:\THALES\temp\cv") ScanRatesForMeasurement = [0.1, 0.2, 0.5, 1.0] ###Output _____no_output_____ ###Markdown After configuration, a CV measurement is performed for each scan rate in the **ScanRatesForMeasurement** array. ###Code for scanRate in ScanRatesForMeasurement: zahnerZennium.setCVOutputFileName("cv_scanrate_{:d}mVs".format(int(scanRate * 1000))) zahnerZennium.setCVScanRate(scanRate) zahnerZennium.checkCVSetup() print(zahnerZennium.readCVSetup()) zahnerZennium.measureCV() ###Output OK;CVSETUP;CV_Pstart=1.0000e+00;CV_Tstart=2;CV_Pupper=2.0000e+00;CV_Plower=0.0000e+00;CV_Pend=1.0000e+00;CV_Tend=2;CV_Srate=1.0000e-01;CV_Periods=2;CV_PpPer=400;CV_Imi=-3.0000e-02;CV_Ima=3.0000e-02;CV_Odrop=0.0000e+00;CV_Sstart=0.0000e+00;CV_Send=8.4000e+01;CV_AutoReStart=0;CV_AutoScale=0;CV_AFGena=0;ENDSETUP OK;CVSETUP;CV_Pstart=1.0000e+00;CV_Tstart=2;CV_Pupper=2.0000e+00;CV_Plower=0.0000e+00;CV_Pend=1.0000e+00;CV_Tend=2;CV_Srate=2.0000e-01;CV_Periods=2;CV_PpPer=400;CV_Imi=-3.0000e-02;CV_Ima=3.0000e-02;CV_Odrop=0.0000e+00;CV_Sstart=0.0000e+00;CV_Send=4.4000e+01;CV_AutoReStart=0;CV_AutoScale=0;CV_AFGena=0;ENDSETUP OK;CVSETUP;CV_Pstart=1.0000e+00;CV_Tstart=2;CV_Pupper=2.0000e+00;CV_Plower=0.0000e+00;CV_Pend=1.0000e+00;CV_Tend=2;CV_Srate=5.0000e-01;CV_Periods=2;CV_PpPer=400;CV_Imi=-3.0000e-02;CV_Ima=3.0000e-02;CV_Odrop=0.0000e+00;CV_Sstart=0.0000e+00;CV_Send=2.0000e+01;CV_AutoReStart=0;CV_AutoScale=0;CV_AFGena=0;ENDSETUP OK;CVSETUP;CV_Pstart=1.0000e+00;CV_Tstart=2;CV_Pupper=2.0000e+00;CV_Plower=0.0000e+00;CV_Pend=1.0000e+00;CV_Tend=2;CV_Srate=1.0000e+00;CV_Periods=2;CV_PpPer=400;CV_Imi=-3.0000e-02;CV_Ima=3.0000e-02;CV_Odrop=0.0000e+00;CV_Sstart=0.0000e+00;CV_Send=1.2000e+01;CV_AutoReStart=0;CV_AutoScale=0;CV_AFGena=0;ENDSETUP ###Markdown DisconnectAfter the measurements are completed, the device switches back to the main potentiostat and the connection to the term is disconnected. ###Code zahnerZennium.selectPotentiostat(0) zenniumConnection.disconnectFromTerm() print("finish") ###Output finish ###Markdown Deployment of the source code**The following instruction is not needed by the user.**It automatically extracts the pure python code from the jupyter notebook to provide it to the user. Thus the user does not need jupyter itself and does not have to copy the code manually.The source code is saved in a .py file with the same name as the notebook. ###Code if executionInNotebook() == True: notebookCodeToPython("CyclicVoltammetry.ipynb") ###Output _____no_output_____
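###Markdown
The `readCVSetup()` calls above return a single semicolon-separated string such as `OK;CVSETUP;CV_Pstart=1.0000e+00;...;ENDSETUP`. If you want to log or sanity-check individual parameters, a small parser like the sketch below can help. Note that `parse_cv_setup` is only an illustration written for this walkthrough and is not part of the `thales_remote` package.

###Code
# Illustrative helper (not part of thales_remote): turn the CV setup string
# returned by readCVSetup() into a dictionary of parameter names and values.
def parse_cv_setup(setup_string):
    params = {}
    for field in setup_string.split(";"):
        if "=" in field:
            key, value = field.split("=", 1)
            try:
                params[key] = float(value)   # numeric fields, e.g. CV_Srate
            except ValueError:
                params[key] = value          # keep non-numeric fields as strings
    return params

# Example usage (after the connection has been established):
# print(parse_cv_setup(zahnerZennium.readCVSetup()))

###Output
_____no_output_____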
notebooks/Week_2_em_assignment.ipynb
###Markdown First things firstClick **File -> Save a copy in Drive** and click **Open in new tab** in the pop-up window to save your progress in Google Drive. Expectation-maximization algorithm In this assignment, we will derive and implement formulas for Gaussian Mixture Model — one of the most commonly used methods for performing soft clustering of the data. SetupLoading auxiliary files and importing the necessary libraries. ###Code try: import google.colab IN_COLAB = True except: IN_COLAB = False if IN_COLAB: print("Downloading Colab files") ! shred -u setup_google_colab.py ! wget https://raw.githubusercontent.com/hse-aml/bayesian-methods-for-ml/master/setup_google_colab.py -O setup_google_colab.py import setup_google_colab setup_google_colab.load_data_week2() import numpy as np from numpy.linalg import slogdet, det, solve import matplotlib.pyplot as plt import time from sklearn.datasets import load_digits from w2_grader import EMGrader %matplotlib inline ###Output _____no_output_____ ###Markdown GradingWe will create a grader instance below and use it to collect your answers. Note that these outputs will be stored locally inside grader and will be uploaded to the platform only after running submitting function in the last part of this assignment. If you want to make a partial submission, you can run that cell anytime you want. ###Code grader = EMGrader() ###Output _____no_output_____ ###Markdown Implementing EM for GMM For debugging, we will use samples from a Gaussian mixture model with unknown mean, variance, and priors. We also added initial values of parameters for grading purposes. ###Code samples = np.load('samples.npz') X = samples['data'] pi0 = samples['pi0'] mu0 = samples['mu0'] sigma0 = samples['sigma0'] plt.scatter(X[:, 0], X[:, 1], c='grey', s=30) plt.axis('equal') plt.show() ###Output _____no_output_____ ###Markdown Reminder Remember, that EM algorithm is a coordinate descent optimization of variational lower bound $\mathcal{L}(\theta, q) = \int q(T) \log\frac{p(X, T|\theta)}{q(T)}dT\to \max$.E-step:$\mathcal{L}(\theta, q) \to \max\limits_{q} \Leftrightarrow \mathcal{KL} [q(T) \,\|\, p(T|X, \theta)] \to \min \limits_{q\in Q} \Rightarrow q(T) = p(T|X, \theta)$M-step: $\mathcal{L}(\theta, q) \to \max\limits_{\theta} \Leftrightarrow \mathbb{E}_{q(T)}\log p(X,T | \theta) \to \max\limits_{\theta}$For GMM, $\theta$ is a set of parameters that consists of mean vectors $\mu_c$, covariance matrices $\Sigma_c$ and priors $\pi_c$ for each component.Latent variables $T$ are indices of components to which each data point is assigned, i.e. $t_i$ is the cluster index for object $x_i$.The joint distribution can be written as follows: $\log p(T, X \mid \theta) = \sum\limits_{i=1}^N \log p(t_i, x_i \mid \theta) = \sum\limits_{i=1}^N \sum\limits_{c=1}^C q(t_i = c) \log \left (\pi_c \, f_{\!\mathcal{N}}(x_i \mid \mu_c, \Sigma_c)\right)$,where $f_{\!\mathcal{N}}(x \mid \mu_c, \Sigma_c) = \frac{1}{\sqrt{(2\pi)^n|\boldsymbol\Sigma_c|}}\exp\left(-\frac{1}{2}({x}-{\mu_c})^T{\boldsymbol\Sigma_c}^{-1}({x}-{\mu_c})\right)$ is the probability density function (pdf) of the normal distribution $\mathcal{N}(x_i \mid \mu_c, \Sigma_c)$. E-stepIn this step we need to estimate the posterior distribution over the latent variables with fixed values of parameters: $q_i(t_i) = p(t_i \mid x_i, \theta)$. We assume that $t_i$ equals to the cluster index of the true component of the $x_i$ object. To do so we need to compute $\gamma_{ic} = p(t_i = c \mid x_i, \theta)$. Note that $\sum\limits_{c=1}^C\gamma_{ic}=1$. 
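Concretely, Bayes' rule gives these responsibilities as

$$\gamma_{ic} = \frac{\pi_c \, f_{\!\mathcal{N}}(x_i \mid \mu_c, \Sigma_c)}{\sum\limits_{c'=1}^{C} \pi_{c'} \, f_{\!\mathcal{N}}(x_i \mid \mu_{c'}, \Sigma_{c'})},$$

a softmax-like ratio of weighted Gaussian densities, which is exactly the kind of expression that the numerical tricks below help you compute stably.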
Important trick 1: It is important to avoid numerical errors. At some point you will have to compute the formula of the following form: $\frac{e^{y_i}}{\sum_j e^{y_j}}$, which is called _softmax_. When you compute exponents of large numbers, some numbers may become infinity. You can avoid this by dividing numerator and denominator by $e^{\max(y)}$: $\frac{e^{y_i-\max(y)}}{\sum_j e^{y_j - \max(y)}}$. After this transformation maximum value in the denominator will be equal to one. All other terms will contribute smaller values. So, to compute desired formula you first subtract maximum value from each component in vector $\mathbf{y}$ and then compute everything else as before.Important trick 2: You will probably need to compute formula of the form $A^{-1}x$ at some point. You would normally inverse $A$ and then multiply it by $x$. A bit faster and more numerically accurate way to do this is to directly solve equation $Ay = x$ by using a special function. Its solution is $y=A^{-1}x$, but the equation $Ay = x$ can be solved by methods which do not explicitely invert the matrix. You can use ```np.linalg.solve``` for this.Other usefull functions: ```slogdet``` and ```det``` Task 1: Implement E-step for GMM using template below. ###Code def E_step(X, pi, mu, sigma): """ Performs E-step on GMM model Each input is numpy array: X: (N x d), data points pi: (C), mixture component weights mu: (C x d), mixture component means sigma: (C x d x d), mixture component covariance matrices Returns: gamma: (N x C), probabilities of clusters for objects """ N = X.shape[0] # number of objects C = pi.shape[0] # number of clusters d = mu.shape[1] # dimension of each object gamma = np.zeros((N, C)) # distribution q(T) ### YOUR CODE HERE gaussians = np.einsum( 'ijkl, ijkl -> ij', (X[:, np.newaxis, :] - mu)[:, :, :, np.newaxis], np.linalg.solve(sigma, (X[:, np.newaxis, :] - mu)[:, :, :, np.newaxis]) ) gaussians = gaussians - np.max(gaussians, axis=1)[:, np.newaxis] # trick for numerical stability gaussians = np.exp(-0.5 * gaussians) weighted_gaussians = pi * gaussians / np.sqrt(np.linalg.det(sigma)) gamma = weighted_gaussians / np.sum(weighted_gaussians, axis=1)[:, np.newaxis] return gamma gamma = E_step(X, pi0, mu0, sigma0) grader.submit_e_step(gamma) ###Output Current answer for task Task 1 (E-step) is: 0.5337178741081263 ###Markdown M-stepIn M-step we need to maximize $\mathbb{E}_{q(T)}\log p(X,T | \theta)$ with respect to $\theta$. In our model this means that we need to find optimal values of $\pi$, $\mu$, $\Sigma$. To do so, you need to compute the derivatives and set them to zero. You should start by deriving formulas for $\mu$ as it is the easiest part. Then move on to $\Sigma$. Here it is crucial to optimize function w.r.t. to $\Lambda = \Sigma^{-1}$ and then inverse obtained result. Finaly, to compute $\pi$, you will need Lagrange Multipliers technique to satisfy constraint $\sum\limits_{i=1}^{n}\pi_i = 1$.Important note: You will need to compute derivatives of scalars with respect to matrices. To refresh this technique from previous courses, see wiki article about it . Main formulas of matrix derivatives can be found in Chapter 2 of The Matrix Cookbook. For example, there you may find that $\frac{\partial}{\partial A}\log |A| = A^{-T}$. Task 2: Implement M-step for GMM using template below. 
###Code def M_step(X, gamma): """ Performs M-step on GMM model Each input is numpy array: X: (N x d), data points gamma: (N x C), distribution q(T) Returns: pi: (C) mu: (C x d) sigma: (C x d x d) """ N = X.shape[0] # number of objects C = gamma.shape[1] # number of clusters d = X.shape[1] # dimension of each object ### YOUR CODE HERE pi = gamma.sum(axis=0) / N mu = np.einsum('nc, nd -> cd', gamma, X) / gamma.sum(axis=0)[:, np.newaxis] matrix_term = np.matmul( (X[:, np.newaxis, :] - mu)[:, :, :, np.newaxis], (X[:, np.newaxis, :] - mu)[:, :, np.newaxis, :] # transpose on last 2 terms, i.e. transpose(0, 1, 3, 2) ) sigma = np.einsum( 'nc, ncab -> ncab', gamma, matrix_term ).sum(axis=0) / gamma.sum(axis=0)[:, np.newaxis, np.newaxis] return pi, mu, sigma gamma = E_step(X, pi0, mu0, sigma0) pi, mu, sigma = M_step(X, gamma) grader.submit_m_step(pi, mu, sigma) ###Output Current answer for task Task 2 (M-step: mu) is: 2.8993918820503843 Current answer for task Task 2 (M-step: sigma) is: 5.977105216897525 Current answer for task Task 2 (M-step: pi) is: 0.5507624459218775 ###Markdown Loss function Finally, we need some function to track convergence. We will use variational lower bound $\mathcal{L}$ for this purpose. We will stop our EM iterations when $\mathcal{L}$ will saturate. Usually, you will need only about 10-20 iterations to converge. It is also useful to check that this function never decreases during training. If it does, you have a bug in your code.Task 3: Implement a function that will compute $\mathcal{L}$ using template below.$$\mathcal{L} = \sum_{i=1}^{N} \sum_{c=1}^{C} q(t_i =c) (\log \pi_c + \log f_{\!\mathcal{N}}(x_i \mid \mu_c, \Sigma_c)) - \sum_{i=1}^{N} \sum_{c=1}^{K} q(t_i =c) \log q(t_i =c)$$ ###Code def compute_vlb(X, pi, mu, sigma, gamma): """ Each input is numpy array: X: (N x d), data points gamma: (N x C), distribution q(T) pi: (C) mu: (C x d) sigma: (C x d x d) Returns value of variational lower bound """ N = X.shape[0] # number of objects C = gamma.shape[1] # number of clusters d = X.shape[1] # dimension of each object ### YOUR CODE HERE norm_coeff = (1 / np.sqrt(np.power(2 * np.pi, d) * np.linalg.det(sigma))) gaussian_terms = - 0.5 * np.einsum( 'ijkl, ijkl -> ij', (X[:, np.newaxis, :] - mu)[:, :, :, np.newaxis], np.linalg.solve(sigma, (X[:, np.newaxis, :] - mu)[:, :, :, np.newaxis]) ) loss = (gamma * (np.log(pi+1e-20) + np.log(norm_coeff+1e-20) + gaussian_terms - np.log(gamma+1e-20))).sum() return loss pi, mu, sigma = pi0, mu0, sigma0 gamma = E_step(X, pi, mu, sigma) pi, mu, sigma = M_step(X, gamma) loss = compute_vlb(X, pi, mu, sigma, gamma) grader.submit_VLB(loss) ###Output Current answer for task Task 3 (VLB) is: -1213.9734643060183 ###Markdown Bringing it all together Now that we have E step, M step and VLB, we can implement the training loop. We will initialize values of $\pi$, $\mu$ and $\Sigma$ to some random numbers, train until $\mathcal{L}$ stops changing, and return the resulting points. We also know that the EM algorithm converges to local optima. To find a better local optima, we will restart the algorithm multiple times from different (random) starting positions. Each training trial should stop either when maximum number of iterations is reached or when relative improvement is smaller than given tolerance ($|\frac{\mathcal{L}_i-\mathcal{L}_{i-1}}{\mathcal{L}_{i-1}}| \le \text{rtol}$).Remember, that initial (random) values of $\pi$ that you generate must be non-negative and sum up to 1. Also, $\Sigma$ matrices must be symmetric and positive semi-definite. 
If you don't know how to generate those matrices, you can use $\Sigma=I$ as initialization.You will also sometimes get numerical errors because of component collapsing. The easiest way to deal with this problems is to restart the procedure.Task 4: Implement training procedure ###Code def train_EM(X, C, rtol=1e-3, max_iter=100, restarts=10): ''' Starts with random initialization *restarts* times Runs optimization until saturation with *rtol* reached or *max_iter* iterations were made. X: (N, d), data points C: int, number of clusters ''' N = X.shape[0] # number of objects d = X.shape[1] # dimension of each object best_loss = None best_pi = None best_mu = None best_sigma = None for _ in range(restarts): try: ### YOUR CODE HERE pi = np.random.uniform(low=0.0, high=1.0, size=C) pi = pi / pi.sum() # normalisation mu = np.random.uniform(low=0.0, high=1.0, size=(C, d)) sigma = np.repeat(np.eye(d)[np.newaxis, :, :], repeats=C, axis=0) loss = None for iter_ in range(max_iter): gamma = E_step(X, pi, mu, sigma) pi, mu, sigma = M_step(X, gamma) current_loss = compute_vlb(X, pi, mu, sigma, gamma) if loss is not None and current_loss < loss: raise ValueError("The vlb loss is increasing, there is a bug somewhere!") if iter_ > 0 and np.abs((current_loss - loss) / loss) <= rtol: print(f"Reached convergence in {iter_} iterations out ot {max_iter}") break loss = current_loss if best_loss is None or loss > best_loss: best_loss = loss best_pi = pi best_mu = mu best_sigma = sigma except np.linalg.LinAlgError: print("Singular matrix: components collapsed") pass return best_loss, best_pi, best_mu, best_sigma best_loss, best_pi, best_mu, best_sigma = train_EM(X, 3) grader.submit_EM(best_loss) ###Output Reached convergence in 27 iterations out ot 100 Reached convergence in 9 iterations out ot 100 Reached convergence in 16 iterations out ot 100 Reached convergence in 36 iterations out ot 100 Reached convergence in 27 iterations out ot 100 Reached convergence in 9 iterations out ot 100 Reached convergence in 14 iterations out ot 100 Reached convergence in 7 iterations out ot 100 Reached convergence in 10 iterations out ot 100 Reached convergence in 6 iterations out ot 100 Current answer for task Task 4 (EM) is: -1064.1946904165547 ###Markdown If you implemented all the steps correctly, your algorithm should converge in about 20 iterations. Let's plot the clusters to see it. We will assign a cluster label as the most probable cluster index. This can be found using a matrix $\gamma$ computed on last E-step. ###Code gamma = E_step(X, best_pi, best_mu, best_sigma) labels = gamma.argmax(axis=1) colors = np.array([(31, 119, 180), (255, 127, 14), (44, 160, 44)]) / 255. plt.scatter(X[:, 0], X[:, 1], c=colors[labels], s=30) plt.axis('equal') plt.show() ###Output _____no_output_____ ###Markdown Authorization & SubmissionTo submit assignment parts to Cousera platform, please, enter your e-mail and token into variables below. You can generate a token on this programming assignment's page. Note: The token expires 30 minutes after generation. 
###Code STUDENT_EMAIL = '' STUDENT_TOKEN = '' grader.status() ###Output You want to submit these numbers: Task Task 1 (E-step): 0.5337178741081263 Task Task 2 (M-step: mu): 2.8993918820503843 Task Task 2 (M-step: sigma): 5.977105216897525 Task Task 2 (M-step: pi): 0.5507624459218775 Task Task 3 (VLB): -1213.9734643060183 Task Task 4 (EM): -1064.1946904165547 ###Markdown If you want to submit these answers, run cell below ###Code grader.submit(STUDENT_EMAIL, STUDENT_TOKEN) ###Output Submitted to Coursera platform. See results on assignment page!
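###Markdown
Returning to the initialization advice in Task 4: if you prefer randomized covariance initializations over $\Sigma=I$, one simple recipe is to form $A A^T + \varepsilon I$ for a random matrix $A$, which is symmetric and positive definite by construction. The sketch below is only a suggestion for experimentation; the helper name `random_spd_matrix` is not part of the assignment template.

###Code
# Optional sketch: draw a random symmetric positive-definite matrix for
# covariance initialization. A @ A.T is positive semi-definite; adding
# eps * I pushes the eigenvalues strictly above zero.
def random_spd_matrix(d, eps=1e-3):
    A = np.random.randn(d, d)
    return A @ A.T + eps * np.eye(d)

# e.g. an alternative to sigma = np.repeat(np.eye(d)[np.newaxis, :, :], repeats=C, axis=0):
# sigma = np.stack([random_spd_matrix(d) for _ in range(C)])

###Output
_____no_output_____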
ML-AML-Walkthrough/03-Operationalization/03-operationalization.ipynb
###Markdown Lab 3 - Model Deplyoment In this lab, you will learn how to use Azure Machine Learning Service to deploy, manage, and monitor the trained models.The following diagram illustrates the complete deployment workflow.![AML Arch](https://github.com/jakazmie/images-for-hands-on-labs/raw/master/model-ci-cd.png)The deployment workflow includes the following steps:- Create/Retrain the model- Register the model in a registry hosted in your Azure Machine Learning Service workspace- Register an image that pairs a model with a scoring script and dependencies in a portable container- Deploy the image as a web service in the cloud or to edge devices- Monitor and collect dataYou completed the first two steps in the previous labs.In this lab we will walk-through the reminder of the deployment workflow. Connect to the workspace ###Code # Verify AML SDK Installed # view version history at https://pypi.org/project/azureml-sdk/#history import azureml.core print("SDK Version:", azureml.core.VERSION) from azureml.core import Workspace # Read the workspace config from file ws = Workspace.from_config() print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\n') ###Output _____no_output_____ ###Markdown Create and deploy the container image encapsulating the modelWhen you deploy a model using AML to either ACI or AKS, you are deploying a Docker container encapsulating a trained model, its dependencies, and a web services wrapper around the model. Create scoring scriptCreate the scoring script, called score.py, used by the web service call to invoke the model.You must include two required functions in the scoring script:- The `init()` function, which loads the model into a global object. This function is run only once when the Docker container is started.- The `run(input_data)` function uses the model to predict a value based on the input data. Inputs and outputs to the run typically use JSON for serialization and de-serialization, but other formats can be used. ###Code %%writefile score.py import json import os import numpy as np import pandas as pd from sklearn.pipeline import Pipeline from sklearn.externals import joblib from azureml.core.model import Model from azureml.core import Workspace def init(): try: global model model_name = '<<modelid>>' model_path = Model.get_model_path(model_name) model = joblib.load(model_path) except Exception as e: print('Exception during init: ', str(e)) def run(input_json): try: inputs = json.loads(input_json) prediction = model.predict(inputs) prediction = json.dumps(prediction.tolist()) except Exception as e: prediction = str(e) return prediction ###Output _____no_output_____ ###Markdown Substitute the actual model ID in the script file. ###Code from azureml.core.model import Model model_name = 'propensity_to_buy_predictor' model = Model(ws, name=model_name) script_file_name = 'score.py' with open(script_file_name, 'r') as cefr: content = cefr.read() with open(script_file_name, 'w') as cefw: cefw.write(content.replace('<<modelid>>', model.name)) ###Output _____no_output_____ ###Markdown Review the updated script. ###Code with open("score.py","r") as f: print(f.read()) ###Output _____no_output_____ ###Markdown Create a Conda dependencies environment file.Next, create an environment file that specifies the script's package dependencies. This file is used to ensure that all of those dependencies are installed in the Docker image. 
To ensure the consistency of the prediction results with the training results, the AML SDK dependency versions used by the scoring environment needs to be the same as in the environment that was used to train the model.The SDK dependency versions used to train the model can be retrieved from the run history.You need to replace the values in `experiment_name` with the name of your experiment. Create conda environment file. ###Code from azureml.core.conda_dependencies import CondaDependencies mycondaenv = CondaDependencies.create(conda_packages=['scikit-learn','numpy','pandas']) with open("mydeployenv.yml","w") as f: f.write(mycondaenv.serialize_to_string()) ###Output _____no_output_____ ###Markdown Review the content of 'yml' file. ###Code with open("mydeployenv.yml","r") as f: print(f.read()) ###Output _____no_output_____ ###Markdown Create docker image for deploymentTo create a Container Image, you need four things: the model metadata (as retrieved from Model Registry), the scoring script file, the runtime configuration (defining whether Python or PySpark should be used) and the Conda Dependencies file. ###Code from azureml.core.image import ContainerImage, Image # Define runtime runtime = "python" # Define scoring script driver_file = "score.py" # Define conda dependencies conda_file = "mydeployenv.yml" # configure the image image_config = ContainerImage.image_configuration(execution_script=driver_file, runtime=runtime, conda_file=conda_file, description="Image for propensity to buy predictor", tags={"Classifier": "AutomatedML"}) image = Image.create(name = "propensity-to-buy-classifier", models = [model], image_config = image_config, workspace = ws) image.wait_for_creation(show_output = True) ###Output _____no_output_____ ###Markdown Deploy the container image to ACIWith the Container Image in hand, you are almost ready to deploy to ACI. The next step is to define the size of the VM that ACI will use to run your Container. ###Code from azureml.core.webservice import AciWebservice, Webservice aci_config = AciWebservice.deploy_configuration( cpu_cores = 1, memory_gb = 1, tags = {'name':'Azure ML ACI'}, description = 'This is a deployment of the propensity to buy predictor.') ###Output _____no_output_____ ###Markdown At this point you can deploy the image to the webservice to ACI ###Code from azureml.core.webservice import Webservice service_name = "propensity-to-buy-predictor-aci" print("Deploying: ", service_name) aci_service = Webservice.deploy_from_image(deployment_config = aci_config, image = image, name = service_name, workspace = ws) aci_service.wait_for_deployment(True) #print(aci_service.get_logs()) ###Output _____no_output_____ ###Markdown Test the serviceOnce the webservice deployment completes, you can use the returned webservice object to invoke the webservice. 
Load test data ###Code import numpy as np import pandas as pd import os # Load a test dataset folder = '../datasets' filename = 'banking_test.csv' pathname = os.path.join(folder, filename) df_test = pd.read_csv(pathname, delimiter=',') feature_columns = [ # Demographic 'age', 'job', 'education', 'marital', 'housing', 'loan', # Previous campaigns 'month', 'campaign', 'poutcome', # Economic indicators 'emp_var_rate', 'cons_price_idx', 'cons_conf_idx', 'euribor3m', 'nr_employed'] df_test = df_test[feature_columns] df_test = pd.get_dummies(df_test, drop_first=True).astype(dtype='float') ###Output _____no_output_____ ###Markdown Invoke the service ###Code import json test_data = json.dumps(df_test[0:10].values.tolist()) result = aci_service.run(input_data = test_data) print(result) ###Output _____no_output_____ ###Markdown Clean up ###Code aci_service.delete() ###Output _____no_output_____ ###Markdown Deploy the container image to AKSOnce you are familiar with the process for deploying a webservice to ACI, you will find the process for deploying to AKS to be similar with one additional step that creates the AKS cluster first. ###Code # Provision an AKS cluster from azureml.core.compute import AksCompute, ComputeTarget from azureml.core.webservice import Webservice, AksWebservice # Use the default configuration, overriding the default location to a known region that supports AKS prov_config = AksCompute.provisioning_configuration(location='westus2') aks_name = 'aks-cluster01' # Create the cluster aks_target = ComputeTarget.create(workspace = ws, name = aks_name, provisioning_configuration = prov_config) # Wait for cluster to be ready aks_target.wait_for_completion(show_output = True) print(aks_target.provisioning_state) print(aks_target.provisioning_errors) ###Output _____no_output_____ ###Markdown With your AKS cluster ready, now you can deploy your webservice. Once again, you need to provide a configuration for the size of resources allocated from the AKS cluster to run instances of your Container. ###Code from azureml.core.image import ContainerImage, Image images = Image.list(ws, image_name="propensity-to-buy-classifier") images image = images[0] # Create the web service configuration (using defaults) aks_config = AksWebservice.deploy_configuration() aks_service_name ='propensity-to-buy-predictor-aks' aks_service = Webservice.deploy_from_image( workspace=ws, name=aks_service_name, image = image, deployment_target=aks_target ) aks_service.wait_for_deployment(show_output = True) print(aks_service.state) ###Output _____no_output_____ ###Markdown Test the serviceAs before, you can use the webservice object returned by the deploy_from_model method to invoke your deployed webservice. 
###Code import json test_data = json.dumps(df_test[0:10].values.tolist()) result = aks_service.run(input_data = test_data) print(result) ###Output _____no_output_____
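###Markdown
In practice you will usually also want to call the AKS endpoint over plain HTTP and tear the service down when you are finished. The sketch below is illustrative only and assumes the default key-based authentication of AKS web services; adapt it to your own deployment before relying on it.

###Code
# Illustrative sketch (not executed in this lab): call the AKS-hosted endpoint
# over HTTP with key-based authentication, then clean up the service.
import requests

scoring_uri = aks_service.scoring_uri               # REST endpoint of the web service
primary_key, secondary_key = aks_service.get_keys()

headers = {"Content-Type": "application/json",
           "Authorization": "Bearer " + primary_key}
response = requests.post(scoring_uri, data=test_data, headers=headers)
print(response.json())

# Delete the service when it is no longer needed (mirrors aci_service.delete() above)
# aks_service.delete()

###Output
_____no_output_____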
03_Python_Flow_Control_examples/007_find_the_factorial_of_a_number.ipynb
###Markdown All the IPython Notebooks in this **Python Examples** series by Dr. Milaan Parmar are available @ **[GitHub](https://github.com/milaan9/90_Python_Examples)** Python Program to Find the Factorial of a NumberIn this article, you'll learn to find the factorial of a number and display it.To understand this example, you should have the knowledge of the following **[Python programming](https://github.com/milaan9/01_Python_Introduction/blob/main/000_Intro_to_Python.ipynb)** topics:* **[Python if-else Statement](https://github.com/milaan9/03_Python_Flow_Control/blob/main/002_Python_if_else_statement.ipynb)*** **[Python if-elif-else Statement](https://github.com/milaan9/03_Python_Flow_Control/blob/main/003_Python_if_elif_else_statement%20.ipynb)*** **[Python for Loop](https://github.com/milaan9/03_Python_Flow_Control/blob/main/005_Python_for_Loop.ipynb)** The factorial of a number is the product of all the integers from 1 to that number.For example, the factorial of 6 is **`1*2*3*4*5*6 = 720`**. Factorial is not defined for negative numbers, and the factorial of zero is one, **`0! = 1`**. ###Code # Example 1: find the factorial of a number provided by the user. # change the value for a different result num = 6 # To take input from the user #num = int(input("Enter a number: ")) factorial = 1 # check if the number is negative, positive or zero if num < 0: print("Sorry, factorial does not exist for negative numbers") elif num == 0: print("The factorial of 0 is 1") else: for i in range(1,num + 1): factorial = factorial*i print("The factorial of",num,"is",factorial) ''' >>Output/Runtime Test Cases: The factorial of 6 is 720 ''' ###Output The factorial of 6 is 720
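###Markdown
The example above builds the factorial with an explicit loop, which is the point of the exercise. As an additional sketch, you can cross-check the result with `math.factorial` from Python's standard library.

###Code
# Example 2: cross-check the loop-based result with the standard library
from math import factorial

num = 6

if num < 0:
    print("Sorry, factorial does not exist for negative numbers")
else:
    print("The factorial of", num, "is", factorial(num))

###Output
_____no_output_____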
Choropleth_paints_thousand_words.ipynb
###Markdown Making Choropleth's for FAO Land use data 1) Additional packages to import are geopandas and descartes. ###Code import geopandas as gpd import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import descartes pd.set_option('display.max_rows',500, 'display.max_columns', None) %matplotlib inline ###Output _____no_output_____ ###Markdown 2) Import world map shapefile ###Code shapefile = 'ne_110m_admin_0_countries_lakes-Copy1.shp' world_gdf = gpd.read_file(shapefile)[['ADMIN','NAME','geometry']] world_gdf.head() world_gdf.columns = ['Country','Country_short','geometry'] world_gdf.head(2) world_gdf.plot() plt.show() world_gdf.info() ###Output _____no_output_____ ###Markdown 2.1) Remove Antarctica ###Code print(world_gdf[world_gdf['Country']=='Antarctica']) world_gdf = world_gdf.drop(world_gdf.index[159]) world_gdf.plot(); ###Output _____no_output_____ ###Markdown 3) Import countries_land_use.csv file saved from the Jupyter notebook EDA of Land use FAO data_blog ###Code countries_land_use = pd.read_csv('countries_land_use.csv') countries_land_use.head(3) ###Output _____no_output_____ ###Markdown 3.1) Rename countries to world_gdf names ###Code countries_land_use['Country'].unique() world_country_names = world_gdf['Country'].to_list() world_country_names.sort() world_country_names rename_map = {'Bolivia (Plurinational State of)':'Bolivia','Brunei Darussalam':'Brunei', 'Falkland Islands (Malvinas)':'Falkland Islands','Iran (Islamic Republic of)':'Iran', 'Lao People''s Democratic Republic':'Laos','Democratic People''s Republic of Korea':'North Korea', 'Republic of Korea':'South Korea','Serbia':'Republic of Serbia','Sudan(former)':'Sudan', 'Syrian Arab Republic':'Syria','Timor-Leste':'East Timor','USSR':'Russia','Russian Federation':'Russia', 'Venezuela (Bolivarian Republic of)':'Venezuela','Viet Nam':'Vietnam'} countries_land_use['Country']=countries_land_use['Country'].map(rename_map).fillna(countries_land_use['Country']) ###Output _____no_output_____ ###Markdown 3.2) Create subset of DataFrame for arable land use ###Code arable = countries_land_use[countries_land_use['Land_use']=='Arable land'] Rus = (arable[arable['Country']== 'Russia']) Rus Rus_sum = Rus.sum(skipna=True) Rus_DF = pd.DataFrame(Rus_sum) Rus_DF = Rus_DF.T Rus_DF['Country'] = Rus_DF['Country'].replace({'RussiaRussia':'Russia'}) Rus_DF['Land_use'] = Rus_DF['Land_use'].replace({'Arable landArable land': 'Arable land'}) Rus_DF['Element'] = Rus_DF['Element'].replace({'AreaArea':'Area'}) Rus_DF arable = arable.drop([3901,4962]) arable = arable.append(Rus_DF, ignore_index=True) arable.tail() arable['Country'].value_counts() ###Output _____no_output_____ ###Markdown 4) Merge world GeoDataFrame and Arable DataFrameCreate a GeoDataFrame that contains both polygon geometry and land use data ###Code arable_gdf = world_gdf.merge(arable, on='Country', how='outer') type(arable_gdf) arable_gdf ###Output _____no_output_____ ###Markdown 4.1) Fill NaN with zeroCountry rows with NaN will not appear on the map ###Code arable_gdf.loc[:,'1961':'2017'] = arable_gdf.loc[:,'1961':'2017'].fillna(0) arable_gdf ###Output _____no_output_____ ###Markdown 5) Plot Choropleth ###Code plt.rcParams['figure.figsize'] = [20, 10] arable_gdf.plot(column = '2017', cmap = 'Oranges', edgecolor = 'gray', legend=True) plt.show() ###Output _____no_output_____ ###Markdown 5.1) Customise the plot and colorbarTo customize the plot and colorbar define the plot axes as ax and the legend axes as cax, then pass these to the .plot() 
function. The below example uses the mpl_toolkits make_axes_locatable function to vertically align the plot and legend axes. ###Code from mpl_toolkits.axes_grid1 import make_axes_locatable fig, ax = plt.subplots(1, 1, figsize=(20,10)) divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="5%", pad=0.1) arable_gdf.plot(column = '2017', ax=ax, legend=True, cax=cax, cmap = 'Oranges', edgecolor = 'gray', linewidth=0.5) ax.set_title('Hectares used for arable land, 2017', fontsize=20) cax.set_title('Hectares', fontsize=14); #plt.rcParams(['figure.figsize'] = [90, 90], 'fontsize' = 20) ###Output _____no_output_____ ###Markdown Repeat for Permanent Meadows and Pastures ###Code perm_meadows = countries_land_use[countries_land_use['Land_use']=='Land under perm. meadows and pastures'] Rus2 = (perm_meadows[perm_meadows['Country']== 'Russia']) Rus2 Rus2_sum = Rus2.sum(skipna=True) Rus2_DF = pd.DataFrame(Rus2_sum) Rus2_DF = Rus2_DF.T Rus2_DF['Country'] = Rus2_DF['Country'].replace({'RussiaRussia':'Russia'}) Rus2_DF['Land_use'] = Rus2_DF['Land_use'].replace({'Land under perm. meadows and pasturesLand under perm. meadows and pastures': 'Land under perm. meadows and pastures'}) Rus2_DF['Element'] = Rus2_DF['Element'].replace({'AreaArea':'Area'}) Rus2_DF perm_meadows = perm_meadows.drop([3904,4964]) perm_meadows = perm_meadows.append(Rus2_DF, ignore_index=True) perm_meadows.tail() perm_meadows['Country'].value_counts() perm_meadows_gdf = world_gdf.merge(perm_meadows, on='Country', how='outer') type(perm_meadows_gdf) perm_meadows_gdf perm_meadows_gdf.loc[:,'1961':'2017'] = perm_meadows_gdf.loc[:,'1961':'2017'].fillna(0) from mpl_toolkits.axes_grid1 import make_axes_locatable fig, ax = plt.subplots(1, 1, figsize=(20,10)) divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="5%", pad=0.1) perm_meadows_gdf.plot(column = '2017', ax=ax, legend=True, cax=cax, cmap = 'Greens', edgecolor = 'gray', linewidth=0.5) ax.set_title('Hectares used for permanent meadows & pasture land, 2017', fontsize=20) cax.set_title('Hectares', fontsize=14); plt.show() #plt.rcParams(['figure.figsize'] = [90, 90], 'fontsize' = 20) fig, ax = plt.subplots(1,1) perm_meadows_gdf.plot(column = '2017', ax=ax, legend=True, legend_kwds={'pad': 0.02, 'label':"Permanent meadows & pastures (ha)", 'orientation':"horizontal"}, cmap = 'Greens', edgecolor = 'gray') #plt.xticks(fontsize=200) ###Output _____no_output_____ ###Markdown 6) Choropleth showing change in hectares in use since 1961 Make a copy of arable_gdf as I don't want to change the original. ###Code arable_gdf_diff = arable_gdf ###Output _____no_output_____ ###Markdown Create a column to contain the values for the calculation 2017 hectares - 1961 hectares, to see how hectares in use have changed since 1961. ###Code arable_gdf_diff['2017-1961'] = arable_gdf_diff['2017'] - arable_gdf_diff['1961'] arable_gdf_diff ###Output _____no_output_____ ###Markdown 6.1) Normalise the colorbarBy default any colorbar applied in matplotlib will diverge from the midpoint between the max and min values of the plotted column e.g. 1000 to -4500. This is not very useful when using divering colormaps to show positive and negative values as the plot below shows. The zero point is represented by blue and some negative values are also blue, when what we want is the colourmap to diverge from zero, the grey midpoint color, positive values to be blue and negative values to be red. 
###Code from mpl_toolkits.axes_grid1 import make_axes_locatable import matplotlib.colors as colors fig, ax = plt.subplots(1, 1, figsize=(20,10)) divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="5%", pad=0.1) arable_gdf_diff.plot(column = '2017-1961', ax=ax, legend=True, cax=cax, cmap = 'coolwarm_r', edgecolor = 'gray', linewidth=0.5) # _r reverses the colormap so red represents negative values ax.set_title('Change in hectares used for Arable land between 1961 and 2017', fontsize=20) cax.set_title('Hectares', fontsize=14); ###Output _____no_output_____ ###Markdown 6.2) Rest color midpoint to zero, DivergingNorm FunctionNormalise the colormap around the zero centerpoint by using the DivergingNorm function in Matplotlib as shown below. The resulting choropleth is a much clearer representation of how land use has changed between 1961 and 2017. ###Code from mpl_toolkits.axes_grid1 import make_axes_locatable import matplotlib.colors as colors fig, ax = plt.subplots(1, 1, figsize=(20,10)) divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="5%", pad=0.1) #normalise the colormap around zero vmin, vmax, vcenter = arable_gdf_diff['2017-1961'].min(), arable_gdf_diff['2017-1961'].max(), 0 divnorm = colors.DivergingNorm(vmin=vmin, vcenter=vcenter, vmax=vmax) arable_gdf_diff.plot(column = '2017-1961', ax=ax, legend=True, cax=cax, cmap = 'coolwarm_r', norm=divnorm, edgecolor = 'gray', linewidth=0.5) # _r reverses the colormap so red represents negative values ax.set_title('Change in hectares used for Arable land between 1961 and 2017', fontsize=20) cax.set_title('Hectares', fontsize=14); perm_meadows_gdf_diff = perm_meadows_gdf perm_meadows_gdf_diff['2017-1961'] = perm_meadows_gdf_diff['2017'] - perm_meadows_gdf_diff['1961'] perm_meadows_gdf_diff from mpl_toolkits.axes_grid1 import make_axes_locatable fig, ax = plt.subplots(1, 1, figsize=(20,10)) divider = make_axes_locatable(ax) cax = divider.append_axes("right", size="5%", pad=0.1) vmin, vmax, vcenter = perm_meadows_gdf_diff['2017-1961'].min(), perm_meadows_gdf_diff['2017-1961'].max(), 0 divnorm = colors.DivergingNorm(vmin=vmin, vcenter=vcenter, vmax=vmax) perm_meadows_gdf_diff.plot(column = '2017-1961', ax=ax, legend=True, cax=cax, cmap = 'coolwarm_r', norm=divnorm, edgecolor = 'gray', linewidth=0.5) ax.set_title('Change in hectares used for Permanent meadows & pasture land between 1961 and 2017', fontsize=20) cax.set_title('Hectares', fontsize=14); ###Output _____no_output_____
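###Markdown
Two small portability notes on the cells above. First, `colors.DivergingNorm` was renamed to `colors.TwoSlopeNorm` in Matplotlib 3.2 and the old name was later removed, so on recent Matplotlib versions the equivalent call is the one sketched below. Second, `arable_gdf_diff = arable_gdf` only binds a second name to the same GeoDataFrame; if you genuinely want to leave the original untouched, use `arable_gdf_diff = arable_gdf.copy()`.

###Code
# Portability sketch: prefer TwoSlopeNorm on Matplotlib >= 3.2, falling back
# to DivergingNorm on older versions where TwoSlopeNorm does not exist yet.
import matplotlib.colors as colors

vmin, vmax, vcenter = arable_gdf_diff['2017-1961'].min(), arable_gdf_diff['2017-1961'].max(), 0
try:
    divnorm = colors.TwoSlopeNorm(vmin=vmin, vcenter=vcenter, vmax=vmax)
except AttributeError:          # Matplotlib < 3.2
    divnorm = colors.DivergingNorm(vmin=vmin, vcenter=vcenter, vmax=vmax)

###Output
_____no_output_____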
notebook/Milestone3-Task3.ipynb
###Markdown Task 3 Imports ###Code import numpy as np import pandas as pd from joblib import dump, load from sklearn.metrics import mean_squared_error from sklearn.ensemble import RandomForestRegressor from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt plt.style.use('ggplot') plt.rcParams.update({'font.size': 16, 'axes.labelweight': 'bold', 'figure.figsize': (8,6)}) ###Output Matplotlib created a temporary config/cache directory at /tmp/matplotlib-zv8mqxbr because the default path (/home/jupyter-student85/.cache/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing. ###Markdown Part 1: Recall as a final goal of this project. We want to build and deploy ensemble machine learning models in the cloud, where features are outputs of different climate models and the target is the actual rainfall observation. In this milestone, you'll actually build these ensemble machine learning models in the cloud. **Your tasks:**1. Read the data CSV from your s3 bucket. 2. Drop rows with nans. 3. Split the data into train (80%) and test (20%) portions with `random_state=123`. 4. Carry out EDA of your choice on the train split. 5. Train ensemble machine learning model using `RandomForestRegressor` and evaluate with metric of your choice (e.g., `RMSE`) by considering `Observed` as the target column. 6. Discuss your results. Are you getting better results with ensemble models compared to the individual climate models? > Recall that individual columns in the data are predictions of different climate models. ###Code # Step 1: Read the data CSV from our s3 bucket aws_credentials ={"key": " ","secret": " "} df = pd.read_csv('s3://mds-s3-student85/output/ml_data_SYD.csv', storage_options=aws_credentials) # Step 2: Drop rows with nans df = df.dropna() df.head() # Step 3: Split the data into train (80%) and test (20%) portions with random_state=123 X = df.drop(columns = ['observed_rainfall', 'time']) y = df['observed_rainfall'] X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=123 ) # Step 4: Carry out EDA of our choice on the train split X_train.describe() y_train.describe() # Step 5: Train ensemble machine learning model using RandomForestRegressor # We chose to evalutate on RMSE metric model = RandomForestRegressor(criterion='mse', random_state=123, n_jobs=-1) model.fit(X_train, y_train) pred = model.predict(X_test) pred rmse = mean_squared_error(y_test, pred, squared=False) rmse # Step 6: Discuss results # RMSE for the individual climate models rmse = {} for col in X_test.columns: rmse[col] = mean_squared_error(y_test, X_test[col], squared=False) rmse ###Output _____no_output_____ ###Markdown Yes, we are getting better results with ensemble models compared to the individual climate models as we can see that individual climate models have greater RMSEs than the ensemble model. Part 2: Preparation for deploying model next week Complete task 4 from the milestone3 before coming here We’ve found ```n_estimators=100, max_depth=5``` to be the best hyperparameter settings with MLlib (from the task 4 from milestone3), here we then use the same hyperparameters to train a scikit-learn model. 
###Code model = RandomForestRegressor(n_estimators=100, max_depth=5, bootstrap=True) model.fit(X_train, y_train) print(f"Train RMSE: {mean_squared_error(y_train, model.predict(X_train), squared=False):.2f}") print(f" Test RMSE: {mean_squared_error(y_test, model.predict(X_test), squared=False):.2f}") # ready to deploy dump(model, "model.joblib") ###Output _____no_output_____ ###Markdown Task 3 Imports ###Code import numpy as np import pandas as pd from joblib import dump, load from sklearn.metrics import mean_squared_error from sklearn.ensemble import RandomForestRegressor from PIL import Image from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt plt.style.use('ggplot') plt.rcParams.update({'font.size': 16, 'axes.labelweight': 'bold', 'figure.figsize': (8,6)}) ## add any other additional packages that you need. You are free to use any packages for vizualization. ###Output _____no_output_____ ###Markdown Part 1: Recall as a final goal of this project. We want to build and deploy ensemble machine learning models in the cloud, where features are outputs of different climate models and the target is the actual rainfall observation. In this milestone, you'll actually build these ensemble machine learning models in the cloud. **Your tasks:**1. Read the data CSV from your s3 bucket. 2. Drop rows with nans. 3. Split the data into train (80%) and test (20%) portions with `random_state=123`. 4. Carry out EDA of your choice on the train split. 5. Train ensemble machine learning model using `RandomForestRegressor` and evaluate with metric of your choice (e.g., `RMSE`) by considering `Observed` as the target column. 6. Discuss your results. Are you getting better results with ensemble models compared to the individual climate models? > Recall that individual columns in the data are predictions of different climate models. 
###Code ## Depending on the permissions that you provided to your bucket you might need to provide your aws credentials ## to read from the bucket, if so provide with your credentials and pass as storage_options=aws_credentials # aws_credentials = {"key": "","secret": "","token":""} # df = pd.read_csv("s3://xxxx/ml_data_SYD.csv", index_col=0, parse_dates=True) ## Use your ML skills to get from step 1 to step 6 import sys !{sys.executable} -m pip install s3fs ###Output Requirement already satisfied: s3fs in c:\programdata\anaconda3\lib\site-packages (2022.3.0) Requirement already satisfied: fsspec==2022.3.0 in c:\programdata\anaconda3\lib\site-packages (from s3fs) (2022.3.0) Requirement already satisfied: aiobotocore~=2.2.0 in c:\programdata\anaconda3\lib\site-packages (from s3fs) (2.2.0) Requirement already satisfied: aiohttp<=4 in c:\programdata\anaconda3\lib\site-packages (from s3fs) (3.8.1) Requirement already satisfied: wrapt>=1.10.10 in c:\programdata\anaconda3\lib\site-packages (from aiobotocore~=2.2.0->s3fs) (1.11.2) Requirement already satisfied: aioitertools>=0.5.1 in c:\programdata\anaconda3\lib\site-packages (from aiobotocore~=2.2.0->s3fs) (0.10.0) Requirement already satisfied: botocore<1.24.22,>=1.24.21 in c:\programdata\anaconda3\lib\site-packages (from aiobotocore~=2.2.0->s3fs) (1.24.21) Requirement already satisfied: asynctest==0.13.0; python_version < "3.8" in c:\programdata\anaconda3\lib\site-packages (from aiohttp<=4->s3fs) (0.13.0) Requirement already satisfied: frozenlist>=1.1.1 in c:\programdata\anaconda3\lib\site-packages (from aiohttp<=4->s3fs) (1.3.0) Requirement already satisfied: attrs>=17.3.0 in c:\users\kylea\appdata\roaming\python\python37\site-packages (from aiohttp<=4->s3fs) (21.2.0) Requirement already satisfied: typing-extensions>=3.7.4; python_version < "3.8" in c:\programdata\anaconda3\lib\site-packages (from aiohttp<=4->s3fs) (3.10.0.2) Requirement already satisfied: multidict<7.0,>=4.5 in c:\programdata\anaconda3\lib\site-packages (from aiohttp<=4->s3fs) (6.0.2) Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in c:\programdata\anaconda3\lib\site-packages (from aiohttp<=4->s3fs) (4.0.2) Requirement already satisfied: yarl<2.0,>=1.0 in c:\programdata\anaconda3\lib\site-packages (from aiohttp<=4->s3fs) (1.7.2) Requirement already satisfied: charset-normalizer<3.0,>=2.0 in c:\users\kylea\appdata\roaming\python\python37\site-packages (from aiohttp<=4->s3fs) (2.0.7) Requirement already satisfied: aiosignal>=1.1.2 in c:\programdata\anaconda3\lib\site-packages (from aiohttp<=4->s3fs) (1.2.0) Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in c:\programdata\anaconda3\lib\site-packages (from botocore<1.24.22,>=1.24.21->aiobotocore~=2.2.0->s3fs) (0.9.4) Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in c:\programdata\anaconda3\lib\site-packages (from botocore<1.24.22,>=1.24.21->aiobotocore~=2.2.0->s3fs) (2.8.2) Requirement already satisfied: urllib3<1.27,>=1.25.4 in c:\programdata\anaconda3\lib\site-packages (from botocore<1.24.22,>=1.24.21->aiobotocore~=2.2.0->s3fs) (1.26.7) Requirement already satisfied: idna>=2.0 in c:\programdata\anaconda3\lib\site-packages (from yarl<2.0,>=1.0->aiohttp<=4->s3fs) (2.8) Requirement already satisfied: six>=1.5 in c:\programdata\anaconda3\lib\site-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.24.22,>=1.24.21->aiobotocore~=2.2.0->s3fs) (1.16.0) ###Markdown 1. 
Read the data from s3 bucket ###Code # Read the data from s3 bucket aws_credentials ={"key": "ASIAQ3IZ36HLQQRFMB7V", "secret": "jPND4gV1MGaVmHb5Z5jAQpXknIfblo7GAhCmHFAG", "token": "FwoGZXIvYXdzEPv//////////wEaDMDwXmEHSTeHEsSt4SLCAddLY6d+dwZH+dBe8s34+Irl47EP+bu1P1r9RdF0jAogr1mas6zO6r7OCgML//Wrc27Mx8I9NSBR0mdo+l0g83TLuZJvg0QOl3viZjqKTpzFbRUGAcSCx+LXf1XvsDKxlDqclToeQPDegfQBIKeap1o42FaLsivsTYTboWCiZO2eN9wjzqxNhf/s++3aPVfuHG8+AEyHN+sd/mYMRJqZqev5Nb3dkP91WZNOgMHdZl9p33hyrAuIcb0m/l0cMxpN7djYKJXU5pIGMi3ixpNAz7l/Y3+KF+J9Wns+tR+RlQBiRlT5sGCd2lmhHBIrFe0ypNRGfCm+F84="} df = pd.read_csv("s3://mds-s3-17/output/ml_data_SYD.csv", storage_options=aws_credentials, parse_dates=True) # check data df.head() ###Output _____no_output_____ ###Markdown 2. Drop rows with nans ###Code # check for nans df.isnull().sum() # drop nans df = df.dropna() # final check df.isnull().sum() ###Output _____no_output_____ ###Markdown 3. Split the data ###Code df_train, df_test = train_test_split(df.dropna(), test_size=0.2, random_state=123) df_train.head() ###Output _____no_output_____ ###Markdown 4. Carry out EDA ###Code # EDA 1. check basic stats df_train.describe() # EDA 2. reorganize the df to create a bar chart df_bar =df_train.reset_index(drop=True) df_bar = pd.DataFrame(df.unstack()).reset_index() df_bar = df_bar.drop(columns=['level_1']) df_bar = df_bar.rename(columns = {'level_0':'models', 0:'rain'}) df_bar.head() df_rank = pd.DataFrame(df_bar.groupby(["models"])["rain"].agg(sum)).reset_index() df_rank.head() # pip install altair # create a bar chart that shows total rain fall for each model in descending order import altair as alt alt.Chart(df_rank).mark_bar().encode( x=alt.X('models', sort='-y'), y=alt.Y('rain'), color=alt.Color('rain') ).transform_window( rank='rank(rain)', sort=[alt.SortField('rain', order='descending')] ).properties( width= 300, height=400 ) Image.open("img/rainfall_barchart.png") ###Output _____no_output_____ ###Markdown 5. Train ensemble model ###Code X_train = df_train.drop(columns=["observed_rainfall"]) y_train = df_train["observed_rainfall"] X_test = df_test.drop(columns=["observed_rainfall"]) y_test = df_test["observed_rainfall"] # train ensemble model model = RandomForestRegressor().fit(X_train, y_train) # predict y_pred = model.predict(X_train) # results rmse = mean_squared_error(y_train, y_pred, squared=False) rmse ###Output _____no_output_____ ###Markdown 6. Discussion ###Code models = X_train.columns.to_list() scores = {} # generate scores for each model for model in models: X = pd.DataFrame(X_train[model]) rf = RandomForestRegressor().fit(X, y_train) y_preds = rf.predict(X) scores[model] = mean_squared_error(y_train, y_preds, squared=False) # save the scores in dataframe for comparison df_scores = pd.DataFrame(data = scores.values(), index = scores.keys()) df_scores ###Output _____no_output_____ ###Markdown As you can see from above results, all the individual models are performing worse than the ensemble model's score of 3.122 Part 2: Preparation for deploying model next week ***NOTE: Complete task 4 from the milestone3 before coming here*** We’ve found the best hyperparameter settings with MLlib (from the task 4 from milestone3), here we then use the same hyperparameters to train a scikit-learn model. 
###Code model = RandomForestRegressor(n_estimators=100, max_depth=5, bootstrap=True) model.fit(X_train, y_train) print(f"Train RMSE: {mean_squared_error(y_train, model.predict(X_train), squared=False):.2f}") print(f" Test RMSE: {mean_squared_error(y_test, model.predict(X_test), squared=False):.2f}") # ready to deploy dump(model, "model.joblib") ###Output _____no_output_____ ###Markdown ***Upload model.joblib to s3 under output folder. You choose how you want to upload it (using CLI, SDK, or web console).*** ###Code Image.open("img/525_m3_3.png") ###Output _____no_output_____
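###Markdown A minimal sketch of the upload step described above, using either the AWS CLI or the boto3 SDK. The bucket name below is the one used earlier in this notebook and is only illustrative; credentials are assumed to be available to boto3 via the environment or `~/.aws`.
###Code
# Option 1: AWS CLI (run in a terminal, or prefix with ! in a notebook cell)
#   aws s3 cp model.joblib s3://mds-s3-17/output/model.joblib

# Option 2: boto3 SDK
import boto3

s3 = boto3.client("s3")
s3.upload_file("model.joblib", "mds-s3-17", "output/model.joblib")  # (local file, bucket, key)
###Output _____no_output_____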
notebooks/lucas/00 - pyspark-ml-tutorial-for-beginners.ipynb
###Markdown Predicting House Prices with Apache Spark LINEAR REGRESSIONIn this we'll make use of the [California Housing](http://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.html) data set. Note, of course, that this is actually 'small' data and that using Spark in this context might be overkill; This notebook is for educational purposes only and is meant to give us an idea of how we can use PySpark to build a machine learning model. 1. Understanding the Data SetThe California Housing data set appeared in a 1997 paper titled *Sparse Spatial Autoregressions*, written by Pace, R. Kelley and Ronald Barry and published in the Statistics and Probability Letters journal. The researchers built this data set by using the 1990 California census data.The data contains one row per census block group. A block group is the smallest geographical unit for which the U.S. Census Bureau publishes sample data (a block group typically has a population of 600 to 3,000 people). In this sample a block group on average includes 1425.5 individuals living in a geographically compact area.These spatial data contain 20,640 observations on housing prices with 9 economic variables:Longitude:refers to the angular distance of a geographic place north or south of the earth’s equator for each block groupLatitude :refers to the angular distance of a geographic place east or west of the earth’s equator for each block groupHousing Median Age:is the median age of the people that belong to a block group. Note that the median is the value that lies at the midpoint of a frequency distribution of observed valuesTotal Rooms:is the total number of rooms in the houses per block groupTotal Bedrooms:is the total number of bedrooms in the houses per block groupPopulation:is the number of inhabitants of a block groupHouseholds:refers to units of houses and their occupants per block groupMedian Income:is used to register the median income of people that belong to a block groupMedian House Value:is the dependent variable and refers to the median house value per block groupWhat's more, we also learn that all the block groups have zero entries for the independent and dependent variables have been excluded from the data.The Median house value is the dependent variable and will be assigned the role of the target variable in our ML model. ###Code !pip install pyspark import os import pandas as pd import numpy as np from pyspark import SparkConf, SparkContext from pyspark.sql import SparkSession, SQLContext from pyspark.sql.types import * import pyspark.sql.functions as F from pyspark.sql.functions import udf, col from pyspark.ml.regression import LinearRegression from pyspark.mllib.evaluation import RegressionMetrics from pyspark.ml.tuning import ParamGridBuilder, CrossValidator, CrossValidatorModel from pyspark.ml.feature import VectorAssembler, StandardScaler from pyspark.ml.evaluation import RegressionEvaluator import seaborn as sns import matplotlib.pyplot as plt # Visualization from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" pd.set_option('display.max_columns', 200) pd.set_option('display.max_colwidth', 400) from matplotlib import rcParams sns.set(context='notebook', style='whitegrid', rc={'figure.figsize': (18,4)}) rcParams['figure.figsize'] = 18,4 %matplotlib inline %config InlineBackend.figure_format = 'retina' # setting random seed for notebook reproducability rnd_seed=23 np.random.seed=rnd_seed np.random.set_state=rnd_seed ###Output _____no_output_____ ###Markdown 2. 
Creating the Spark Session ###Code spark = SparkSession.builder.master("local[2]").appName("Linear-Regression-California-Housing").getOrCreate() spark sc = spark.sparkContext sc sqlContext = SQLContext(spark.sparkContext) sqlContext ###Output _____no_output_____ ###Markdown 3. Load The Data From a File Into a Dataframe ###Code HOUSING_DATA = '../../data/raw/cal_housing.data' ###Output _____no_output_____ ###Markdown Specifying the schema when loading data into a DataFrame will give better performance than schema inference. ###Code # define the schema, corresponding to a line in the csv data file. schema = StructType([ StructField("long", FloatType(), nullable=True), StructField("lat", FloatType(), nullable=True), StructField("medage", FloatType(), nullable=True), StructField("totrooms", FloatType(), nullable=True), StructField("totbdrms", FloatType(), nullable=True), StructField("pop", FloatType(), nullable=True), StructField("houshlds", FloatType(), nullable=True), StructField("medinc", FloatType(), nullable=True), StructField("medhv", FloatType(), nullable=True)] ) # Load housing data housing_df = spark.read.csv(path=HOUSING_DATA, schema=schema).cache() # Inspect first five rows housing_df.take(5) # Show first five rows housing_df.show(5) # show the dataframe columns housing_df.columns # show the schema of the dataframe housing_df.printSchema() ###Output root |-- long: float (nullable = true) |-- lat: float (nullable = true) |-- medage: float (nullable = true) |-- totrooms: float (nullable = true) |-- totbdrms: float (nullable = true) |-- pop: float (nullable = true) |-- houshlds: float (nullable = true) |-- medinc: float (nullable = true) |-- medhv: float (nullable = true) ###Markdown 4. Data Exploration ###Code # run a sample selection housing_df.select('pop','totbdrms').show(10) ###Output +------+--------+ | pop|totbdrms| +------+--------+ | 322.0| 129.0| |2401.0| 1106.0| | 496.0| 190.0| | 558.0| 235.0| | 565.0| 280.0| | 413.0| 213.0| |1094.0| 489.0| |1157.0| 687.0| |1206.0| 665.0| |1551.0| 707.0| +------+--------+ only showing top 10 rows ###Markdown 4.1 Distribution of the median age of the people living in the area: ###Code # group by housingmedianage and see the distribution result_df = housing_df.groupBy("medage").count().sort("medage", ascending=True) result_df.show(10) result_df.toPandas().plot.bar(x='medage',figsize=(14, 6)) ###Output _____no_output_____ ###Markdown Most of the residents are either in their youth or they settle here during their senior years. Some data are showing median age < 10 which seems to be out of place. 4.2 Summary Statistics:Spark DataFrames include some built-in functions for statistical processing. The describe() function performs summary statistics calculations on all numeric columns and returns them as a DataFrame. 
###Code (housing_df.describe().select( "summary", F.round("medage", 4).alias("medage"), F.round("totrooms", 4).alias("totrooms"), F.round("totbdrms", 4).alias("totbdrms"), F.round("pop", 4).alias("pop"), F.round("houshlds", 4).alias("houshlds"), F.round("medinc", 4).alias("medinc"), F.round("medhv", 4).alias("medhv")) .show()) ###Output +-------+-------+---------+--------+---------+--------+-------+-----------+ |summary| medage| totrooms|totbdrms| pop|houshlds| medinc| medhv| +-------+-------+---------+--------+---------+--------+-------+-----------+ | count|20640.0| 20640.0| 20640.0| 20640.0| 20640.0|20640.0| 20640.0| | mean|28.6395|2635.7631| 537.898|1425.4767|499.5397| 3.8707|206855.8169| | stddev|12.5856|2181.6153|421.2479|1132.4621|382.3298| 1.8998|115395.6159| | min| 1.0| 2.0| 1.0| 3.0| 1.0| 0.4999| 14999.0| | max| 52.0| 39320.0| 6445.0| 35682.0| 6082.0|15.0001| 500001.0| +-------+-------+---------+--------+---------+--------+-------+-----------+ ###Markdown Look at the minimum and maximum values of all the (numerical) attributes. We see that multiple attributes have a wide range of values: we will need to normalize your dataset. 5. Data PreprocessingWith all this information that we gathered from our small exploratory data analysis, we know enough to preprocess our data to feed it to the model.+ we shouldn't care about missing values; all zero values have been excluded from the data set.+ We should probably standardize our data, as we have seen that the range of minimum and maximum values is quite big.+ There are possibly some additional attributes that we could add, such as a feature that registers the number of bedrooms per room or the rooms per household.+ Our dependent variable is also quite big; To make our life easier, we'll have to adjust the values slightly. 5.1 Preprocessing The Target ValuesFirst, let's start with the `medianHouseValue`, our dependent variable. To facilitate our working with the target values, we will express the house values in units of 100,000. That means that a target such as `452600.000000` should become `4.526`: ###Code # Adjust the values of `medianHouseValue` housing_df = housing_df.withColumn("medhv", col("medhv")/100000) # Show the first 2 lines of `df` housing_df.show(2) ###Output +-------+-----+------+--------+--------+------+--------+------+-----+ | long| lat|medage|totrooms|totbdrms| pop|houshlds|medinc|medhv| +-------+-----+------+--------+--------+------+--------+------+-----+ |-122.23|37.88| 41.0| 880.0| 129.0| 322.0| 126.0|8.3252|4.526| |-122.22|37.86| 21.0| 7099.0| 1106.0|2401.0| 1138.0|8.3014|3.585| +-------+-----+------+--------+--------+------+--------+------+-----+ only showing top 2 rows ###Markdown We can clearly see that the values have been adjusted correctly when we look at the result of the show() method: 6. Feature EngineeringNow that we have adjusted the values in medianHouseValue, we will now add the following columns to the data set:+ Rooms per household which refers to the number of rooms in households per block group;+ Population per household, which basically gives us an indication of how many people live in households per block group; And+ Bedrooms per room which will give us an idea about how many rooms are bedrooms per block group;As we're working with DataFrames, we can best use the `select()` method to select the columns that we're going to be working with, namely `totalRooms`, `households`, and `population`. Additionally, we have to indicate that we're working with columns by adding the `col()` function to our code. 
Otherwise, we won't be able to do element-wise operations like the division that we have in mind for these three variables: ###Code housing_df.columns # Add the new columns to `df` housing_df = (housing_df.withColumn("rmsperhh", F.round(col("totrooms")/col("houshlds"), 2)) .withColumn("popperhh", F.round(col("pop")/col("houshlds"), 2)) .withColumn("bdrmsperrm", F.round(col("totbdrms")/col("totrooms"), 2))) # Inspect the result housing_df.show(5) ###Output +-------+-----+------+--------+--------+------+--------+------+-----+--------+--------+----------+ | long| lat|medage|totrooms|totbdrms| pop|houshlds|medinc|medhv|rmsperhh|popperhh|bdrmsperrm| +-------+-----+------+--------+--------+------+--------+------+-----+--------+--------+----------+ |-122.23|37.88| 41.0| 880.0| 129.0| 322.0| 126.0|8.3252|4.526| 6.98| 2.56| 0.15| |-122.22|37.86| 21.0| 7099.0| 1106.0|2401.0| 1138.0|8.3014|3.585| 6.24| 2.11| 0.16| |-122.24|37.85| 52.0| 1467.0| 190.0| 496.0| 177.0|7.2574|3.521| 8.29| 2.8| 0.13| |-122.25|37.85| 52.0| 1274.0| 235.0| 558.0| 219.0|5.6431|3.413| 5.82| 2.55| 0.18| |-122.25|37.85| 52.0| 1627.0| 280.0| 565.0| 259.0|3.8462|3.422| 6.28| 2.18| 0.17| +-------+-----+------+--------+--------+------+--------+------+-----+--------+--------+----------+ only showing top 5 rows ###Markdown We can see that, for the first row, there are about 6.98 rooms per household, the households in the block group consist of about 2.5 people and the amount of bedrooms is quite low with 0.14: Since we don't want to necessarily standardize our target values, we'll want to make sure to isolate those in our data set. Note also that this is the time to leave out variables that we might not want to consider in our analysis. In this case, let's leave out variables such as longitude, latitude, housingMedianAge and totalRooms.In this case, we will use the `select()` method and passing the column names in the order that is more appropriate. In this case, the target variable medianHouseValue is put first, so that it won't be affected by the standardization. ###Code # Re-order and select columns housing_df = housing_df.select("medhv", "totbdrms", "pop", "houshlds", "medinc", "rmsperhh", "popperhh", "bdrmsperrm") ###Output _____no_output_____ ###Markdown 6.1 Feature ExtractionNow that we have re-ordered the data, we're ready to normalize the data. We will choose the features to be normalized. 
###Code featureCols = ["totbdrms", "pop", "houshlds", "medinc", "rmsperhh", "popperhh", "bdrmsperrm"] ###Output _____no_output_____ ###Markdown **Use a VectorAssembler to put features into a feature vector column:** ###Code # put features into a feature vector column assembler = VectorAssembler(inputCols=featureCols, outputCol="features") assembled_df = assembler.transform(housing_df) assembled_df.show(10, truncate=False) ###Output +-----+--------+------+--------+------+--------+--------+----------+-------------------------------------------------------+ |medhv|totbdrms|pop |houshlds|medinc|rmsperhh|popperhh|bdrmsperrm|features | +-----+--------+------+--------+------+--------+--------+----------+-------------------------------------------------------+ |4.526|129.0 |322.0 |126.0 |8.3252|6.98 |2.56 |0.15 |[129.0,322.0,126.0,8.325200080871582,6.98,2.56,0.15] | |3.585|1106.0 |2401.0|1138.0 |8.3014|6.24 |2.11 |0.16 |[1106.0,2401.0,1138.0,8.301400184631348,6.24,2.11,0.16]| |3.521|190.0 |496.0 |177.0 |7.2574|8.29 |2.8 |0.13 |[190.0,496.0,177.0,7.257400035858154,8.29,2.8,0.13] | |3.413|235.0 |558.0 |219.0 |5.6431|5.82 |2.55 |0.18 |[235.0,558.0,219.0,5.643099784851074,5.82,2.55,0.18] | |3.422|280.0 |565.0 |259.0 |3.8462|6.28 |2.18 |0.17 |[280.0,565.0,259.0,3.8461999893188477,6.28,2.18,0.17] | |2.697|213.0 |413.0 |193.0 |4.0368|4.76 |2.14 |0.23 |[213.0,413.0,193.0,4.036799907684326,4.76,2.14,0.23] | |2.992|489.0 |1094.0|514.0 |3.6591|4.93 |2.13 |0.19 |[489.0,1094.0,514.0,3.65910005569458,4.93,2.13,0.19] | |2.414|687.0 |1157.0|647.0 |3.12 |4.8 |1.79 |0.22 |[687.0,1157.0,647.0,3.119999885559082,4.8,1.79,0.22] | |2.267|665.0 |1206.0|595.0 |2.0804|4.29 |2.03 |0.26 |[665.0,1206.0,595.0,2.080399990081787,4.29,2.03,0.26] | |2.611|707.0 |1551.0|714.0 |3.6912|4.97 |2.17 |0.2 |[707.0,1551.0,714.0,3.691200017929077,4.97,2.17,0.2] | +-----+--------+------+--------+------+--------+--------+----------+-------------------------------------------------------+ only showing top 10 rows ###Markdown All the features have transformed into a Dense Vector. 6.2 StandardizationNext, we can finally scale the data using `StandardScaler`. 
The input columns are the `features`, and the output column with the rescaled that will be included in the scaled_df will be named `"features_scaled"`: ###Code # Initialize the `standardScaler` standardScaler = StandardScaler(inputCol="features", outputCol="features_scaled") # Fit the DataFrame to the scaler scaled_df = standardScaler.fit(assembled_df).transform(assembled_df) # Inspect the result scaled_df.select("features", "features_scaled").show(10, truncate=False) ###Output +-------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ |features |features_scaled | +-------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ |[129.0,322.0,126.0,8.325200080871582,6.98,2.56,0.15] |[0.30623297630686513,0.2843362208866199,0.3295584480852433,4.38209543579743,2.8211223886115664,0.24648542140099877,2.5828740130262697]| |[1106.0,2401.0,1138.0,8.301400184631348,6.24,2.11,0.16]|[2.6255323394991694,2.1201592122632746,2.9764882057222772,4.36956799913841,2.522034914747303,0.20315790592035446,2.755065613894688] | |[190.0,496.0,177.0,7.257400035858154,8.29,2.8,0.13] |[0.451040817816313,0.4379837439744208,0.4629511532626037,3.820042673324032,3.3505880518037077,0.2695934296573424,2.238490811289434] | |[235.0,558.0,219.0,5.643099784851074,5.82,2.55,0.18] |[0.557866274667545,0.4927317119712234,0.5728039692910182,2.970331231769803,2.3522825647162344,0.2455225877236511,3.099448815631524] | |[280.0,565.0,259.0,3.8461999893188477,6.28,2.18,0.17] |[0.664691731518777,0.4989129341644108,0.6774256988418891,2.024505748166202,2.538201805226452,0.20989774166178804,2.9272572147631064] | |[213.0,413.0,193.0,4.036799907684326,4.76,2.14,0.23] |[0.5056404957624983,0.364692109398056,0.5047998450829521,2.124830908428931,1.9238599670187757,0.20604640695239743,3.960406819973614] | |[489.0,1094.0,514.0,3.65910005569458,4.93,2.13,0.19] |[1.1608366311167213,0.9660367256210006,1.344389224728691,1.9260228580003875,1.9925692515551605,0.20508357327504975,3.271640416499942] | |[687.0,1157.0,647.0,3.119999885559082,4.8,1.79,0.22] |[1.6308686412621423,1.021667725359687,1.6922564754853369,1.6422593001231023,1.9400268574979251,0.1723472282452296,3.788215219105196] | |[665.0,1206.0,595.0,2.080399990081787,4.29,2.03,0.26] |[1.5786428623570954,1.0649362807119989,1.5562482270692046,1.0950501144251168,1.7338990038887707,0.19545523650157323,4.476981622578868]| |[707.0,1551.0,714.0,3.691200017929077,4.97,2.17,0.2] |[1.678346622084912,1.3695822316619488,1.8674978724830456,1.9429191603871925,2.00873614203431,0.20893490798444037,3.44383201736836] | +-------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+ only showing top 10 rows ###Markdown 7. Building A Machine Learning Model With Spark MLWith all the preprocessing done, it's finally time to start building our Linear Regression model! Just like always, we first need to split the data into training and test sets. 
Luckily, this is no issue with the `randomSplit()` method:
###Code
# Split the data into train and test sets
train_data, test_data = scaled_df.randomSplit([.8,.2], seed=rnd_seed)
###Output _____no_output_____
###Markdown We pass in a list with two numbers that represent the relative sizes that we want our training and test sets to have, and a seed, which is needed for reproducibility.

**Note** that in Spark ML the argument `regParam` corresponds to the overall regularization strength $\lambda$, while `elasticNetParam` is the mixing parameter $\alpha \in [0, 1]$ that blends the L1 and L2 penalties ($\alpha = 0$ gives pure ridge, $\alpha = 1$ gives pure lasso).
###Code
train_data.columns
###Output _____no_output_____
###Markdown **Create an ElasticNet model:**

ElasticNet is a linear regression model trained with both L1 and L2 priors as regularizer. This combination allows for learning a sparse model where few of the weights are non-zero, like Lasso, while still maintaining the regularization properties of Ridge. We control the convex combination of L1 and L2 with the mixing parameter (called `l1_ratio` in scikit-learn and `elasticNetParam` in Spark ML).

Elastic-net is useful when there are multiple features which are correlated with one another. Lasso is likely to pick one of these at random, while elastic-net is likely to pick both. A practical advantage of trading off between Lasso and Ridge is that it allows Elastic-Net to inherit some of Ridge's stability under rotation.

The objective function to minimize is in this case:

\begin{align}
\min_w \; \frac{1}{2 n_{samples}} \lVert Xw - y \rVert_2^2 + \lambda \left( \alpha \lVert w \rVert_1 + \frac{1 - \alpha}{2} \lVert w \rVert_2^2 \right)
\end{align}

https://scikit-learn.org/stable/modules/linear_model.html#elastic-net
###Code
# Initialize `lr`
lr = (LinearRegression(featuresCol='features_scaled', labelCol="medhv", predictionCol='predmedhv',
                       maxIter=10, regParam=0.3, elasticNetParam=0.8, standardization=False))

# Fit the data to the model
linearModel = lr.fit(train_data)
###Output _____no_output_____
###Markdown 8. Evaluating the Model

With our model in place, we can generate predictions for our test data: use the `transform()` method to predict the labels for our `test_data`. Then, we can use RDD operations to extract the predictions as well as the true labels from the DataFrame. 
8.1 Inspect the Model Co-efficients ###Code # Coefficients for the model linearModel.coefficients featureCols # Intercept for the model linearModel.intercept coeff_df = pd.DataFrame({"Feature": ["Intercept"] + featureCols, "Co-efficients": np.insert(linearModel.coefficients.toArray(), 0, linearModel.intercept)}) coeff_df = coeff_df[["Feature", "Co-efficients"]] coeff_df ###Output _____no_output_____ ###Markdown 8.2 Generating Predictions ###Code # Generate predictions predictions = linearModel.transform(test_data) # Extract the predictions and the "known" correct labels predandlabels = predictions.select("predmedhv", "medhv") predandlabels.show() ###Output +------------------+-----+ | predmedhv|medhv| +------------------+-----+ |1.5977678077735522|0.269| |1.3402962575651638|0.275| |1.7478926681617617|0.283| |1.5026315463850333|0.325| |1.5840068859455108|0.344| |1.4744173855604754|0.379| |1.5274954532293994|0.388| |1.3578228236744827|0.394| |1.6929041021688493| 0.4| | 2.010874171848204| 0.4| |1.3656308740705367| 0.41| |1.4496919091430263|0.421| | 1.380970081002033|0.425| |1.3394379493101451| 0.43| | 1.722973408950696|0.435| |1.5529131147882111|0.439| | 1.323489602290725| 0.44| |1.4030651812673915|0.444| |1.5111871672959283|0.446| |1.5996783060975408| 0.45| +------------------+-----+ only showing top 20 rows ###Markdown 8.3 Inspect the MetricsLooking at predicted values is one thing, but another and better thing is looking at some metrics to get a better idea of how good your model actually is.**Using the `LinearRegressionModel.summary` attribute:**Next, we can also use the `summary` attribute to pull up the `rootMeanSquaredError` and the `r2`. ###Code # Get the RMSE print("RMSE: {0}".format(linearModel.summary.rootMeanSquaredError)) print("MAE: {0}".format(linearModel.summary.meanAbsoluteError)) # Get the R2 print("R2: {0}".format(linearModel.summary.r2)) ###Output R2: 0.42213332730120356 ###Markdown + The RMSE measures how much error there is between two datasets comparing a predicted value and an observed or known value. The smaller an RMSE value, the closer predicted and observed values are.+ The R2 ("R squared") or the coefficient of determination is a measure that shows how close the data are to the fitted regression line. This score will always be between 0 and a 100% (or 0 to 1 in this case), where 0% indicates that the model explains none of the variability of the response data around its mean, and 100% indicates the opposite: it explains all the variability. That means that, in general, the higher the R-squared, the better the model fits our data. 
**Using the RegressionEvaluator from pyspark.ml package:** ###Code evaluator = RegressionEvaluator(predictionCol="predmedhv", labelCol='medhv', metricName='rmse') print("RMSE: {0}".format(evaluator.evaluate(predandlabels))) evaluator = RegressionEvaluator(predictionCol="predmedhv", labelCol='medhv', metricName='mae') print("MAE: {0}".format(evaluator.evaluate(predandlabels))) evaluator = RegressionEvaluator(predictionCol="predmedhv", labelCol='medhv', metricName='r2') print("R2: {0}".format(evaluator.evaluate(predandlabels))) ###Output R2: 0.40877519027090536 ###Markdown **Using the RegressionMetrics from pyspark.mllib package:** ###Code # mllib is old so the methods are available in rdd metrics = RegressionMetrics(predandlabels.rdd) print("RMSE: {0}".format(metrics.rootMeanSquaredError)) print("MAE: {0}".format(metrics.meanAbsoluteError)) print("R2: {0}".format(metrics.r2)) ###Output R2: 0.40877519027090536 ###Markdown There's definitely some improvements needed to our model! If we want to continue with this model, we can play around with the parameters that we passed to your model, the variables that we included in your original DataFrame. ###Code spark.stop() ###Output _____no_output_____
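###Markdown The notebook stops here, but as a sketch of the "play around with the parameters" idea above, the `ParamGridBuilder` and `CrossValidator` classes that were already imported at the top could be used to search over `regParam` and `elasticNetParam`. The grid values below are arbitrary examples, and the cell would have to run before `spark.stop()`.
###Code
# Hedged sketch: cross-validated grid search over the regularization settings of the existing `lr` estimator
param_grid = (ParamGridBuilder()
              .addGrid(lr.regParam, [0.01, 0.1, 0.3])
              .addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])
              .build())

evaluator = RegressionEvaluator(predictionCol='predmedhv', labelCol='medhv', metricName='rmse')

cv = CrossValidator(estimator=lr, estimatorParamMaps=param_grid,
                    evaluator=evaluator, numFolds=3, seed=rnd_seed)
cv_model = cv.fit(train_data)

print("Best cross-validated RMSE: {0}".format(min(cv_model.avgMetrics)))
best_model = cv_model.bestModel  # fitted LinearRegressionModel with the winning parameter combination
###Output _____no_output_____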
student-notebooks/02.01-Pose-Basics.ipynb
###Markdown Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below: ###Code NAME = "" COLLABORATORS = "" ###Output _____no_output_____ ###Markdown --- *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).* Pose BasicsKeywords: pose_from_pdb(), sequence(), cleanATOM, annotated_sequence() In this lab, we will get practice working with the `Pose` class in PyRosetta. We will load in a protein from a PDB files, use the `Pose` class to learn about the geometry of the protein, make changes to this geometry, and visualize the changes easily with `PyMOL` and PyRosetta's `PyMOLMover`. On the corresponding `Pose` lab found on the PyRosetta website, you will find various useful commands to interrogate poses; these may come in handy for the exercises.**PyRosetta Installation:**The following two lines will load in the PyRosetta library and load in database files. If this does not work, please notify the professor or the TA. ###Code # Notebook setup import sys if 'google.colab' in sys.modules: !pip install pyrosettacolabsetup import pyrosettacolabsetup pyrosettacolabsetup.setup() print ("Notebook is set for PyRosetta use in Colab. Have fun!") from pyrosetta import * init() ###Output _____no_output_____ ###Markdown Loading in a PDB File Protein Data Bank (PDB) is a text file format for describing 3D molecular structures and other information. Rosetta can read in PDB files and can output them as well. In addition to PDB, mmTF and mmCIF are a couple other file formats that are used with Rosetta.We will spend some time today looking at the crystal structure for the protein **PafA** (PDB ID: 5tj3) using Pyrosetta. PafA is an alkaline phosphatase, which removes a phosphate group from a phosphate monoester. In this structure, a modified amino acid, phosphothreonine, is used to mimic the substrate in the active site. Let's load in this structure with PyRosetta (make sure that you have the PDB file located in your current directory): `cd google_drive/My\ Drive/student-notebooks/``pose = pose_from_pdb("5tj3.pdb")`Here we are inputting the PDB file using the `pose_from_pdb` method. However, we can also load this structure from the internet with `pose_from_rcsb("5TJ3")`. ###Code # YOUR CODE HERE raise NotImplementedError() ###Output _____no_output_____ ###Markdown What is a Pose? The Pose class includes various types of information that describe a structure. Some of the core components include the Energies, PDBInfo, and Conformation. See the Rosetta3 paper to learn more: https://www.sciencedirect.com/science/article/pii/B9780123812704000196As an example, let's use our pose to look at the sequence of 5TJ3:`pose.sequence()` ###Code # print out the sequence of the pose # YOUR CODE HERE raise NotImplementedError() ###Output _____no_output_____ ###Markdown Sometimes PDB files do not conform to standards and need to be cleaned to be loaded successfully with PyRosetta. One way to make sure the file is loaded successfully is to only include the ATOM lines from the PDB file. 
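###Markdown For example, a minimal way of doing that by hand (the file names here are only placeholders) is to copy just the coordinate records into a new file:
###Code
# Keep only ATOM/TER/END records from a PDB file (illustrative file names)
with open("5tj3.pdb") as src, open("5tj3.atoms_only.pdb", "w") as dst:
    for line in src:
        if line.startswith(("ATOM", "TER", "END")):
            dst.write(line)
###Output _____no_output_____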
Alternatively, you could use the cleanATOM function in pyrosetta.toolbox to achieve the same: ###Code from pyrosetta.toolbox import cleanATOM cleanATOM("inputs/5tj3.pdb") ###Output _____no_output_____ ###Markdown This method will create a cleaned 5tj3.clean.pdb file for you. Lets load this into PyRosetta as well: ###Code pose_clean = pose_from_pdb("inputs/5tj3.clean.pdb") ###Output _____no_output_____ ###Markdown In our case, we could load in the PDB file for 5tj3 without cleaning it. In fact, we've lost some residues when cleaning the PDB file with cleanATOM. What is the difference in the `sequence` of the `pose_clean` now, compared to before? ###Code # print out the sequence of the pose_clean # YOUR CODE HERE raise NotImplementedError() ###Output _____no_output_____ ###Markdown With the function `annotated_sequence` below, we can start to see in more detail what the differences are. Note that non-canonical amino acids and hetatms are spelled out more explicitly now. ###Code pose.annotated_sequence() pose_clean.annotated_sequence() ###Output _____no_output_____ ###Markdown Before you turn this problem in, make sure everything runs as expected. First, **restart the kernel** (in the menubar, select Kernel$\rightarrow$Restart) and then **run all cells** (in the menubar, select Cell$\rightarrow$Run All).Make sure you fill in any place that says `YOUR CODE HERE` or "YOUR ANSWER HERE", as well as your name and collaborators below: ###Code NAME = "" COLLABORATORS = "" ###Output _____no_output_____ ###Markdown --- *This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).* Pose BasicsKeywords: pose_from_pdb(), sequence(), cleanATOM, annotated_sequence() In this lab, we will get practice working with the `Pose` class in PyRosetta. We will load in a protein from a PDB files, use the `Pose` class to learn about the geometry of the protein, make changes to this geometry, and visualize the changes easily with `PyMOL` and PyRosetta's `PyMOLMover`. On the corresponding `Pose` lab found on the PyRosetta website, you will find various useful commands to interrogate poses; these may come in handy for the exercises.**PyRosetta Installation:**The following two lines will load in the PyRosetta library and load in database files. If this does not work, please notify the professor or the TA. ###Code # Notebook setup import sys if 'google.colab' in sys.modules: !pip install pyrosettacolabsetup import pyrosettacolabsetup pyrosettacolabsetup.mount_pyrosetta_install() print ("Notebook is set for PyRosetta use in Colab. Have fun!") from pyrosetta import * init() ###Output _____no_output_____ ###Markdown Loading in a PDB File Protein Data Bank (PDB) is a text file format for describing 3D molecular structures and other information. Rosetta can read in PDB files and can output them as well. In addition to PDB, mmTF and mmCIF are a couple other file formats that are used with Rosetta.We will spend some time today looking at the crystal structure for the protein **PafA** (PDB ID: 5tj3) using Pyrosetta. PafA is an alkaline phosphatase, which removes a phosphate group from a phosphate monoester. In this structure, a modified amino acid, phosphothreonine, is used to mimic the substrate in the active site. 
Let's load in this structure with PyRosetta (make sure that you have the PDB file located in your current directory): `cd google_drive/My\ Drive/student-notebooks/``pose = pose_from_pdb("5tj3.pdb")`Here we are inputting the PDB file using the `pose_from_pdb` method. However, we can also load this structure from the internet with `pose_from_rcsb("5TJ3")`. ###Code # YOUR CODE HERE raise NotImplementedError() ###Output _____no_output_____ ###Markdown What is a Pose? The Pose class includes various types of information that describe a structure. Some of the core components include the Energies, PDBInfo, and Conformation. See the Rosetta3 paper to learn more: https://www.sciencedirect.com/science/article/pii/B9780123812704000196As an example, let's use our pose to look at the sequence of 5TJ3:`pose.sequence()` ###Code # print out the sequence of the pose # YOUR CODE HERE raise NotImplementedError() ###Output _____no_output_____ ###Markdown Sometimes PDB files do not conform to standards and need to be cleaned to be loaded successfully with PyRosetta. One way to make sure the file is loaded successfully is to only include the ATOM lines from the PDB file. Alternatively, you could use the cleanATOM function in pyrosetta.toolbox to achieve the same: ###Code from pyrosetta.toolbox import cleanATOM cleanATOM("inputs/5tj3.pdb") ###Output _____no_output_____ ###Markdown This method will create a cleaned 5tj3.clean.pdb file for you. Lets load this into PyRosetta as well: ###Code pose_clean = pose_from_pdb("inputs/5tj3.clean.pdb") ###Output _____no_output_____ ###Markdown In our case, we could load in the PDB file for 5tj3 without cleaning it. In fact, we've lost some residues when cleaning the PDB file with cleanATOM. What is the difference in the `sequence` of the `pose_clean` now, compared to before? ###Code # print out the sequence of the pose_clean # YOUR CODE HERE raise NotImplementedError() ###Output _____no_output_____ ###Markdown With the function `annotated_sequence` below, we can start to see in more detail what the differences are. Note that non-canonical amino acids and hetatms are spelled out more explicitly now. ###Code pose.annotated_sequence() pose_clean.annotated_sequence() ###Output _____no_output_____
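###Markdown As a quick way to answer the question above about how `pose_clean` differs from `pose` (a sketch, not one of the graded cells), the two sequence strings can be compared directly, for example with `difflib`:
###Code
# Compare the raw and cleaned sequences to see which residues were dropped by cleanATOM
import difflib

raw_seq = pose.sequence()
clean_seq = pose_clean.sequence()
print(len(raw_seq), len(clean_seq))

# residues present in the raw pose but missing from the cleaned one
missing = [d[-1] for d in difflib.ndiff(raw_seq, clean_seq) if d.startswith("- ")]
print("".join(missing))
###Output _____no_output_____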
docs/examples/writing_drivers/Creating-Instrument-Drivers.ipynb
###Markdown Creating QCoDeS instrument drivers ###Code # most of the drivers only need a couple of these... moved all up here for clarity below from time import sleep, time import numpy as np import ctypes # only for DLL-based instrument import qcodes as qc from qcodes import (Instrument, VisaInstrument, ManualParameter, MultiParameter, validators as vals) from qcodes.instrument.channel import InstrumentChannel ###Output Logging hadn't been started. Activating auto-logging. Current session state plus future input saved. Filename : C:\Users\jenielse\.qcodes\logs\command_history.log Mode : append Output logging : True Raw input log : False Timestamping : True State : active Qcodes Logfile : C:\Users\jenielse\.qcodes\logs\200122-8548-qcodes.log ###Markdown Base ClassesThere are 3 available:- `VisaInstrument` - for most instruments that communicate over a text channel (ethernet, GPIB, serial, USB...) that do not have a custom DLL or other driver to manage low-level commands.- `IPInstrument` - a deprecated driver just for ethernet connections. Do not use this; use `VisaInstrument` instead.- `Instrument` - superclass of both `VisaInstrument` and `IPInstrument`, use this if you do not communicate over a text channel, for example: - PCI cards with their own DLLs - Instruments with only manual controls. If possible, please use a `VisaInstrument`, as this allows for the creation of a simulated instrument. (See the [Creating Simulated PyVISA Instruments](Creating-Simulated-PyVISA-Instruments.ipynb) notebook) Parameters and ChannelsBroadly speaking, a QCoDeS instrument driver is nothing but an object that holds a connection handle to the physical instrument and has some sub-objects that represent the state of the physical instrument. These sub-objects are the `Parameters`. Writing a driver basically boils down to adding a ton of `Parameters`. What's a Parameter?A parameter represents a single value of a single feature of an instrument, e.g. the frequency of a function generator, the mode of a multimeter (resistance, current, or voltage), or the input impedance of an oscilloscope channel. Each `Parameter` can have the following attributes: * `name`, the name used internally by QCoDeS, e.g. 'input_impedance' * `instrument`, the instrument this parameter belongs to, if any. * `label`, the label to use for plotting this parameter * `unit`, the physical unit. ALWAYS use SI units if a unit is applicable * `set_cmd`, the command to set the parameter. Either a SCPI string with a single '{}', or a function taking one argument (see examples below) * `get_cmd`, the command to get the parameter. Follows the same scheme as `set_cmd` * `vals`, a validator (from `qcodes.utils.validators`) to reject invalid values before they are sent to the instrument. Since there is no standard for how an instrument responds to an out-of-bound value (e.g. a 10 kHz function generator receiving 12e9 for its frequency), meaning that the user can expect anything from silent failure to the instrument breaking or suddenly outputting random noise, it is MUCH better to catch invalid values in software. Therefore, please provide a validator if at all possible. * `val_mapping`, a dictionary mapping human-readable values like 'High Impedance' to the instrument's internal representation like '372'. Not always needed. If supplied, a validator is automatically constructed. * `max_val_age`: Max time (in seconds) to trust a value stored in cache. 
If the parameter has not been set or measured more recently than this, an additional measurement will be performed in order to update the cached value. If it is ``None``, this behavior is disabled. ``max_val_age`` should not be used for a parameter that does not have a get function. * `get_parser`, a parser of the raw return value. Since all VISA instruments return strings, but users usually want numbers, `int` and `float` are popular `get_parsers` * `docstring` A short string describing the function of the parameter Golden rule: if a `Parameter` is settable, it must always accept its own output as input.In most cases you will probably be adding parameters via the `add_parameter` method on the instrument class as shown in the example below. FunctionsSimilar to parameters QCoDeS instruments implement the concept of functions that can be added to the instrument via `add_function`. They are meant to implement simple actions on the instrument such as resetting it. However, the functions do not add any value over normal python methods in the driver Class and we are planning to eventually remove them from QCoDeS. **We therefore encourage any driver developer to not use function in any new driver**. What's a Channel, then?A `Channel` is a submodule holding `Parameter`s. It sometimes makes sense to group `Parameter`s, for instance when an oscilloscope has four identical input channels. (see Keithley example below) LoggingEvery QCoDeS module should have its own logger that is named with the name of the module. So to create a logger put a line at the top of the module like this:```log = logging.getLogger(__name__)```Use this logger only to log messages that are not originating from an `Instrument` instance. For messages from within an instrument instance use the `log` member of the `Instrument` class, e.g```self.log.info(f"Could not connect at {address}")```This way the instrument name will be prepended to the log message and the log messages can be filtered according to the instrument they originate from. See the example notebook of the logger module for more info ([offline](../logging/logging_example.ipynb),[online](https://nbviewer.jupyter.org/github/QCoDeS/Qcodes/tree/master/docs/examples/logging/logging_example.ipynb)).When creating a nested `Instrument`, like e.g. something like the `InstrumentChannel` class, that has a `_parent` property, make sure that this property gets set before calling the `super().__init__` method, so that the full name of the instrument gets resolved correctly for the logging. VisaInstrument: Simple exampleThe Weinschel 8320 driver is about as basic a driver as you can get. It only defines one parameter, "attenuation". All the comments here are my additions to describe what's happening. ###Code class Weinschel_8320(VisaInstrument): """ QCoDeS driver for the stepped attenuator Weinschel is formerly known as Aeroflex/Weinschel """ # all instrument constructors should accept **kwargs and pass them on to # super().__init__ def __init__(self, name, address, **kwargs): # supplying the terminator means you don't need to remove it from every response super().__init__(name, address, terminator='\r', **kwargs) self.add_parameter('attenuation', unit='dB', # the value you set will be inserted in this command with # regular python string substitution. This instrument wants # an integer zero-padded to 2 digits. For robustness, don't # assume you'll get an integer input though - try to allow # floats (as opposed to {:0=2d}) set_cmd='ATTN ALL {:02.0f}', get_cmd='ATTN? 
1', # setting any attenuation other than 0, 2, ... 60 will error. vals=vals.Enum(*np.arange(0, 60.1, 2).tolist()), # the return value of get() is a string, but we want to # turn it into a (float) number get_parser=float) # it's a good idea to call connect_message at the end of your constructor. # this calls the 'IDN' parameter that the base Instrument class creates for # every instrument (you can override the `get_idn` method if it doesn't work # in the standard VISA form for your instrument) which serves two purposes: # 1) verifies that you are connected to the instrument # 2) gets the ID info so it will be included with metadata snapshots later. self.connect_message() # instantiating and using this instrument (commented out because I can't actually do it!) # # from qcodes.instrument_drivers.weinschel.Weinschel_8320 import Weinschel_8320 # weinschel = Weinschel_8320('w8320_1', 'TCPIP0::172.20.2.212::inst0::INSTR') # weinschel.attenuation(40) ###Output _____no_output_____ ###Markdown VisaInstrument: a more involved exampleThe Keithley 2600 sourcemeter driver uses two channels. The actual driver is quite long, so here we show an abridged version that has:- A class defining a `Channel`. All the `Parameter`s of the `Channel` go here. - A nifty way to look up the model number, allowing it to be a driver for many different Keithley models ###Code class KeithleyChannel(InstrumentChannel): """ Class to hold the two Keithley channels, i.e. SMUA and SMUB. """ def __init__(self, parent: Instrument, name: str, channel: str) -> None: """ Args: parent: The Instrument instance to which the channel is to be attached. name: The 'colloquial' name of the channel channel: The name used by the Keithley, i.e. either 'smua' or 'smub' """ if channel not in ['smua', 'smub']: raise ValueError('channel must be either "smub" or "smua"') super().__init__(parent, name) self.model = self._parent.model vranges = self._parent._vranges iranges = self._parent._iranges self.add_parameter('volt', get_cmd='{}.measure.v()'.format(channel), get_parser=float, set_cmd='{}.source.levelv={}'.format(channel, '{:.12f}'), # note that the set_cmd is either the following format string #'smua.source.levelv={:.12f}' or 'smub.source.levelv={:.12f}' # depending on the value of `channel` label='Voltage', unit='V') self.add_parameter('curr', get_cmd='{}.measure.i()'.format(channel), get_parser=float, set_cmd='{}.source.leveli={}'.format(channel, '{:.12f}'), label='Current', unit='A') self.add_parameter('mode', get_cmd='{}.source.func'.format(channel), get_parser=float, set_cmd='{}.source.func={}'.format(channel, '{:d}'), val_mapping={'current': 0, 'voltage': 1}, docstring='Selects the output source.') self.add_parameter('output', get_cmd='{}.source.output'.format(channel), get_parser=float, set_cmd='{}.source.output={}'.format(channel, '{:d}'), val_mapping={'on': 1, 'off': 0}) self.add_parameter('nplc', label='Number of power line cycles', set_cmd='{}.measure.nplc={}'.format(channel, '{:.4f}'), get_cmd='{}.measure.nplc'.format(channel), get_parser=float, vals=vals.Numbers(0.001, 25)) self.channel = channel class Keithley_2600(VisaInstrument): """ This is the qcodes driver for the Keithley_2600 Source-Meter series, tested with Keithley_2614B """ def __init__(self, name: str, address: str, **kwargs) -> None: """ Args: name: Name to use internally in QCoDeS address: VISA ressource address """ super().__init__(name, address, terminator='\n', **kwargs) model = self.ask('localnode.model') knownmodels = ['2601B', '2602B', '2604B', '2611B', '2612B', 
'2614B', '2635B', '2636B'] if model not in knownmodels: kmstring = ('{}, '*(len(knownmodels)-1)).format(*knownmodels[:-1]) kmstring += 'and {}.'.format(knownmodels[-1]) raise ValueError('Unknown model. Known model are: ' + kmstring) # Add the channel to the instrument for ch in ['a', 'b']: ch_name = 'smu{}'.format(ch) channel = KeithleyChannel(self, ch_name, ch_name) self.add_submodule(ch_name, channel) # display parameter # Parameters NOT specific to a channel still belong on the Instrument object # In this case, the Parameter controls the text on the display self.add_parameter('display_settext', set_cmd=self._display_settext, vals=vals.Strings()) self.connect_message() ###Output _____no_output_____ ###Markdown VisaInstruments: Simulating the instrumentAs mentioned above, drivers subclassing `VisaInstrument` have the nice property that they may be connected to a simulated version of the physical instrument. See the [Creating Simulated PyVISA Instruments](Creating-Simulated-PyVISA-Instruments.ipynb) notebook for more information. If you are writing a `VisaInstrument` driver, please consider spending 20 minutes to also add a simulated instrument and a test. DLL-based instrumentsThe Alazar cards use their own DLL. C interfaces tend to need a lot of boilerplate, so I'm not going to include it all. The key is: use `Instrument` directly, load the DLL, and have parameters interact with it. ###Code class AlazarTech_ATS(Instrument): dll_path = 'C:\\WINDOWS\\System32\\ATSApi' def __init__(self, name, system_id=1, board_id=1, dll_path=None, **kwargs): super().__init__(name, **kwargs) # connect to the DLL self._ATS_dll = ctypes.cdll.LoadLibrary(dll_path or self.dll_path) self._handle = self._ATS_dll.AlazarGetBoardBySystemID(system_id, board_id) if not self._handle: raise Exception('AlazarTech_ATS not found at ' 'system {}, board {}'.format(system_id, board_id)) self.buffer_list = [] # the Alazar driver includes its own parameter class to hold values # until later config is called, and warn if you try to read a value # that hasn't been sent to config. self.add_parameter(name='clock_source', parameter_class=AlazarParameter, label='Clock Source', unit=None, value='INTERNAL_CLOCK', byte_to_value_dict={1: 'INTERNAL_CLOCK', 4: 'SLOW_EXTERNAL_CLOCK', 5: 'EXTERNAL_CLOCK_AC', 7: 'EXTERNAL_CLOCK_10_MHz_REF'}) # etc... ###Output _____no_output_____ ###Markdown It's very typical for DLL based instruments to only be supported on Windows. In such a driver care should be taken to ensure that the driver raises a clear error message if it is initialized on a different platform. This is typically best done by by checking `sys.platform` as below. In this example we are using `ctypes.windll` to interact with the DLL. `windll` is only defined on on Windows.QCoDeS is automatically typechecked with MyPy, this may give some complications for drivers that are not compatible with multiple OSes as there is no supported way to disabling the typecheck on a per platform basis for a specific submodule. Specifically MyPy will correctly notice that `self.dll` does not exist on non Windows platforms unless we add the line `self.dll: Any = None` to the example below. By giving `self.dll` the type `Any` we effectively disable any typecheck related to `self.dll` on non Windows platforms which is exactly what we want. This works because MyPy knows how to interprete the `sys.platform` check and allows `self.dll` to have different types on different OSes. 
###Code class SomeDLLInstrument(Instrument): dll_path = 'C:\\WINDOWS\\System32\\ATSApi' def __init__(self, name, dll_path=None, **kwargs): super().__init__(name, **kwargs) if sys.platform != 'win32': self.dll: Any = None raise OSError("SomeDLLInsrument only works on Windows") else: self.dll = ctypes.windll.LoadLibrary(dll_path) # etc... ###Output _____no_output_____ ###Markdown Manual instrumentsA totally manual instrument (like the ithaco 1211) will contain only `ManualParameter`s. Some instruments may have a mix of manual and standard parameters. Here we also define a new `CurrentParameter` class that uses the ithaco parameters to convert a measured voltage to a current. When subclassing a parameter class (`Parameter`, `MultiParameter`, ...), the functions for setting and getting should be called `get_raw` and `set_raw`, respectively. ###Code class CurrentParameter(MultiParameter): """ Current measurement via an Ithaco preamp and a measured voltage. To be used when you feed a current into the Ithaco, send the Ithaco's output voltage to a lockin or other voltage amplifier, and you have the voltage reading from that amplifier as a qcodes parameter. ``CurrentParameter.get()`` returns ``(voltage_raw, current)`` Args: measured_param (Parameter): a gettable parameter returning the voltage read from the Ithaco output. c_amp_ins (Ithaco_1211): an Ithaco instance where you manually maintain the present settings of the real Ithaco amp. name (str): the name of the current output. Default 'curr'. Also used as the name of the whole parameter. """ def __init__(self, measured_param, c_amp_ins, name='curr'): p_name = measured_param.name p_label = getattr(measured_param, 'label', None) p_unit = getattr(measured_param, 'units', None) super().__init__(name=name, names=(p_name+'_raw', name), shapes=((), ()), labels=(p_label, 'Current'), units=(p_unit, 'A')) self._measured_param = measured_param self._instrument = c_amp_ins def get_raw(self): volt = self._measured_param.get() current = (self._instrument.sens.get() * self._instrument.sens_factor.get()) * volt if self._instrument.invert.get(): current *= -1 value = (volt, current) return value class Ithaco_1211(Instrument): """ This is the qcodes driver for the Ithaco 1211 Current-preamplifier. This is a virtual driver only and will not talk to your instrument. """ def __init__(self, name, **kwargs): super().__init__(name, **kwargs) # ManualParameter has an "initial_value" kwarg, but if you use this # you must be careful to check that it's correct before relying on it. # if you don't set initial_value, it will start out as None. 
self.add_parameter('sens', parameter_class=ManualParameter, initial_value=1e-8, label='Sensitivity', units='A/V', vals=vals.Enum(1e-11, 1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-4, 1e-3)) self.add_parameter('invert', parameter_class=ManualParameter, initial_value=True, label='Inverted output', vals=vals.Bool()) self.add_parameter('sens_factor', parameter_class=ManualParameter, initial_value=1, label='Sensitivity factor', units=None, vals=vals.Enum(0.1, 1, 10)) self.add_parameter('suppression', parameter_class=ManualParameter, initial_value=1e-7, label='Suppression', units='A', vals=vals.Enum(1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-4, 1e-3)) self.add_parameter('risetime', parameter_class=ManualParameter, initial_value=0.3, label='Rise Time', units='msec', vals=vals.Enum(0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000)) def get_idn(self): return {'vendor': 'Ithaco (DL Instruments)', 'model': '1211', 'serial': None, 'firmware': None} ###Output _____no_output_____ ###Markdown Custom Parameter classesWhen you call:```self.add_parameter(name, **kwargs)```you create a `Parameter`. But with the `parameter_class` kwarg you can invoke any class you want:```self.add_parameter(name, parameter_class=OtherClass, **kwargs)```- `Parameter` handles most common instrument settings and measurements. - Accepts get and/or set commands as either strings for the instrument's `ask` and `write` methods, or functions/methods. The set and get commands may also be set to `False` and `None`. `False` corresponds to "no get/set method available" (example: the reading of a voltmeter is not settable, so we set `set_cmd=False`). `None` corresponds to a manually updated parameter (example: an instrument with no remote interface). - Has options for translating between instrument codes and more meaningful data values - Supports software-controlled ramping- Any other parameter class may be used in `add_parameter`, if it accepts `name` and `instrument` as constructor kwargs. Generally these should subclasses of `Parameter`, `ParameterWithSetpoints`, `ArrayParameter`, or `MultiParameter`. `ParameterWithSetpoints` is specifically designed to handle the situations where the instrument returns an array of data with assosiated setpoints. An example of how to use it can be found in the notebook [Simple Example of ParameterWithSetpoints](../Parameters/Simple-Example-of-ParameterWithSetpoints.ipynb)`ArrayParameter` is an older alternative that does the same thing. However, it is significantly less flexible and much harder to use correct but used in a significant number of drivers. **It is not recommended for any new driver.**`MultiParameter` is designed to for the situation where multiple different types of data is captured from the same instrument command. On/Off parametersFrequently, an instrument has parameters which can be expressed in terms of "something is on or off". Moreover, usually it is not easy to translate the lingo of the instrument to something that can have simply the value of `True` or `False` (which are typical in software). 
Even further, it may be difficult to find consensus between users on a convention: is it `on`/`off`, or `ON`/`OFF`, or python `True`/`False`, or `1`/`0`, or else?This case becomes even more complex if the instrument's API (say, corresponding VISA command) uses unexpected values for such a parameter, for example, turning an output "on" corresponds to a VISA command `DEV:CH:BLOCK 0` which means "set blocking of the channel to 0 where 0 has the meaning of the boolean value False, and alltogether this command actually enables the output on this channel".This results in inconsistency among instrument drivers where for some instrument, say, a `display` parameter has 'on'/'off' values for input, while for a different instrument a similar `display` parameter has `'ON'`/`'OFF'` values or `1`/`0`.Note that this particular example of a `display` parameter is trivial because the ambiguity and inconsistency for "this kind" of parameters can be solved by having the name of the parameter be `display_enabled` and the allowed input values to be python `bool` `True`/`False`.Anyway, when defining parameters where the solution does not come trivially, please, consider setting `val_mapping` of a parameter to the output of `create_on_off_val_mapping(on_val=, off_val=)` function from `qcodes.utils.helpers` package. The function takes care of creating a `val_mapping` dictionary that maps given instrument-side values of `on_val` and `off_val` to `True`/`False`, `'ON'`/`'OFF'`, `'on'`/`'off'`, and other commonly used ones. Note that when getting a value of such a parameter, the user will not get `'ON'` or `'off'` or `'oFF'` - instead, `True`/`False` will be returned. Dynamically adding and removing parametersSometimes when conditions change (for example, the mode of operation of the instrument is changed from current to voltage measurement) you want different parameters to be available.To delete existing parameters:```del self.parameters[name_to_delete]```And to add more, do the same thing as you did initially:```self.add_parameter(new_name, **kwargs)``` Handling interruption of measurements A QCoDeS driver should be prepared for interruptions of the measurement triggered by a KeyboardInterrupt from the enduser. If an interrupt happens at an unfortunate time i.e. while communicating with the instrument or writing results of a measurement this may leave the program in an inconsistent state e.g. with a command in the output buffer of a VISA instrument. To protect against this QCoDeS ships with a context manager that intercepts KeyBoardInterrupts and delays them until it is safe to stop the program. By default QCoDeS protects writing to the database and communicating with VISA instruments in this way. However, there may be situations where a driver needs additional protection around a critical piece of code. The following example shows how a critical piece of code can be protected. The reader is encouraged to experiment with this using the `interrupt the kernel` button in this notebook. Note how the first KeyBoardInterrupt triggers a message to the screen and then executes the code within the context manager but not the code outside. Furthermore 2 KeyBoardInterrupts rapidly after each other will trigger an immediate interrupt that does not complete the code within the context manager. The context manager can therefore be wrapped around any piece of code that the end user should not normally be allowed to interrupt. 
###Code from qcodes.utils.delaykeyboardinterrupt import DelayedKeyboardInterrupt import time with DelayedKeyboardInterrupt(): for i in range(10): time.sleep(0.2) print(i) print("Loop completed") print("Executing code after context manager") ###Output 0 1 2 3 4 5 6 7 8 9 Loop completed Executing code after context manager ###Markdown Creating QCoDeS instrument drivers ###Code # most of the drivers only need a couple of these... moved all up here for clarity below from time import sleep, time import numpy as np import ctypes # only for DLL-based instrument import qcodes as qc from qcodes import (Instrument, VisaInstrument, ManualParameter, MultiParameter, validators as vals) from qcodes.instrument.channel import InstrumentChannel ###Output _____no_output_____ ###Markdown Base ClassesThere are 3 available:- `VisaInstrument` - for most instruments that communicate over a text channel (ethernet, GPIB, serial, USB...) that do not have a custom DLL or other driver to manage low-level commands.- `IPInstrument` - a deprecated driver just for ethernet connections. Do not use this; use `VisaInstrument` instead.- `Instrument` - superclass of both `VisaInstrument` and `IPInstrument`, use this if you do not communicate over a text channel, for example: - PCI cards with their own DLLs - Instruments with only manual controls. If possible, please use a `VisaInstrument`, as this allows for the creation of a simulated instrument. (See the [Creating Simulated PyVISA Instruments](Creating-Simulated-PyVISA-Instruments.ipynb) notebook) Parameters and ChannelsBroadly speaking, a QCoDeS instrument driver is nothing but an object that holds a connection handle to the physical instrument and has some sub-objects that represent the state of the physical instrument. These sub-objects are the `Parameters`. Writing a driver basically boils down to adding a ton of `Parameters`. What's a Parameter?A parameter represents a single value of a single feature of an instrument, e.g. the frequency of a function generator, the mode of a multimeter (resistance, current, or voltage), or the input impedance of an oscilloscope channel. Each `Parameter` can have the following attributes: * `name`, the name used internally by QCoDeS, e.g. 'input_impedance' * `label`, the label to use for plotting this parameter * `unit`, the physical unit. ALWAYS use SI units if a unit is applicable * `set_cmd`, the command to set the parameter. Either a SCPI string with a single '{}', or a function taking one argument (see examples below) * `get_cmd`, the command to get the parameter. Follows the same scheme as `set_cmd` * `vals`, a validator (from `qcodes.utils.validators`) to reject invalid values before they are sent to the instrument. Since there is no standard for how an instrument responds to an out-of-bound value (e.g. a 10 kHz function generator receiving 12e9 for its frequency), meaning that the user can expect anything from silent failure to the instrument breaking or suddenly outputting random noise, it is MUCH better to catch invalid values in software. Therefore, please provide a validator if at all possible. * `val_mapping`, a dictionary mapping human-readable values like 'High Impedance' to the instrument's internal representation like '372'. Not always needed. If supplied, a validator is automatically constructed. * `get_parser`, a parser of the raw return value. 
Since all VISA instruments return strings, but users usually want numbers, `int` and `float` are popular `get_parsers` * `docstring` A short string describing the function of the parameter Golden rule: if a `Parameter` is settable, it must always accept its own output as input.In most cases you will probably be adding parameters via the `add_parameter` method on the instrument class as shown in the example below. FunctionsSimilar to parameters QCoDeS instruments implement the concept of functions that can be added to the instrument via `add_function`. They are meant to implement simple actions on the instrument such as resetting it. However, the functions do not add any value over normal python methods in the driver Class and we are planning to eventually remove them from QCoDeS. **We therefore encourage any driver developer to not use function in any new driver**. What's a Channel, then?A `Channel` is a submodule holding `Parameter`s. It sometimes makes sense to group `Parameter`s, for instance when an oscilloscope has four identical input channels. (see Keithley example below) LoggingEvery QCoDeS module should have its own logger that is named with the name of the module. So to create a logger put a line at the top of the module like this:```log = logging.getLogger(__name__)```Use this logger only to log messages that are not originating from an `Instrument` instance. For messages from within an instrument instance use the `log` member of the `Instrument` class, e.g```self.log.info(f"Could not connect at {address}")```This way the instrument name will be prepended to the log message and the log messages can be filtered according to the instrument they originate from. See the example notebook of the logger module for more info ([offline](../logging/logging_example.ipynb),[online](https://nbviewer.jupyter.org/github/QCoDeS/Qcodes/tree/master/docs/examples/logging/logging_example.ipynb)).When creating a nested `Instrument`, like e.g. something like the `InstrumentChannel` class, that has a `_parent` property, make sure that this property gets set before calling the `super().__init__` method, so that the full name of the instrument gets resolved correctly for the logging. VisaInstrument: Simple exampleThe Weinschel 8320 driver is about as basic a driver as you can get. It only defines one parameter, "attenuation". All the comments here are my additions to describe what's happening. ###Code class Weinschel_8320(VisaInstrument): """ QCoDeS driver for the stepped attenuator Weinschel is formerly known as Aeroflex/Weinschel """ # all instrument constructors should accept **kwargs and pass them on to # super().__init__ def __init__(self, name, address, **kwargs): # supplying the terminator means you don't need to remove it from every response super().__init__(name, address, terminator='\r', **kwargs) self.add_parameter('attenuation', unit='dB', # the value you set will be inserted in this command with # regular python string substitution. This instrument wants # an integer zero-padded to 2 digits. For robustness, don't # assume you'll get an integer input though - try to allow # floats (as opposed to {:0=2d}) set_cmd='ATTN ALL {:02.0f}', get_cmd='ATTN? 1', # setting any attenuation other than 0, 2, ... 60 will error. vals=vals.Enum(*np.arange(0, 60.1, 2).tolist()), # the return value of get() is a string, but we want to # turn it into a (float) number get_parser=float) # it's a good idea to call connect_message at the end of your constructor. 
# this calls the 'IDN' parameter that the base Instrument class creates for # every instrument (you can override the `get_idn` method if it doesn't work # in the standard VISA form for your instrument) which serves two purposes: # 1) verifies that you are connected to the instrument # 2) gets the ID info so it will be included with metadata snapshots later. self.connect_message() # instantiating and using this instrument (commented out because I can't actually do it!) # # from qcodes.instrument_drivers.weinschel.Weinschel_8320 import Weinschel_8320 # weinschel = Weinschel_8320('w8320_1', 'TCPIP0::172.20.2.212::inst0::INSTR') # weinschel.attenuation(40) ###Output _____no_output_____ ###Markdown VisaInstrument: a more involved exampleThe Keithley 2600 sourcemeter driver uses two channels. The actual driver is quite long, so here we show an abridged version that has:- A class defining a `Channel`. All the `Parameter`s of the `Channel` go here. - A nifty way to look up the model number, allowing it to be a driver for many different Keithley models ###Code class KeithleyChannel(InstrumentChannel): """ Class to hold the two Keithley channels, i.e. SMUA and SMUB. """ def __init__(self, parent: Instrument, name: str, channel: str) -> None: """ Args: parent: The Instrument instance to which the channel is to be attached. name: The 'colloquial' name of the channel channel: The name used by the Keithley, i.e. either 'smua' or 'smub' """ if channel not in ['smua', 'smub']: raise ValueError('channel must be either "smub" or "smua"') super().__init__(parent, name) self.model = self._parent.model vranges = self._parent._vranges iranges = self._parent._iranges self.add_parameter('volt', get_cmd='{}.measure.v()'.format(channel), get_parser=float, set_cmd='{}.source.levelv={}'.format(channel, '{:.12f}'), # note that the set_cmd is either the following format string #'smua.source.levelv={:.12f}' or 'smub.source.levelv={:.12f}' # depending on the value of `channel` label='Voltage', unit='V') self.add_parameter('curr', get_cmd='{}.measure.i()'.format(channel), get_parser=float, set_cmd='{}.source.leveli={}'.format(channel, '{:.12f}'), label='Current', unit='A') self.add_parameter('mode', get_cmd='{}.source.func'.format(channel), get_parser=float, set_cmd='{}.source.func={}'.format(channel, '{:d}'), val_mapping={'current': 0, 'voltage': 1}, docstring='Selects the output source.') self.add_parameter('output', get_cmd='{}.source.output'.format(channel), get_parser=float, set_cmd='{}.source.output={}'.format(channel, '{:d}'), val_mapping={'on': 1, 'off': 0}) self.add_parameter('nplc', label='Number of power line cycles', set_cmd='{}.measure.nplc={}'.format(channel, '{:.4f}'), get_cmd='{}.measure.nplc'.format(channel), get_parser=float, vals=vals.Numbers(0.001, 25)) self.channel = channel class Keithley_2600(VisaInstrument): """ This is the qcodes driver for the Keithley_2600 Source-Meter series, tested with Keithley_2614B """ def __init__(self, name: str, address: str, **kwargs) -> None: """ Args: name: Name to use internally in QCoDeS address: VISA ressource address """ super().__init__(name, address, terminator='\n', **kwargs) model = self.ask('localnode.model') knownmodels = ['2601B', '2602B', '2604B', '2611B', '2612B', '2614B', '2635B', '2636B'] if model not in knownmodels: kmstring = ('{}, '*(len(knownmodels)-1)).format(*knownmodels[:-1]) kmstring += 'and {}.'.format(knownmodels[-1]) raise ValueError('Unknown model. 
Known model are: ' + kmstring) # Add the channel to the instrument for ch in ['a', 'b']: ch_name = 'smu{}'.format(ch) channel = KeithleyChannel(self, ch_name, ch_name) self.add_submodule(ch_name, channel) # display parameter # Parameters NOT specific to a channel still belong on the Instrument object # In this case, the Parameter controls the text on the display self.add_parameter('display_settext', set_cmd=self._display_settext, vals=vals.Strings()) self.connect_message() ###Output _____no_output_____ ###Markdown VisaInstruments: Simulating the instrumentAs mentioned above, drivers subclassing `VisaInstrument` have the nice property that they may be connected to a simulated version of the physical instrument. See the [Creating Simulated PyVISA Instruments](Creating-Simulated-PyVISA-Instruments.ipynb) notebook for more information. If you are writing a `VisaInstrument` driver, please consider spending 20 minutes to also add a simulated instrument and a test. DLL-based instrumentsThe Alazar cards use their own DLL. C interfaces tend to need a lot of boilerplate, so I'm not going to include it all. The key is: use `Instrument` directly, load the DLL, and have parameters interact with it. ###Code class AlazarTech_ATS(Instrument): dll_path = 'C:\\WINDOWS\\System32\\ATSApi' def __init__(self, name, system_id=1, board_id=1, dll_path=None, **kwargs): super().__init__(name, **kwargs) # connect to the DLL self._ATS_dll = ctypes.cdll.LoadLibrary(dll_path or self.dll_path) self._handle = self._ATS_dll.AlazarGetBoardBySystemID(system_id, board_id) if not self._handle: raise Exception('AlazarTech_ATS not found at ' 'system {}, board {}'.format(system_id, board_id)) self.buffer_list = [] # the Alazar driver includes its own parameter class to hold values # until later config is called, and warn if you try to read a value # that hasn't been sent to config. self.add_parameter(name='clock_source', parameter_class=AlazarParameter, label='Clock Source', unit=None, value='INTERNAL_CLOCK', byte_to_value_dict={1: 'INTERNAL_CLOCK', 4: 'SLOW_EXTERNAL_CLOCK', 5: 'EXTERNAL_CLOCK_AC', 7: 'EXTERNAL_CLOCK_10_MHz_REF'}) # etc... ###Output _____no_output_____ ###Markdown Manual instrumentsA totally manual instrument (like the ithaco 1211) will contain only `ManualParameter`s. Some instruments may have a mix of manual and standard parameters. Here we also define a new `CurrentParameter` class that uses the ithaco parameters to convert a measured voltage to a current. When subclassing a parameter class (`Parameter`, `MultiParameter`, ...), the functions for setting and getting should be called `get_raw` and `set_raw`, respectively. ###Code class CurrentParameter(MultiParameter): """ Current measurement via an Ithaco preamp and a measured voltage. To be used when you feed a current into the Ithaco, send the Ithaco's output voltage to a lockin or other voltage amplifier, and you have the voltage reading from that amplifier as a qcodes parameter. ``CurrentParameter.get()`` returns ``(voltage_raw, current)`` Args: measured_param (Parameter): a gettable parameter returning the voltage read from the Ithaco output. c_amp_ins (Ithaco_1211): an Ithaco instance where you manually maintain the present settings of the real Ithaco amp. name (str): the name of the current output. Default 'curr'. Also used as the name of the whole parameter. 
""" def __init__(self, measured_param, c_amp_ins, name='curr'): p_name = measured_param.name p_label = getattr(measured_param, 'label', None) p_unit = getattr(measured_param, 'units', None) super().__init__(name=name, names=(p_name+'_raw', name), shapes=((), ()), labels=(p_label, 'Current'), units=(p_unit, 'A')) self._measured_param = measured_param self._instrument = c_amp_ins def get_raw(self): volt = self._measured_param.get() current = (self._instrument.sens.get() * self._instrument.sens_factor.get()) * volt if self._instrument.invert.get(): current *= -1 value = (volt, current) self._save_val(value) return value class Ithaco_1211(Instrument): """ This is the qcodes driver for the Ithaco 1211 Current-preamplifier. This is a virtual driver only and will not talk to your instrument. """ def __init__(self, name, **kwargs): super().__init__(name, **kwargs) # ManualParameter has an "initial_value" kwarg, but if you use this # you must be careful to check that it's correct before relying on it. # if you don't set initial_value, it will start out as None. self.add_parameter('sens', parameter_class=ManualParameter, initial_value=1e-8, label='Sensitivity', units='A/V', vals=vals.Enum(1e-11, 1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-4, 1e-3)) self.add_parameter('invert', parameter_class=ManualParameter, initial_value=True, label='Inverted output', vals=vals.Bool()) self.add_parameter('sens_factor', parameter_class=ManualParameter, initial_value=1, label='Sensitivity factor', units=None, vals=vals.Enum(0.1, 1, 10)) self.add_parameter('suppression', parameter_class=ManualParameter, initial_value=1e-7, label='Suppression', units='A', vals=vals.Enum(1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-4, 1e-3)) self.add_parameter('risetime', parameter_class=ManualParameter, initial_value=0.3, label='Rise Time', units='msec', vals=vals.Enum(0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000)) def get_idn(self): return {'vendor': 'Ithaco (DL Instruments)', 'model': '1211', 'serial': None, 'firmware': None} ###Output _____no_output_____ ###Markdown Creating QCoDeS instrument drivers ###Code # most of the drivers only need a couple of these... moved all up here for clarity below from time import sleep, time import numpy as np import ctypes # only for DLL-based instrument import qcodes as qc from qcodes.instrument import ( Instrument, VisaInstrument, ManualParameter, MultiParameter, InstrumentChannel, InstrumentModule, ) from qcodes.utils import validators as vals ###Output _____no_output_____ ###Markdown Base ClassesThere are 3 available:- `VisaInstrument` - for most instruments that communicate over a text channel (ethernet, GPIB, serial, USB...) that do not have a custom DLL or other driver to manage low-level commands.- `IPInstrument` - a deprecated driver just for ethernet connections. Do not use this; use `VisaInstrument` instead.- `Instrument` - superclass of both `VisaInstrument` and `IPInstrument`, use this if you do not communicate over a text channel, for example: - PCI cards with their own DLLs - Instruments with only manual controls. If possible, please use a `VisaInstrument`, as this allows for the creation of a simulated instrument. (See the [Creating Simulated PyVISA Instruments](Creating-Simulated-PyVISA-Instruments.ipynb) notebook) Parameters and ChannelsBroadly speaking, a QCoDeS instrument driver is nothing but an object that holds a connection handle to the physical instrument and has some sub-objects that represent the state of the physical instrument. 
These sub-objects are the `Parameters`. Writing a driver basically boils down to adding a ton of `Parameters`. What's a Parameter? A parameter represents a single value of a single feature of an instrument, e.g. the frequency of a function generator, the mode of a multimeter (resistance, current, or voltage), or the input impedance of an oscilloscope channel. Each `Parameter` can have the following attributes: * `name`, the name used internally by QCoDeS, e.g. 'input_impedance' * `instrument`, the instrument this parameter belongs to, if any. * `label`, the label to use for plotting this parameter * `unit`, the physical unit. ALWAYS use SI units if a unit is applicable * `set_cmd`, the command to set the parameter. Either a SCPI string with a single '{}', or a function taking one argument (see examples below) * `get_cmd`, the command to get the parameter. Follows the same scheme as `set_cmd` * `vals`, a validator (from `qcodes.utils.validators`) to reject invalid values before they are sent to the instrument. Since there is no standard for how an instrument responds to an out-of-bound value (e.g. a 10 kHz function generator receiving 12e9 for its frequency), meaning that the user can expect anything from silent failure to the instrument breaking or suddenly outputting random noise, it is MUCH better to catch invalid values in software. Therefore, please provide a validator if at all possible. * `val_mapping`, a dictionary mapping human-readable values like 'High Impedance' to the instrument's internal representation like '372'. Not always needed. If supplied, a validator is automatically constructed. * `max_val_age`: Max time (in seconds) to trust a value stored in cache. If the parameter has not been set or measured more recently than this, an additional measurement will be performed in order to update the cached value. If it is ``None``, this behavior is disabled. ``max_val_age`` should not be used for a parameter that does not have a get function. * `get_parser`, a parser of the raw return value. Since all VISA instruments return strings, but users usually want numbers, `int` and `float` are popular `get_parsers` * `docstring` A short string describing the function of the parameter Golden rule: if a `Parameter` is settable, it must always accept its own output as input. There are two different ways of adding parameters to instruments. They are almost equivalent but come with some trade-offs. We will show both below. You may either declare the parameter as an attribute directly on the instrument or add it via the `add_parameter` method on the instrument class. Declaring a parameter as an attribute directly on the instrument enables Sphinx, IDEs such as VSCode and static tools such as Mypy to work more fluently with the parameter than if it is created via `add_parameter`; however, you must take care to remember to pass `instrument=self` to the parameter such that the parameter will know which instrument it belongs to. `Instrument.add_parameter` is better suited for when you want to dynamically or programmatically add a parameter to an instrument. For historical reasons most instruments currently use `add_parameter`. Functions Similar to parameters, QCoDeS instruments implement the concept of functions that can be added to the instrument via `add_function`. They are meant to implement simple actions on the instrument such as resetting it. However, the functions do not add any value over normal python methods in the driver class and we are planning to eventually remove them from QCoDeS. 
**We therefore encourage any driver developer to not use function in any new driver**. What's an InstrumentModule, then?An `InstrumentModule` is a submodule of the instrument holding `Parameter`s. It sometimes makes sense to group `Parameter`s, for instance when an oscilloscope has four identical input channels (see Keithley example below)or when it makes sense to group a particular set of parameters into their own module (such as a trigger module containing trigger related settings) `InstrumentChannel` is a subclass of `InstrumentModule` which behaves identically to `InstrumentModule` you should chose either one depending on if you are implementing a module or a channel. As a rule of thumb you should use `InstrumentChannel` for something that the instrument has more than one of. LoggingEvery QCoDeS module should have its own logger that is named with the name of the module. So to create a logger put a line at the top of the module like this:```log = logging.getLogger(__name__)```Use this logger only to log messages that are not originating from an `Instrument` instance. For messages from within an instrument instance use the `log` member of the `Instrument` class, e.g```self.log.info(f"Could not connect at {address}")```This way the instrument name will be prepended to the log message and the log messages can be filtered according to the instrument they originate from. See the example notebook of the logger module for more info ([offline](../logging/logging_example.ipynb),[online](https://nbviewer.jupyter.org/github/QCoDeS/Qcodes/tree/master/docs/examples/logging/logging_example.ipynb)).When creating a nested `Instrument`, like e.g. something like the `InstrumentChannel` class, that has a `_parent` property, make sure that this property gets set before calling the `super().__init__` method, so that the full name of the instrument gets resolved correctly for the logging. VisaInstrument: Simple exampleThe Weinschel 8320 driver is about as basic a driver as you can get. It only defines one parameter, "attenuation". All the comments here are my additions to describe what's happening. ###Code class Weinschel_8320(VisaInstrument): """ QCoDeS driver for the stepped attenuator Weinschel is formerly known as Aeroflex/Weinschel """ # all instrument constructors should accept **kwargs and pass them on to # super().__init__ def __init__(self, name, address, **kwargs): # supplying the terminator means you don't need to remove it from every response super().__init__(name, address, terminator="\r", **kwargs) self.attenuation = Parameter( "attenuation", unit="dB", # the value you set will be inserted in this command with # regular python string substitution. This instrument wants # an integer zero-padded to 2 digits. For robustness, don't # assume you'll get an integer input though - try to allow # floats (as opposed to {:0=2d}) set_cmd="ATTN ALL {:02.0f}", get_cmd="ATTN? 1", # setting any attenuation other than 0, 2, ... 60 will error. vals=vals.Enum(*np.arange(0, 60.1, 2).tolist()), # the return value of get() is a string, but we want to # turn it into a (float) number get_parser=float, instrument=self, ) """Control the attenuation""" # The docstring below the Parameter declaration makes Sphinx document the attribute and it is therefore # possible to see from the documentation that the instrument has this parameter. It is strongly encouraged to # add a short docstring like this. # it's a good idea to call connect_message at the end of your constructor. 
# this calls the 'IDN' parameter that the base Instrument class creates for # every instrument (you can override the `get_idn` method if it doesn't work # in the standard VISA form for your instrument) which serves two purposes: # 1) verifies that you are connected to the instrument # 2) gets the ID info so it will be included with metadata snapshots later. self.connect_message() # instantiating and using this instrument (commented out because I can't actually do it!) # # from qcodes.instrument_drivers.weinschel.Weinschel_8320 import Weinschel_8320 # weinschel = Weinschel_8320('w8320_1', 'TCPIP0::172.20.2.212::inst0::INSTR') # weinschel.attenuation(40) ###Output _____no_output_____ ###Markdown VisaInstrument: a more involved exampleThe Keithley 2600 sourcemeter driver uses two channels. The actual driver is quite long, so here we show an abridged version that has:- A class defining a `Channel`. All the `Parameter`s of the `Channel` go here. - A nifty way to look up the model number, allowing it to be a driver for many different Keithley models ###Code class KeithleyChannel(InstrumentChannel): """ Class to hold the two Keithley channels, i.e. SMUA and SMUB. """ def __init__(self, parent: Instrument, name: str, channel: str) -> None: """ Args: parent: The Instrument instance to which the channel is to be attached. name: The 'colloquial' name of the channel channel: The name used by the Keithley, i.e. either 'smua' or 'smub' """ if channel not in ["smua", "smub"]: raise ValueError('channel must be either "smub" or "smua"') super().__init__(parent, name) self.model = self._parent.model vranges = self._parent._vranges iranges = self._parent._iranges self.volt = Parameter( "volt", get_cmd=f"{channel}.measure.v()", get_parser=float, set_cmd=f"{channel}.source.levelv={{:.12f}}", # note that the set_cmd is either the following format string #'smua.source.levelv={:.12f}' or 'smub.source.levelv={:.12f}' # depending on the value of `channel` label="Voltage", unit="V", instrument=self, ) self.curr = Parameter( "curr", get_cmd=f"{channel}.measure.i()", get_parser=float, set_cmd=f"{channel}.source.leveli={{:.12f}}", label="Current", unit="A", instrument=self, ) self.mode = Parameter( "mode", get_cmd=f"{channel}.source.func", get_parser=float, set_cmd=f"{channel}.source.func={{:d}}", val_mapping={"current": 0, "voltage": 1}, docstring="Selects the output source.", instrument=self, ) self.output = Parameter( "output", get_cmd=f"{channel}.source.output", get_parser=float, set_cmd=f"{channel}.source.output={{:d}}", val_mapping={"on": 1, "off": 0}, instrument=self, ) self.nplc = Parameter( "nplc", label="Number of power line cycles", set_cmd=f"{channel}.measure.nplc={{:.4f}}", get_cmd=f"{channel}.measure.nplc", get_parser=float, vals=vals.Numbers(0.001, 25), instrument=self, ) self.channel = channel class Keithley_2600(VisaInstrument): """ This is the qcodes driver for the Keithley_2600 Source-Meter series, tested with Keithley_2614B """ def __init__(self, name: str, address: str, **kwargs) -> None: """ Args: name: Name to use internally in QCoDeS address: VISA ressource address """ super().__init__(name, address, terminator="\n", **kwargs) model = self.ask("localnode.model") knownmodels = [ "2601B", "2602B", "2604B", "2611B", "2612B", "2614B", "2635B", "2636B", ] if model not in knownmodels: kmstring = ("{}, " * (len(knownmodels) - 1)).format(*knownmodels[:-1]) kmstring += "and {}.".format(knownmodels[-1]) raise ValueError("Unknown model. 
Known model are: " + kmstring) # Add the channel to the instrument for ch in ["a", "b"]: ch_name = f"smu{ch}" channel = KeithleyChannel(self, ch_name, ch_name) self.add_submodule(ch_name, channel) # display parameter # Parameters NOT specific to a channel still belong on the Instrument object # In this case, the Parameter controls the text on the display self.display_settext = Parameter( "display_settext", set_cmd=self._display_settext, vals=vals.Strings(), instrument=self, ) self.connect_message() ###Output _____no_output_____ ###Markdown VisaInstruments: Simulating the instrumentAs mentioned above, drivers subclassing `VisaInstrument` have the nice property that they may be connected to a simulated version of the physical instrument. See the [Creating Simulated PyVISA Instruments](Creating-Simulated-PyVISA-Instruments.ipynb) notebook for more information. If you are writing a `VisaInstrument` driver, please consider spending 20 minutes to also add a simulated instrument and a test. DLL-based instrumentsThe Alazar cards use their own DLL. C interfaces tend to need a lot of boilerplate, so I'm not going to include it all. The key is: use `Instrument` directly, load the DLL, and have parameters interact with it. ###Code class AlazarTech_ATS(Instrument): dll_path = "C:\\WINDOWS\\System32\\ATSApi" def __init__(self, name, system_id=1, board_id=1, dll_path=None, **kwargs): super().__init__(name, **kwargs) # connect to the DLL self._ATS_dll = ctypes.cdll.LoadLibrary(dll_path or self.dll_path) self._handle = self._ATS_dll.AlazarGetBoardBySystemID(system_id, board_id) if not self._handle: raise Exception( f"AlazarTech_ATS not found at " f"system {system_id}, board {board_id}" ) self.buffer_list = [] # the Alazar driver includes its own parameter class to hold values # until later config is called, and warn if you try to read a value # that hasn't been sent to config. self.add_parameter( name="clock_source", parameter_class=AlazarParameter, label="Clock Source", unit=None, value="INTERNAL_CLOCK", byte_to_value_dict={ 1: "INTERNAL_CLOCK", 4: "SLOW_EXTERNAL_CLOCK", 5: "EXTERNAL_CLOCK_AC", 7: "EXTERNAL_CLOCK_10MHz_REF", }, ) # etc... ###Output _____no_output_____ ###Markdown It's very typical for DLL based instruments to only be supported on Windows. In such a driver care should be taken to ensure that the driver raises a clear error message if it is initialized on a different platform. This is typically best done by by checking `sys.platform` as below. In this example we are using `ctypes.windll` to interact with the DLL. `windll` is only defined on on Windows.QCoDeS is automatically typechecked with MyPy, this may give some complications for drivers that are not compatible with multiple OSes as there is no supported way to disabling the typecheck on a per platform basis for a specific submodule. Specifically MyPy will correctly notice that `self.dll` does not exist on non Windows platforms unless we add the line `self.dll: Any = None` to the example below. By giving `self.dll` the type `Any` we effectively disable any typecheck related to `self.dll` on non Windows platforms which is exactly what we want. This works because MyPy knows how to interprete the `sys.platform` check and allows `self.dll` to have different types on different OSes. 
###Code import sys from typing import Any class SomeDLLInstrument(Instrument): dll_path = "C:\\WINDOWS\\System32\\ATSApi" def __init__(self, name, dll_path=None, **kwargs): super().__init__(name, **kwargs) if sys.platform != "win32": self.dll: Any = None raise OSError("SomeDLLInstrument only works on Windows") else: self.dll = ctypes.windll.LoadLibrary(dll_path) # etc... ###Output _____no_output_____ ###Markdown Manual instruments A totally manual instrument (like the ithaco 1211) will contain only `ManualParameter`s. Some instruments may have a mix of manual and standard parameters. Here we also define a new `CurrentParameter` class that uses the ithaco parameters to convert a measured voltage to a current. When subclassing a parameter class (`Parameter`, `MultiParameter`, ...), the functions for setting and getting should be called `get_raw` and `set_raw`, respectively. ###Code class CurrentParameter(MultiParameter): """ Current measurement via an Ithaco preamp and a measured voltage. To be used when you feed a current into the Ithaco, send the Ithaco's output voltage to a lockin or other voltage amplifier, and you have the voltage reading from that amplifier as a qcodes parameter. ``CurrentParameter.get()`` returns ``(voltage_raw, current)`` Args: measured_param (Parameter): a gettable parameter returning the voltage read from the Ithaco output. c_amp_ins (Ithaco_1211): an Ithaco instance where you manually maintain the present settings of the real Ithaco amp. name (str): the name of the current output. Default 'curr'. Also used as the name of the whole parameter. """ def __init__(self, measured_param, c_amp_ins, name="curr", **kwargs): p_name = measured_param.name p_label = getattr(measured_param, "label", None) p_unit = getattr(measured_param, "units", None) super().__init__( name=name, names=(p_name + "_raw", name), shapes=((), ()), labels=(p_label, "Current"), units=(p_unit, "A"), instrument=c_amp_ins, **kwargs, ) self._measured_param = measured_param def get_raw(self): volt = self._measured_param.get() current = ( self.instrument.sens.get() * self.instrument.sens_factor.get() ) * volt if self.instrument.invert.get(): current *= -1 value = (volt, current) return value class Ithaco_1211(Instrument): """ This is the qcodes driver for the Ithaco 1211 Current-preamplifier. This is a virtual driver only and will not talk to your instrument. """ def __init__(self, name, **kwargs): super().__init__(name, **kwargs) # ManualParameter has an "initial_value" kwarg, but if you use this # you must be careful to check that it's correct before relying on it. # if you don't set initial_value, it will start out as None. 
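# Note (added comment): the Enum validators below restrict each setting to the discrete
# values the physical front-panel switches can take, so an impossible value is rejected
# in software instead of being silently recorded in the measurement metadata.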
self.add_parameter( "sens", parameter_class=ManualParameter, initial_value=1e-8, label="Sensitivity", units="A/V", vals=vals.Enum(1e-11, 1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-4, 1e-3), ) self.add_parameter( "invert", parameter_class=ManualParameter, initial_value=True, label="Inverted output", vals=vals.Bool(), ) self.add_parameter( "sens_factor", parameter_class=ManualParameter, initial_value=1, label="Sensitivity factor", units=None, vals=vals.Enum(0.1, 1, 10), ) self.add_parameter( "suppression", parameter_class=ManualParameter, initial_value=1e-7, label="Suppression", units="A", vals=vals.Enum(1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-4, 1e-3), ) self.add_parameter( "risetime", parameter_class=ManualParameter, initial_value=0.3, label="Rise Time", units="msec", vals=vals.Enum(0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000), ) def get_idn(self): return { "vendor": "Ithaco (DL Instruments)", "model": "1211", "serial": None, "firmware": None, } ###Output _____no_output_____ ###Markdown Custom Parameter classes When you call:```self.add_parameter(name, **kwargs)```you create a `Parameter`. But with the `parameter_class` kwarg you can invoke any class you want:```self.add_parameter(name, parameter_class=OtherClass, **kwargs)```- `Parameter` handles most common instrument settings and measurements. - Accepts get and/or set commands as either strings for the instrument's `ask` and `write` methods, or functions/methods. The set and get commands may also be set to `False` and `None`. `False` corresponds to "no get/set method available" (example: the reading of a voltmeter is not settable, so we set `set_cmd=False`). `None` corresponds to a manually updated parameter (example: an instrument with no remote interface). - Has options for translating between instrument codes and more meaningful data values - Supports software-controlled ramping- Any other parameter class may be used in `add_parameter`, if it accepts `name` and `instrument` as constructor kwargs. Generally these should be subclasses of `Parameter`, `ParameterWithSetpoints`, `ArrayParameter`, or `MultiParameter`. `ParameterWithSetpoints` is specifically designed to handle the situations where the instrument returns an array of data with associated setpoints. An example of how to use it can be found in the notebook [Simple Example of ParameterWithSetpoints](../Parameters/Simple-Example-of-ParameterWithSetpoints.ipynb). `ArrayParameter` is an older alternative that does the same thing. However, it is significantly less flexible and much harder to use correctly; it is nevertheless used in a significant number of drivers. **It is not recommended for any new driver.** `MultiParameter` is designed for the situation where multiple different types of data are captured from the same instrument command. It is important that `Parameter` subclasses forward the `name`, `label(s)`, `unit(s)` and `instrument`, along with any unknown `**kwargs`, to the superclass. On/Off parameters Frequently, an instrument has parameters which can be expressed in terms of "something is on or off". Moreover, usually it is not easy to translate the lingo of the instrument to something that can have simply the value of `True` or `False` (which are typical in software). 
Even further, it may be difficult to find consensus between users on a convention: is it `on`/`off`, or `ON`/`OFF`, or python `True`/`False`, or `1`/`0`, or else?This case becomes even more complex if the instrument's API (say, corresponding VISA command) uses unexpected values for such a parameter, for example, turning an output "on" corresponds to a VISA command `DEV:CH:BLOCK 0` which means "set blocking of the channel to 0 where 0 has the meaning of the boolean value False, and alltogether this command actually enables the output on this channel".This results in inconsistency among instrument drivers where for some instrument, say, a `display` parameter has 'on'/'off' values for input, while for a different instrument a similar `display` parameter has `'ON'`/`'OFF'` values or `1`/`0`.Note that this particular example of a `display` parameter is trivial because the ambiguity and inconsistency for "this kind" of parameters can be solved by having the name of the parameter be `display_enabled` and the allowed input values to be python `bool` `True`/`False`.Anyway, when defining parameters where the solution does not come trivially, please, consider setting `val_mapping` of a parameter to the output of `create_on_off_val_mapping(on_val=, off_val=)` function from `qcodes.utils.helpers` package. The function takes care of creating a `val_mapping` dictionary that maps given instrument-side values of `on_val` and `off_val` to `True`/`False`, `'ON'`/`'OFF'`, `'on'`/`'off'`, and other commonly used ones. Note that when getting a value of such a parameter, the user will not get `'ON'` or `'off'` or `'oFF'` - instead, `True`/`False` will be returned. Dynamically adding and removing parametersSometimes when conditions change (for example, the mode of operation of the instrument is changed from current to voltage measurement) you want different parameters to be available.To delete existing parameters:```del self.parameters[name_to_delete]```And to add more, do the same thing as you did initially:```self.add_parameter(new_name, **kwargs)``` Handling interruption of measurements A QCoDeS driver should be prepared for interruptions of the measurement triggered by a KeyboardInterrupt from the enduser. If an interrupt happens at an unfortunate time i.e. while communicating with the instrument or writing results of a measurement this may leave the program in an inconsistent state e.g. with a command in the output buffer of a VISA instrument. To protect against this QCoDeS ships with a context manager that intercepts KeyBoardInterrupts and delays them until it is safe to stop the program. By default QCoDeS protects writing to the database and communicating with VISA instruments in this way. However, there may be situations where a driver needs additional protection around a critical piece of code. The following example shows how a critical piece of code can be protected. The reader is encouraged to experiment with this using the `interrupt the kernel` button in this notebook. Note how the first KeyBoardInterrupt triggers a message to the screen and then executes the code within the context manager but not the code outside. Furthermore 2 KeyBoardInterrupts rapidly after each other will trigger an immediate interrupt that does not complete the code within the context manager. The context manager can therefore be wrapped around any piece of code that the end user should not normally be allowed to interrupt. 
###Code from qcodes.utils.delaykeyboardinterrupt import DelayedKeyboardInterrupt import time with DelayedKeyboardInterrupt(): for i in range(10): time.sleep(0.2) print(i) print("Loop completed") print("Executing code after context manager") ###Output 0 1 2 3 4 5 6 7 8 9 Loop completed Executing code after context manager ###Markdown Creating QCoDeS instrument drivers ###Code # most of the drivers only need a couple of these... moved all up here for clarity below from time import sleep, time import numpy as np import ctypes # only for DLL-based instrument import qcodes as qc from qcodes import (Instrument, VisaInstrument, ManualParameter, MultiParameter, validators as vals) from qcodes.instrument.channel import InstrumentChannel ###Output Logging hadn't been started. Activating auto-logging. Current session state plus future input saved. Filename : C:\Users\jenielse\.qcodes\logs\command_history.log Mode : append Output logging : True Raw input log : False Timestamping : True State : active Qcodes Logfile : C:\Users\jenielse\.qcodes\logs\200122-8548-qcodes.log ###Markdown Base ClassesThere are 3 available:- `VisaInstrument` - for most instruments that communicate over a text channel (ethernet, GPIB, serial, USB...) that do not have a custom DLL or other driver to manage low-level commands.- `IPInstrument` - a deprecated driver just for ethernet connections. Do not use this; use `VisaInstrument` instead.- `Instrument` - superclass of both `VisaInstrument` and `IPInstrument`, use this if you do not communicate over a text channel, for example: - PCI cards with their own DLLs - Instruments with only manual controls. If possible, please use a `VisaInstrument`, as this allows for the creation of a simulated instrument. (See the [Creating Simulated PyVISA Instruments](Creating-Simulated-PyVISA-Instruments.ipynb) notebook) Parameters and ChannelsBroadly speaking, a QCoDeS instrument driver is nothing but an object that holds a connection handle to the physical instrument and has some sub-objects that represent the state of the physical instrument. These sub-objects are the `Parameters`. Writing a driver basically boils down to adding a ton of `Parameters`. What's a Parameter?A parameter represents a single value of a single feature of an instrument, e.g. the frequency of a function generator, the mode of a multimeter (resistance, current, or voltage), or the input impedance of an oscilloscope channel. Each `Parameter` can have the following attributes: * `name`, the name used internally by QCoDeS, e.g. 'input_impedance' * `instrument`, the instrument this parameter belongs to, if any. * `label`, the label to use for plotting this parameter * `unit`, the physical unit. ALWAYS use SI units if a unit is applicable * `set_cmd`, the command to set the parameter. Either a SCPI string with a single '{}', or a function taking one argument (see examples below) * `get_cmd`, the command to get the parameter. Follows the same scheme as `set_cmd` * `vals`, a validator (from `qcodes.utils.validators`) to reject invalid values before they are sent to the instrument. Since there is no standard for how an instrument responds to an out-of-bound value (e.g. a 10 kHz function generator receiving 12e9 for its frequency), meaning that the user can expect anything from silent failure to the instrument breaking or suddenly outputting random noise, it is MUCH better to catch invalid values in software. Therefore, please provide a validator if at all possible. 
* `val_mapping`, a dictionary mapping human-readable values like 'High Impedance' to the instrument's internal representation like '372'. Not always needed. If supplied, a validator is automatically constructed. * `max_val_age`: Max time (in seconds) to trust a value stored in cache. If the parameter has not been set or measured more recently than this, an additional measurement will be performed in order to update the cached value. If it is ``None``, this behavior is disabled. ``max_val_age`` should not be used for a parameter that does not have a get function. * `get_parser`, a parser of the raw return value. Since all VISA instruments return strings, but users usually want numbers, `int` and `float` are popular `get_parsers` * `docstring` A short string describing the function of the parameter Golden rule: if a `Parameter` is settable, it must always accept its own output as input.In most cases you will probably be adding parameters via the `add_parameter` method on the instrument class as shown in the example below. FunctionsSimilar to parameters QCoDeS instruments implement the concept of functions that can be added to the instrument via `add_function`. They are meant to implement simple actions on the instrument such as resetting it. However, the functions do not add any value over normal python methods in the driver Class and we are planning to eventually remove them from QCoDeS. **We therefore encourage any driver developer to not use function in any new driver**. What's a Channel, then?A `Channel` is a submodule holding `Parameter`s. It sometimes makes sense to group `Parameter`s, for instance when an oscilloscope has four identical input channels. (see Keithley example below) LoggingEvery QCoDeS module should have its own logger that is named with the name of the module. So to create a logger put a line at the top of the module like this:```log = logging.getLogger(__name__)```Use this logger only to log messages that are not originating from an `Instrument` instance. For messages from within an instrument instance use the `log` member of the `Instrument` class, e.g```self.log.info(f"Could not connect at {address}")```This way the instrument name will be prepended to the log message and the log messages can be filtered according to the instrument they originate from. See the example notebook of the logger module for more info ([offline](../logging/logging_example.ipynb),[online](https://nbviewer.jupyter.org/github/QCoDeS/Qcodes/tree/master/docs/examples/logging/logging_example.ipynb)).When creating a nested `Instrument`, like e.g. something like the `InstrumentChannel` class, that has a `_parent` property, make sure that this property gets set before calling the `super().__init__` method, so that the full name of the instrument gets resolved correctly for the logging. VisaInstrument: Simple exampleThe Weinschel 8320 driver is about as basic a driver as you can get. It only defines one parameter, "attenuation". All the comments here are my additions to describe what's happening. 
###Code class Weinschel_8320(VisaInstrument): """ QCoDeS driver for the stepped attenuator Weinschel is formerly known as Aeroflex/Weinschel """ # all instrument constructors should accept **kwargs and pass them on to # super().__init__ def __init__(self, name, address, **kwargs): # supplying the terminator means you don't need to remove it from every response super().__init__(name, address, terminator='\r', **kwargs) self.add_parameter('attenuation', unit='dB', # the value you set will be inserted in this command with # regular python string substitution. This instrument wants # an integer zero-padded to 2 digits. For robustness, don't # assume you'll get an integer input though - try to allow # floats (as opposed to {:0=2d}) set_cmd='ATTN ALL {:02.0f}', get_cmd='ATTN? 1', # setting any attenuation other than 0, 2, ... 60 will error. vals=vals.Enum(*np.arange(0, 60.1, 2).tolist()), # the return value of get() is a string, but we want to # turn it into a (float) number get_parser=float) # it's a good idea to call connect_message at the end of your constructor. # this calls the 'IDN' parameter that the base Instrument class creates for # every instrument (you can override the `get_idn` method if it doesn't work # in the standard VISA form for your instrument) which serves two purposes: # 1) verifies that you are connected to the instrument # 2) gets the ID info so it will be included with metadata snapshots later. self.connect_message() # instantiating and using this instrument (commented out because I can't actually do it!) # # from qcodes.instrument_drivers.weinschel.Weinschel_8320 import Weinschel_8320 # weinschel = Weinschel_8320('w8320_1', 'TCPIP0::172.20.2.212::inst0::INSTR') # weinschel.attenuation(40) ###Output _____no_output_____ ###Markdown VisaInstrument: a more involved exampleThe Keithley 2600 sourcemeter driver uses two channels. The actual driver is quite long, so here we show an abridged version that has:- A class defining a `Channel`. All the `Parameter`s of the `Channel` go here. - A nifty way to look up the model number, allowing it to be a driver for many different Keithley models ###Code class KeithleyChannel(InstrumentChannel): """ Class to hold the two Keithley channels, i.e. SMUA and SMUB. """ def __init__(self, parent: Instrument, name: str, channel: str) -> None: """ Args: parent: The Instrument instance to which the channel is to be attached. name: The 'colloquial' name of the channel channel: The name used by the Keithley, i.e. 
either 'smua' or 'smub' """ if channel not in ['smua', 'smub']: raise ValueError('channel must be either "smub" or "smua"') super().__init__(parent, name) self.model = self._parent.model vranges = self._parent._vranges iranges = self._parent._iranges self.add_parameter('volt', get_cmd='{}.measure.v()'.format(channel), get_parser=float, set_cmd='{}.source.levelv={}'.format(channel, '{:.12f}'), # note that the set_cmd is either the following format string #'smua.source.levelv={:.12f}' or 'smub.source.levelv={:.12f}' # depending on the value of `channel` label='Voltage', unit='V') self.add_parameter('curr', get_cmd='{}.measure.i()'.format(channel), get_parser=float, set_cmd='{}.source.leveli={}'.format(channel, '{:.12f}'), label='Current', unit='A') self.add_parameter('mode', get_cmd='{}.source.func'.format(channel), get_parser=float, set_cmd='{}.source.func={}'.format(channel, '{:d}'), val_mapping={'current': 0, 'voltage': 1}, docstring='Selects the output source.') self.add_parameter('output', get_cmd='{}.source.output'.format(channel), get_parser=float, set_cmd='{}.source.output={}'.format(channel, '{:d}'), val_mapping={'on': 1, 'off': 0}) self.add_parameter('nplc', label='Number of power line cycles', set_cmd='{}.measure.nplc={}'.format(channel, '{:.4f}'), get_cmd='{}.measure.nplc'.format(channel), get_parser=float, vals=vals.Numbers(0.001, 25)) self.channel = channel class Keithley_2600(VisaInstrument): """ This is the qcodes driver for the Keithley_2600 Source-Meter series, tested with Keithley_2614B """ def __init__(self, name: str, address: str, **kwargs) -> None: """ Args: name: Name to use internally in QCoDeS address: VISA ressource address """ super().__init__(name, address, terminator='\n', **kwargs) model = self.ask('localnode.model') knownmodels = ['2601B', '2602B', '2604B', '2611B', '2612B', '2614B', '2635B', '2636B'] if model not in knownmodels: kmstring = ('{}, '*(len(knownmodels)-1)).format(*knownmodels[:-1]) kmstring += 'and {}.'.format(knownmodels[-1]) raise ValueError('Unknown model. Known model are: ' + kmstring) # Add the channel to the instrument for ch in ['a', 'b']: ch_name = 'smu{}'.format(ch) channel = KeithleyChannel(self, ch_name, ch_name) self.add_submodule(ch_name, channel) # display parameter # Parameters NOT specific to a channel still belong on the Instrument object # In this case, the Parameter controls the text on the display self.add_parameter('display_settext', set_cmd=self._display_settext, vals=vals.Strings()) self.connect_message() ###Output _____no_output_____ ###Markdown VisaInstruments: Simulating the instrumentAs mentioned above, drivers subclassing `VisaInstrument` have the nice property that they may be connected to a simulated version of the physical instrument. See the [Creating Simulated PyVISA Instruments](Creating-Simulated-PyVISA-Instruments.ipynb) notebook for more information. If you are writing a `VisaInstrument` driver, please consider spending 20 minutes to also add a simulated instrument and a test. DLL-based instrumentsThe Alazar cards use their own DLL. C interfaces tend to need a lot of boilerplate, so I'm not going to include it all. The key is: use `Instrument` directly, load the DLL, and have parameters interact with it. 
###Code class AlazarTech_ATS(Instrument): dll_path = 'C:\\WINDOWS\\System32\\ATSApi' def __init__(self, name, system_id=1, board_id=1, dll_path=None, **kwargs): super().__init__(name, **kwargs) # connect to the DLL self._ATS_dll = ctypes.cdll.LoadLibrary(dll_path or self.dll_path) self._handle = self._ATS_dll.AlazarGetBoardBySystemID(system_id, board_id) if not self._handle: raise Exception('AlazarTech_ATS not found at ' 'system {}, board {}'.format(system_id, board_id)) self.buffer_list = [] # the Alazar driver includes its own parameter class to hold values # until later config is called, and warn if you try to read a value # that hasn't been sent to config. self.add_parameter(name='clock_source', parameter_class=AlazarParameter, label='Clock Source', unit=None, value='INTERNAL_CLOCK', byte_to_value_dict={1: 'INTERNAL_CLOCK', 4: 'SLOW_EXTERNAL_CLOCK', 5: 'EXTERNAL_CLOCK_AC', 7: 'EXTERNAL_CLOCK_10MHz_REF'}) # etc... ###Output _____no_output_____ ###Markdown It's very typical for DLL based instruments to only be supported on Windows. In such a driver care should be taken to ensure that the driver raises a clear error message if it is initialized on a different platform. This is typically best done by by checking `sys.platform` as below. In this example we are using `ctypes.windll` to interact with the DLL. `windll` is only defined on on Windows.QCoDeS is automatically typechecked with MyPy, this may give some complications for drivers that are not compatible with multiple OSes as there is no supported way to disabling the typecheck on a per platform basis for a specific submodule. Specifically MyPy will correctly notice that `self.dll` does not exist on non Windows platforms unless we add the line `self.dll: Any = None` to the example below. By giving `self.dll` the type `Any` we effectively disable any typecheck related to `self.dll` on non Windows platforms which is exactly what we want. This works because MyPy knows how to interprete the `sys.platform` check and allows `self.dll` to have different types on different OSes. ###Code class SomeDLLInstrument(Instrument): dll_path = 'C:\\WINDOWS\\System32\\ATSApi' def __init__(self, name, dll_path=None, **kwargs): super().__init__(name, **kwargs) if sys.platform != 'win32': self.dll: Any = None raise OSError("SomeDLLInsrument only works on Windows") else: self.dll = ctypes.windll.LoadLibrary(dll_path) # etc... ###Output _____no_output_____ ###Markdown Manual instrumentsA totally manual instrument (like the ithaco 1211) will contain only `ManualParameter`s. Some instruments may have a mix of manual and standard parameters. Here we also define a new `CurrentParameter` class that uses the ithaco parameters to convert a measured voltage to a current. When subclassing a parameter class (`Parameter`, `MultiParameter`, ...), the functions for setting and getting should be called `get_raw` and `set_raw`, respectively. ###Code class CurrentParameter(MultiParameter): """ Current measurement via an Ithaco preamp and a measured voltage. To be used when you feed a current into the Ithaco, send the Ithaco's output voltage to a lockin or other voltage amplifier, and you have the voltage reading from that amplifier as a qcodes parameter. ``CurrentParameter.get()`` returns ``(voltage_raw, current)`` Args: measured_param (Parameter): a gettable parameter returning the voltage read from the Ithaco output. c_amp_ins (Ithaco_1211): an Ithaco instance where you manually maintain the present settings of the real Ithaco amp. 
name (str): the name of the current output. Default 'curr'. Also used as the name of the whole parameter. """ def __init__(self, measured_param, c_amp_ins, name='curr'): p_name = measured_param.name p_label = getattr(measured_param, 'label', None) p_unit = getattr(measured_param, 'units', None) super().__init__(name=name, names=(p_name+'_raw', name), shapes=((), ()), labels=(p_label, 'Current'), units=(p_unit, 'A')) self._measured_param = measured_param self._instrument = c_amp_ins def get_raw(self): volt = self._measured_param.get() current = (self._instrument.sens.get() * self._instrument.sens_factor.get()) * volt if self._instrument.invert.get(): current *= -1 value = (volt, current) return value class Ithaco_1211(Instrument): """ This is the qcodes driver for the Ithaco 1211 Current-preamplifier. This is a virtual driver only and will not talk to your instrument. """ def __init__(self, name, **kwargs): super().__init__(name, **kwargs) # ManualParameter has an "initial_value" kwarg, but if you use this # you must be careful to check that it's correct before relying on it. # if you don't set initial_value, it will start out as None. self.add_parameter('sens', parameter_class=ManualParameter, initial_value=1e-8, label='Sensitivity', units='A/V', vals=vals.Enum(1e-11, 1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-4, 1e-3)) self.add_parameter('invert', parameter_class=ManualParameter, initial_value=True, label='Inverted output', vals=vals.Bool()) self.add_parameter('sens_factor', parameter_class=ManualParameter, initial_value=1, label='Sensitivity factor', units=None, vals=vals.Enum(0.1, 1, 10)) self.add_parameter('suppression', parameter_class=ManualParameter, initial_value=1e-7, label='Suppression', units='A', vals=vals.Enum(1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-4, 1e-3)) self.add_parameter('risetime', parameter_class=ManualParameter, initial_value=0.3, label='Rise Time', units='msec', vals=vals.Enum(0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000)) def get_idn(self): return {'vendor': 'Ithaco (DL Instruments)', 'model': '1211', 'serial': None, 'firmware': None} ###Output _____no_output_____ ###Markdown Custom Parameter classesWhen you call:```self.add_parameter(name, **kwargs)```you create a `Parameter`. But with the `parameter_class` kwarg you can invoke any class you want:```self.add_parameter(name, parameter_class=OtherClass, **kwargs)```- `Parameter` handles most common instrument settings and measurements. - Accepts get and/or set commands as either strings for the instrument's `ask` and `write` methods, or functions/methods. The set and get commands may also be set to `False` and `None`. `False` corresponds to "no get/set method available" (example: the reading of a voltmeter is not settable, so we set `set_cmd=False`). `None` corresponds to a manually updated parameter (example: an instrument with no remote interface). - Has options for translating between instrument codes and more meaningful data values - Supports software-controlled ramping- Any other parameter class may be used in `add_parameter`, if it accepts `name` and `instrument` as constructor kwargs. Generally these should subclasses of `Parameter`, `ParameterWithSetpoints`, `ArrayParameter`, or `MultiParameter`. `ParameterWithSetpoints` is specifically designed to handle the situations where the instrument returns an array of data with assosiated setpoints. 
An example of how to use it can be found in the notebook [Simple Example of ParameterWithSetpoints](../Parameters/Simple-Example-of-ParameterWithSetpoints.ipynb)`ArrayParameter` is an older alternative that does the same thing. However, it is significantly less flexible and much harder to use correct but used in a significant number of drivers. **It is not recommended for any new driver.**`MultiParameter` is designed to for the situation where multiple different types of data is captured from the same instrument command. On/Off parametersFrequently, an instrument has parameters which can be expressed in terms of "something is on or off". Moreover, usually it is not easy to translate the lingo of the instrument to something that can have simply the value of `True` or `False` (which are typical in software). Even further, it may be difficult to find consensus between users on a convention: is it `on`/`off`, or `ON`/`OFF`, or python `True`/`False`, or `1`/`0`, or else?This case becomes even more complex if the instrument's API (say, corresponding VISA command) uses unexpected values for such a parameter, for example, turning an output "on" corresponds to a VISA command `DEV:CH:BLOCK 0` which means "set blocking of the channel to 0 where 0 has the meaning of the boolean value False, and alltogether this command actually enables the output on this channel".This results in inconsistency among instrument drivers where for some instrument, say, a `display` parameter has 'on'/'off' values for input, while for a different instrument a similar `display` parameter has `'ON'`/`'OFF'` values or `1`/`0`.Note that this particular example of a `display` parameter is trivial because the ambiguity and inconsistency for "this kind" of parameters can be solved by having the name of the parameter be `display_enabled` and the allowed input values to be python `bool` `True`/`False`.Anyway, when defining parameters where the solution does not come trivially, please, consider setting `val_mapping` of a parameter to the output of `create_on_off_val_mapping(on_val=, off_val=)` function from `qcodes.utils.helpers` package. The function takes care of creating a `val_mapping` dictionary that maps given instrument-side values of `on_val` and `off_val` to `True`/`False`, `'ON'`/`'OFF'`, `'on'`/`'off'`, and other commonly used ones. Note that when getting a value of such a parameter, the user will not get `'ON'` or `'off'` or `'oFF'` - instead, `True`/`False` will be returned. Dynamically adding and removing parametersSometimes when conditions change (for example, the mode of operation of the instrument is changed from current to voltage measurement) you want different parameters to be available.To delete existing parameters:```del self.parameters[name_to_delete]```And to add more, do the same thing as you did initially:```self.add_parameter(new_name, **kwargs)``` Handling interruption of measurements A QCoDeS driver should be prepared for interruptions of the measurement triggered by a KeyboardInterrupt from the enduser. If an interrupt happens at an unfortunate time i.e. while communicating with the instrument or writing results of a measurement this may leave the program in an inconsistent state e.g. with a command in the output buffer of a VISA instrument. To protect against this QCoDeS ships with a context manager that intercepts KeyBoardInterrupts and delays them until it is safe to stop the program. By default QCoDeS protects writing to the database and communicating with VISA instruments in this way. 
However, there may be situations where a driver needs additional protection around a critical piece of code. The following example shows how a critical piece of code can be protected. The reader is encouraged to experiment with this using the `interrupt the kernel` button in this notebook. Note how the first KeyBoardInterrupt triggers a message to the screen and then executes the code within the context manager but not the code outside. Furthermore 2 KeyBoardInterrupts rapidly after each other will trigger an immediate interrupt that does not complete the code within the context manager. The context manager can therefore be wrapped around any piece of code that the end user should not normally be allowed to interrupt. ###Code from qcodes.utils.delaykeyboardinterrupt import DelayedKeyboardInterrupt import time with DelayedKeyboardInterrupt(): for i in range(10): time.sleep(0.2) print(i) print("Loop completed") print("Executing code after context manager") ###Output 0 1 2 3 4 5 6 7 8 9 Loop completed Executing code after context manager ###Markdown Creating QCoDeS instrument drivers ###Code # most of the drivers only need a couple of these... moved all up here for clarity below from time import sleep, time import numpy as np import ctypes # only for DLL-based instrument import qcodes as qc from qcodes import (Instrument, VisaInstrument, ManualParameter, MultiParameter, validators as vals) from qcodes.instrument.channel import InstrumentChannel ###Output _____no_output_____ ###Markdown Base ClassesThere are 3 available:- `VisaInstrument` - for most instruments that communicate over a text channel (ethernet, GPIB, serial, USB...) that do not have a custom DLL or other driver to manage low-level commands.- `IPInstrument` - a deprecated driver just for ethernet connections. Do not use this; use `VisaInstrument` instead.- `Instrument` - superclass of both `VisaInstrument` and `IPInstrument`, use this if you do not communicate over a text channel, for example: - PCI cards with their own DLLs - Instruments with only manual controls. If possible, please use a `VisaInstrument`, as this allows for the creation of a simulated instrument. (See the [Creating Simulated PyVISA Instruments](Creating-Simulated-PyVISA-Instruments.ipynb) notebook) Parameters and ChannelsBroadly speaking, a QCoDeS instrument driver is nothing but an object that holds a connection handle to the physical instrument and has some sub-objects that represent the state of the physical instrument. These sub-objects are the `Parameters`. Writing a driver basically boils down to adding a ton of `Parameters`. What's a Parameter?A parameter represents a single value of a single feature of an instrument, e.g. the frequency of a function generator, the mode of a multimeter (resistance, current, or voltage), or the input impedance of an oscilloscope channel. Each `Parameter` can have the following attributes: * `name`, the name used internally by QCoDeS, e.g. 'input_impedance' * `label`, the label to use for plotting this parameter * `unit`, the physical unit. ALWAYS use SI units if a unit is applicable * `set_cmd`, the command to set the parameter. Either a SCPI string with a single '{}', or a function taking one argument (see examples below) * `get_cmd`, the command to get the parameter. Follows the same scheme as `set_cmd` * `vals`, a validator (from `qcodes.utils.validators`) to reject invalid values before they are sent to the instrument. Since there is no standard for how an instrument responds to an out-of-bound value (e.g. 
a 10 kHz function generator receiving 12e9 for its frequency), meaning that the user can expect anything from silent failure to the instrument breaking or suddenly outputting random noise, it is MUCH better to catch invalid values in software. Therefore, please provide a validator if at all possible. * `val_mapping`, a dictionary mapping human-readable values like 'High Impedance' to the instrument's internal representation like '372'. Not always needed. If supplied, a validator is automatically constructed. * `get_parser`, a parser of the raw return value. Since all VISA instruments return strings, but users usually want numbers, `int` and `float` are popular `get_parsers` * `docstring` A short string describing the function of the parameter Golden rule: if a `Parameter` is settable, it must always accept its own output as input.In most cases you will probably be adding parameters via the `add_parameter` method on the instrument class as shown in the example below. FunctionsSimilar to parameters QCoDeS instruments implement the concept of functions that can be added to the instrument via `add_function`. They are meant to implement simple actions on the instrument such as resetting it. However, the functions do not add any value over normal python methods in the driver Class and we are planning to eventually remove them from QCoDeS. **We therefore encourage any driver developer to not use function in any new driver**. What's a Channel, then?A `Channel` is a submodule holding `Parameter`s. It sometimes makes sense to group `Parameter`s, for instance when an oscilloscope has four identical input channels. (see Keithley example below) LoggingEvery QCoDeS module should have its own logger that is named with the name of the module. So to create a logger put a line at the top of the module like this:```log = logging.getLogger(__name__)```Use this logger only to log messages that are not originating from an `Instrument` instance. For messages from within an instrument instance use the `log` member of the `Instrument` class, e.g```self.log.info(f"Could not connect at {address}")```This way the instrument name will be prepended to the log message and the log messages can be filtered according to the instrument they originate from. See the example notebook of the logger module for more info ([offline](../logging/logging_example.ipynb),[online](https://nbviewer.jupyter.org/github/QCoDeS/Qcodes/tree/master/docs/examples/logging/logging_example.ipynb)).When creating a nested `Instrument`, like e.g. something like the `InstrumentChannel` class, that has a `_parent` property, make sure that this property gets set before calling the `super().__init__` method, so that the full name of the instrument gets resolved correctly for the logging. VisaInstrument: Simple exampleThe Weinschel 8320 driver is about as basic a driver as you can get. It only defines one parameter, "attenuation". All the comments here are my additions to describe what's happening. ###Code class Weinschel_8320(VisaInstrument): """ QCoDeS driver for the stepped attenuator Weinschel is formerly known as Aeroflex/Weinschel """ # all instrument constructors should accept **kwargs and pass them on to # super().__init__ def __init__(self, name, address, **kwargs): # supplying the terminator means you don't need to remove it from every response super().__init__(name, address, terminator='\r', **kwargs) self.add_parameter('attenuation', unit='dB', # the value you set will be inserted in this command with # regular python string substitution. 
This instrument wants # an integer zero-padded to 2 digits. For robustness, don't # assume you'll get an integer input though - try to allow # floats (as opposed to {:0=2d}) set_cmd='ATTN ALL {:02.0f}', get_cmd='ATTN? 1', # setting any attenuation other than 0, 2, ... 60 will error. vals=vals.Enum(*np.arange(0, 60.1, 2).tolist()), # the return value of get() is a string, but we want to # turn it into a (float) number get_parser=float) # it's a good idea to call connect_message at the end of your constructor. # this calls the 'IDN' parameter that the base Instrument class creates for # every instrument (you can override the `get_idn` method if it doesn't work # in the standard VISA form for your instrument) which serves two purposes: # 1) verifies that you are connected to the instrument # 2) gets the ID info so it will be included with metadata snapshots later. self.connect_message() # instantiating and using this instrument (commented out because I can't actually do it!) # # from qcodes.instrument_drivers.weinschel.Weinschel_8320 import Weinschel_8320 # weinschel = Weinschel_8320('w8320_1', 'TCPIP0::172.20.2.212::inst0::INSTR') # weinschel.attenuation(40) ###Output _____no_output_____ ###Markdown VisaInstrument: a more involved exampleThe Keithley 2600 sourcemeter driver uses two channels. The actual driver is quite long, so here we show an abridged version that has:- A class defining a `Channel`. All the `Parameter`s of the `Channel` go here. - A nifty way to look up the model number, allowing it to be a driver for many different Keithley models ###Code class KeithleyChannel(InstrumentChannel): """ Class to hold the two Keithley channels, i.e. SMUA and SMUB. """ def __init__(self, parent: Instrument, name: str, channel: str) -> None: """ Args: parent: The Instrument instance to which the channel is to be attached. name: The 'colloquial' name of the channel channel: The name used by the Keithley, i.e. 
either 'smua' or 'smub' """ if channel not in ['smua', 'smub']: raise ValueError('channel must be either "smub" or "smua"') super().__init__(parent, name) self.model = self._parent.model vranges = self._parent._vranges iranges = self._parent._iranges self.add_parameter('volt', get_cmd='{}.measure.v()'.format(channel), get_parser=float, set_cmd='{}.source.levelv={}'.format(channel, '{:.12f}'), # note that the set_cmd is either the following format string #'smua.source.levelv={:.12f}' or 'smub.source.levelv={:.12f}' # depending on the value of `channel` label='Voltage', unit='V') self.add_parameter('curr', get_cmd='{}.measure.i()'.format(channel), get_parser=float, set_cmd='{}.source.leveli={}'.format(channel, '{:.12f}'), label='Current', unit='A') self.add_parameter('mode', get_cmd='{}.source.func'.format(channel), get_parser=float, set_cmd='{}.source.func={}'.format(channel, '{:d}'), val_mapping={'current': 0, 'voltage': 1}, docstring='Selects the output source.') self.add_parameter('output', get_cmd='{}.source.output'.format(channel), get_parser=float, set_cmd='{}.source.output={}'.format(channel, '{:d}'), val_mapping={'on': 1, 'off': 0}) self.add_parameter('nplc', label='Number of power line cycles', set_cmd='{}.measure.nplc={}'.format(channel, '{:.4f}'), get_cmd='{}.measure.nplc'.format(channel), get_parser=float, vals=vals.Numbers(0.001, 25)) self.channel = channel class Keithley_2600(VisaInstrument): """ This is the qcodes driver for the Keithley_2600 Source-Meter series, tested with Keithley_2614B """ def __init__(self, name: str, address: str, **kwargs) -> None: """ Args: name: Name to use internally in QCoDeS address: VISA ressource address """ super().__init__(name, address, terminator='\n', **kwargs) model = self.ask('localnode.model') knownmodels = ['2601B', '2602B', '2604B', '2611B', '2612B', '2614B', '2635B', '2636B'] if model not in knownmodels: kmstring = ('{}, '*(len(knownmodels)-1)).format(*knownmodels[:-1]) kmstring += 'and {}.'.format(knownmodels[-1]) raise ValueError('Unknown model. Known model are: ' + kmstring) # Add the channel to the instrument for ch in ['a', 'b']: ch_name = 'smu{}'.format(ch) channel = KeithleyChannel(self, ch_name, ch_name) self.add_submodule(ch_name, channel) # display parameter # Parameters NOT specific to a channel still belong on the Instrument object # In this case, the Parameter controls the text on the display self.add_parameter('display_settext', set_cmd=self._display_settext, vals=vals.Strings()) self.connect_message() ###Output _____no_output_____ ###Markdown VisaInstruments: Simulating the instrumentAs mentioned above, drivers subclassing `VisaInstrument` have the nice property that they may be connected to a simulated version of the physical instrument. See the [Creating Simulated PyVISA Instruments](Creating-Simulated-PyVISA-Instruments.ipynb) notebook for more information. If you are writing a `VisaInstrument` driver, please consider spending 20 minutes to also add a simulated instrument and a test. DLL-based instrumentsThe Alazar cards use their own DLL. C interfaces tend to need a lot of boilerplate, so I'm not going to include it all. The key is: use `Instrument` directly, load the DLL, and have parameters interact with it. 
###Code class AlazarTech_ATS(Instrument): dll_path = 'C:\\WINDOWS\\System32\\ATSApi' def __init__(self, name, system_id=1, board_id=1, dll_path=None, **kwargs): super().__init__(name, **kwargs) # connect to the DLL self._ATS_dll = ctypes.cdll.LoadLibrary(dll_path or self.dll_path) self._handle = self._ATS_dll.AlazarGetBoardBySystemID(system_id, board_id) if not self._handle: raise Exception('AlazarTech_ATS not found at ' 'system {}, board {}'.format(system_id, board_id)) self.buffer_list = [] # the Alazar driver includes its own parameter class to hold values # until later config is called, and warn if you try to read a value # that hasn't been sent to config. self.add_parameter(name='clock_source', parameter_class=AlazarParameter, label='Clock Source', unit=None, value='INTERNAL_CLOCK', byte_to_value_dict={1: 'INTERNAL_CLOCK', 4: 'SLOW_EXTERNAL_CLOCK', 5: 'EXTERNAL_CLOCK_AC', 7: 'EXTERNAL_CLOCK_10_MHz_REF'}) # etc... ###Output _____no_output_____ ###Markdown Manual instrumentsA totally manual instrument (like the ithaco 1211) will contain only `ManualParameter`s. Some instruments may have a mix of manual and standard parameters. Here we also define a new `CurrentParameter` class that uses the ithaco parameters to convert a measured voltage to a current. When subclassing a parameter class (`Parameter`, `MultiParameter`, ...), the functions for setting and getting should be called `get_raw` and `set_raw`, respectively. ###Code class CurrentParameter(MultiParameter): """ Current measurement via an Ithaco preamp and a measured voltage. To be used when you feed a current into the Ithaco, send the Ithaco's output voltage to a lockin or other voltage amplifier, and you have the voltage reading from that amplifier as a qcodes parameter. ``CurrentParameter.get()`` returns ``(voltage_raw, current)`` Args: measured_param (Parameter): a gettable parameter returning the voltage read from the Ithaco output. c_amp_ins (Ithaco_1211): an Ithaco instance where you manually maintain the present settings of the real Ithaco amp. name (str): the name of the current output. Default 'curr'. Also used as the name of the whole parameter. """ def __init__(self, measured_param, c_amp_ins, name='curr'): p_name = measured_param.name p_label = getattr(measured_param, 'label', None) p_unit = getattr(measured_param, 'units', None) super().__init__(name=name, names=(p_name+'_raw', name), shapes=((), ()), labels=(p_label, 'Current'), units=(p_unit, 'A')) self._measured_param = measured_param self._instrument = c_amp_ins def get_raw(self): volt = self._measured_param.get() current = (self._instrument.sens.get() * self._instrument.sens_factor.get()) * volt if self._instrument.invert.get(): current *= -1 value = (volt, current) return value class Ithaco_1211(Instrument): """ This is the qcodes driver for the Ithaco 1211 Current-preamplifier. This is a virtual driver only and will not talk to your instrument. """ def __init__(self, name, **kwargs): super().__init__(name, **kwargs) # ManualParameter has an "initial_value" kwarg, but if you use this # you must be careful to check that it's correct before relying on it. # if you don't set initial_value, it will start out as None. 
self.add_parameter('sens', parameter_class=ManualParameter, initial_value=1e-8, label='Sensitivity', units='A/V', vals=vals.Enum(1e-11, 1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-4, 1e-3)) self.add_parameter('invert', parameter_class=ManualParameter, initial_value=True, label='Inverted output', vals=vals.Bool()) self.add_parameter('sens_factor', parameter_class=ManualParameter, initial_value=1, label='Sensitivity factor', units=None, vals=vals.Enum(0.1, 1, 10)) self.add_parameter('suppression', parameter_class=ManualParameter, initial_value=1e-7, label='Suppression', units='A', vals=vals.Enum(1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-4, 1e-3)) self.add_parameter('risetime', parameter_class=ManualParameter, initial_value=0.3, label='Rise Time', units='msec', vals=vals.Enum(0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000)) def get_idn(self): return {'vendor': 'Ithaco (DL Instruments)', 'model': '1211', 'serial': None, 'firmware': None} ###Output _____no_output_____ ###Markdown Custom Parameter classesWhen you call:```self.add_parameter(name, **kwargs)```you create a `Parameter`. But with the `parameter_class` kwarg you can invoke any class you want:```self.add_parameter(name, parameter_class=OtherClass, **kwargs)```- `Parameter` handles most common instrument settings and measurements. - Accepts get and/or set commands as either strings for the instrument's `ask` and `write` methods, or functions/methods. The set and get commands may also be set to `False` and `None`. `False` corresponds to "no get/set method available" (example: the reading of a voltmeter is not settable, so we set `set_cmd=False`). `None` corresponds to a manually updated parameter (example: an instrument with no remote interface). - Has options for translating between instrument codes and more meaningful data values - Supports software-controlled ramping- Any other parameter class may be used in `add_parameter`, if it accepts `name` and `instrument` as constructor kwargs. Generally these should subclasses of `Parameter`, `ParameterWithSetpoints`, `ArrayParameter`, or `MultiParameter`. `ParameterWithSetpoints` is specifically designed to handle the situations where the instrument returns an array of data with assosiated setpoints. An example of how to use it can be found in the notebook [Simple Example of ParameterWithSetpoints](../Parameters/Simple-Example-of-ParameterWithSetpoints.ipynb)`ArrayParameter` is an older alternative that does the same thing. However, it is significantly less flexible and much harder to use correct but used in a significant number of drivers. **It is not recommended for any new driver.**`MultiParameter` is designed to for the situation where multiple different types of data is captured from the same instrument command. On/Off parametersFrequently, an instrument has parameters which can be expressed in terms of "something is on or off". Moreover, usually it is not easy to translate the lingo of the instrument to something that can have simply the value of `True` or `False` (which are typical in software). 
Even further, it may be difficult to find consensus between users on a convention: is it `on`/`off`, or `ON`/`OFF`, or python `True`/`False`, or `1`/`0`, or else?This case becomes even more complex if the instrument's API (say, corresponding VISA command) uses unexpected values for such a parameter, for example, turning an output "on" corresponds to a VISA command `DEV:CH:BLOCK 0` which means "set blocking of the channel to 0 where 0 has the meaning of the boolean value False, and alltogether this command actually enables the output on this channel".This results in inconsistency among instrument drivers where for some instrument, say, a `display` parameter has 'on'/'off' values for input, while for a different instrument a similar `display` parameter has `'ON'`/`'OFF'` values or `1`/`0`.Note that this particular example of a `display` parameter is trivial because the ambiguity and inconsistency for "this kind" of parameters can be solved by having the name of the parameter be `display_enabled` and the allowed input values to be python `bool` `True`/`False`.Anyway, when defining parameters where the solution does not come trivially, please, consider setting `val_mapping` of a parameter to the output of `create_on_off_val_mapping(on_val=, off_val=)` function from `qcodes.utils.helpers` package. The function takes care of creating a `val_mapping` dictionary that maps given instrument-side values of `on_val` and `off_val` to `True`/`False`, `'ON'`/`'OFF'`, `'on'`/`'off'`, and other commonly used ones. Note that when getting a value of such a parameter, the user will not get `'ON'` or `'off'` or `'oFF'` - instead, `True`/`False` will be returned. Dynamically adding and removing parametersSometimes when conditions change (for example, the mode of operation of the instrument is changed from current to voltage measurement) you want different parameters to be available.To delete existing parameters:```del self.parameters[name_to_delete]```And to add more, do the same thing as you did initially:```self.add_parameter(new_name, **kwargs)``` Handling interruption of measurements A QCoDeS driver should be prepared for interruptions of the measurement triggered by a KeyboardInterrupt from the enduser. If an interrupt happens at an unfortunate time i.e. while communicating with the instrument or writing results of a measurement this may leave the program in an inconsistent state e.g. with a command in the output buffer of a VISA instrument. To protect against this QCoDeS ships with a context manager that intercepts KeyBoardInterrupts and delays them until it is safe to stop the program. By default QCoDeS protects writing to the database and communicating with VISA instruments in this way. However, there may be situations where a driver needs additional protection around a critical piece of code. The following example shows how a critical piece of code can be protected. The reader is encouraged to experiment with this using the `interrupt the kernel` button in this notebook. Note how the first KeyBoardInterrupt triggers a message to the screen and then executes the code within the context manager but not the code outside. Furthermore 2 KeyBoardInterrupts rapidly after each other will trigger an immediate interrupt that does not complete the code within the context manager. The context manager can therefore be wrapped around any piece of code that the end user should not normally be allowed to interrupt. 
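Inside a driver, that might look like the following minimal sketch, where two writes must not be split by a KeyboardInterrupt (the `RANGE`/`OFFSET` commands are made-up placeholders); the standalone, runnable demonstration described above follows after it.

```
from qcodes import VisaInstrument
from qcodes.utils.delaykeyboardinterrupt import DelayedKeyboardInterrupt


class MyDriver(VisaInstrument):
    def set_range_and_offset(self, rng, offset):
        # Both writes have to go through together, otherwise the instrument
        # is left half-configured if the user interrupts between them.
        with DelayedKeyboardInterrupt():
            self.write('RANGE {:f}'.format(rng))
            self.write('OFFSET {:f}'.format(offset))
```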
###Code from qcodes.utils.delaykeyboardinterrupt import DelayedKeyboardInterrupt import time with DelayedKeyboardInterrupt(): for i in range(10): time.sleep(0.2) print(i) print("Loop completed") print("Executing code after context manager") ###Output 0 1 2 3 4 5 6 7 8 9 Loop completed Executing code after context manager ###Markdown Creating QCoDeS instrument drivers ###Code # most of the drivers only need a couple of these... moved all up here for clarity below from time import sleep, time import numpy as np import ctypes # only for DLL-based instrument import qcodes as qc from qcodes import (Instrument, VisaInstrument, ManualParameter, MultiParameter, validators as vals) from qcodes.instrument.channel import InstrumentChannel ###Output _____no_output_____ ###Markdown Base ClassesThere are 3 available:- `VisaInstrument` - for most instruments that communicate over a text channel (ethernet, GPIB, serial, USB...) that do not have a custom DLL or other driver to manage low-level commands.- `IPInstrument` - a deprecated driver just for ethernet connections. Do not use this; use `VisaInstrument` instead.- `Instrument` - superclass of both `VisaInstrument` and `IPInstrument`, use this if you do not communicate over a text channel, for example: - PCI cards with their own DLLs - Instruments with only manual controls. If possible, please use a `VisaInstrument`, as this allows for the creation of a simulated instrument. (See the [Creating Simulated PyVISA Instruments](Creating-Simulated-PyVISA-Instruments.ipynb) notebook) Parameters and ChannelsBroadly speaking, a QCoDeS instrument driver is nothing but an object that holds a connection handle to the physical instrument and has some sub-objects that represent the state of the physical instrument. These sub-objects are the `Parameters`. Writing a driver basically boils down to adding a ton of `Parameters`. What's a Parameter?A parameter represents a single value of a single feature of an instrument, e.g. the frequency of a function generator, the mode of a multimeter (resistance, current, or voltage), or the input impedance of an oscilloscope channel. Each `Parameter` can have the following attributes: * `name`, the name used internally by QCoDeS, e.g. 'input_impedance' * `label`, the label to use for plotting this parameter * `unit`, the physical unit. ALWAYS use SI units if a unit is applicable * `set_cmd`, the command to set the parameter. Either a SCPI string with a single '{}', or a function taking one argument (see examples below) * `get_cmd`, the command to get the parameter. Follows the same scheme as `set_cmd` * `vals`, a validator (from `qcodes.utils.validators`) to reject invalid values before they are sent to the instrument. Since there is no standard for how an instrument responds to an out-of-bound value (e.g. a 10 kHz function generator receiving 12e9 for its frequency), meaning that the user can expect anything from silent failure to the instrument breaking or suddenly outputting random noise, it is MUCH better to catch invalid values in software. Therefore, please provide a validator if at all possible. * `val_mapping`, a dictionary mapping human-readable values like 'High Impedance' to the instrument's internal representation like '372'. Not always needed. If supplied, a validator is automatically constructed. * `get_parser`, a parser of the raw return value. 
Since all VISA instruments return strings, but users usually want numbers, `int` and `float` are popular `get_parsers` * `docstring` A short string describing the function of the parameter Golden rule: if a `Parameter` is settable, it must always accept its own output as input.In most cases you will probably be adding parameters via the `add_parameter` method on the instrument class as shown in the example below. FunctionsSimilar to parameters QCoDeS instruments implement the concept of functions that can be added to the instrument via `add_function`. They are meant to implement simple actions on the instrument such as resetting it. However, the functions do not add any value over normal python methods in the driver Class and we are planning to eventually remove them from QCoDeS. **We therefore encourage any driver developer to not use function in any new driver**. What's a Channel, then?A `Channel` is a submodule holding `Parameter`s. It sometimes makes sense to group `Parameter`s, for instance when an oscilloscope has four identical input channels. (see Keithley example below) LoggingEvery QCoDeS module should have its own logger that is named with the name of the module. So to create a logger put a line at the top of the module like this:```log = logging.getLogger(__name__)```Use this logger only to log messages that are not originating from an `Instrument` instance. For messages from within an instrument instance use the `log` member of the `Instrument` class, e.g```self.log.info(f"Could not connect at {address}")```This way the instrument name will be prepended to the log message and the log messages can be filtered according to the instrument they originate from. See the example notebook of the logger module for more info ([offline](../logging/logging_example.ipynb),[online](https://nbviewer.jupyter.org/github/QCoDeS/Qcodes/tree/master/docs/examples/logging/logging_example.ipynb)).When creating a nested `Instrument`, like e.g. something like the `InstrumentChannel` class, that has a `_parent` property, make sure that this property gets set before calling the `super().__init__` method, so that the full name of the instrument gets resolved correctly for the logging. VisaInstrument: Simple exampleThe Weinschel 8320 driver is about as basic a driver as you can get. It only defines one parameter, "attenuation". All the comments here are my additions to describe what's happening. ###Code class Weinschel_8320(VisaInstrument): """ QCoDeS driver for the stepped attenuator Weinschel is formerly known as Aeroflex/Weinschel """ # all instrument constructors should accept **kwargs and pass them on to # super().__init__ def __init__(self, name, address, **kwargs): # supplying the terminator means you don't need to remove it from every response super().__init__(name, address, terminator='\r', **kwargs) self.add_parameter('attenuation', unit='dB', # the value you set will be inserted in this command with # regular python string substitution. This instrument wants # an integer zero-padded to 2 digits. For robustness, don't # assume you'll get an integer input though - try to allow # floats (as opposed to {:0=2d}) set_cmd='ATTN ALL {:02.0f}', get_cmd='ATTN? 1', # setting any attenuation other than 0, 2, ... 60 will error. vals=vals.Enum(*np.arange(0, 60.1, 2).tolist()), # the return value of get() is a string, but we want to # turn it into a (float) number get_parser=float) # it's a good idea to call connect_message at the end of your constructor. 
# this calls the 'IDN' parameter that the base Instrument class creates for # every instrument (you can override the `get_idn` method if it doesn't work # in the standard VISA form for your instrument) which serves two purposes: # 1) verifies that you are connected to the instrument # 2) gets the ID info so it will be included with metadata snapshots later. self.connect_message() # instantiating and using this instrument (commented out because I can't actually do it!) # # from qcodes.instrument_drivers.weinschel.Weinschel_8320 import Weinschel_8320 # weinschel = Weinschel_8320('w8320_1', 'TCPIP0::172.20.2.212::inst0::INSTR') # weinschel.attenuation(40) ###Output _____no_output_____ ###Markdown VisaInstrument: a more involved exampleThe Keithley 2600 sourcemeter driver uses two channels. The actual driver is quite long, so here we show an abridged version that has:- A class defining a `Channel`. All the `Parameter`s of the `Channel` go here. - A nifty way to look up the model number, allowing it to be a driver for many different Keithley models ###Code class KeithleyChannel(InstrumentChannel): """ Class to hold the two Keithley channels, i.e. SMUA and SMUB. """ def __init__(self, parent: Instrument, name: str, channel: str) -> None: """ Args: parent: The Instrument instance to which the channel is to be attached. name: The 'colloquial' name of the channel channel: The name used by the Keithley, i.e. either 'smua' or 'smub' """ if channel not in ['smua', 'smub']: raise ValueError('channel must be either "smub" or "smua"') super().__init__(parent, name) self.model = self._parent.model vranges = self._parent._vranges iranges = self._parent._iranges self.add_parameter('volt', get_cmd='{}.measure.v()'.format(channel), get_parser=float, set_cmd='{}.source.levelv={}'.format(channel, '{:.12f}'), # note that the set_cmd is either the following format string #'smua.source.levelv={:.12f}' or 'smub.source.levelv={:.12f}' # depending on the value of `channel` label='Voltage', unit='V') self.add_parameter('curr', get_cmd='{}.measure.i()'.format(channel), get_parser=float, set_cmd='{}.source.leveli={}'.format(channel, '{:.12f}'), label='Current', unit='A') self.add_parameter('mode', get_cmd='{}.source.func'.format(channel), get_parser=float, set_cmd='{}.source.func={}'.format(channel, '{:d}'), val_mapping={'current': 0, 'voltage': 1}, docstring='Selects the output source.') self.add_parameter('output', get_cmd='{}.source.output'.format(channel), get_parser=float, set_cmd='{}.source.output={}'.format(channel, '{:d}'), val_mapping={'on': 1, 'off': 0}) self.add_parameter('nplc', label='Number of power line cycles', set_cmd='{}.measure.nplc={}'.format(channel, '{:.4f}'), get_cmd='{}.measure.nplc'.format(channel), get_parser=float, vals=vals.Numbers(0.001, 25)) self.channel = channel class Keithley_2600(VisaInstrument): """ This is the qcodes driver for the Keithley_2600 Source-Meter series, tested with Keithley_2614B """ def __init__(self, name: str, address: str, **kwargs) -> None: """ Args: name: Name to use internally in QCoDeS address: VISA ressource address """ super().__init__(name, address, terminator='\n', **kwargs) model = self.ask('localnode.model') knownmodels = ['2601B', '2602B', '2604B', '2611B', '2612B', '2614B', '2635B', '2636B'] if model not in knownmodels: kmstring = ('{}, '*(len(knownmodels)-1)).format(*knownmodels[:-1]) kmstring += 'and {}.'.format(knownmodels[-1]) raise ValueError('Unknown model. 
Known model are: ' + kmstring) # Add the channel to the instrument for ch in ['a', 'b']: ch_name = 'smu{}'.format(ch) channel = KeithleyChannel(self, ch_name, ch_name) self.add_submodule(ch_name, channel) # display parameter # Parameters NOT specific to a channel still belong on the Instrument object # In this case, the Parameter controls the text on the display self.add_parameter('display_settext', set_cmd=self._display_settext, vals=vals.Strings()) self.connect_message() ###Output _____no_output_____ ###Markdown VisaInstruments: Simulating the instrumentAs mentioned above, drivers subclassing `VisaInstrument` have the nice property that they may be connected to a simulated version of the physical instrument. See the [Creating Simulated PyVISA Instruments](Creating-Simulated-PyVISA-Instruments.ipynb) notebook for more information. If you are writing a `VisaInstrument` driver, please consider spending 20 minutes to also add a simulated instrument and a test. DLL-based instrumentsThe Alazar cards use their own DLL. C interfaces tend to need a lot of boilerplate, so I'm not going to include it all. The key is: use `Instrument` directly, load the DLL, and have parameters interact with it. ###Code class AlazarTech_ATS(Instrument): dll_path = 'C:\\WINDOWS\\System32\\ATSApi' def __init__(self, name, system_id=1, board_id=1, dll_path=None, **kwargs): super().__init__(name, **kwargs) # connect to the DLL self._ATS_dll = ctypes.cdll.LoadLibrary(dll_path or self.dll_path) self._handle = self._ATS_dll.AlazarGetBoardBySystemID(system_id, board_id) if not self._handle: raise Exception('AlazarTech_ATS not found at ' 'system {}, board {}'.format(system_id, board_id)) self.buffer_list = [] # the Alazar driver includes its own parameter class to hold values # until later config is called, and warn if you try to read a value # that hasn't been sent to config. self.add_parameter(name='clock_source', parameter_class=AlazarParameter, label='Clock Source', unit=None, value='INTERNAL_CLOCK', byte_to_value_dict={1: 'INTERNAL_CLOCK', 4: 'SLOW_EXTERNAL_CLOCK', 5: 'EXTERNAL_CLOCK_AC', 7: 'EXTERNAL_CLOCK_10_MHz_REF'}) # etc... ###Output _____no_output_____ ###Markdown Manual instrumentsA totally manual instrument (like the ithaco 1211) will contain only `ManualParameter`s. Some instruments may have a mix of manual and standard parameters. Here we also define a new `CurrentParameter` class that uses the ithaco parameters to convert a measured voltage to a current. When subclassing a parameter class (`Parameter`, `MultiParameter`, ...), the functions for setting and getting should be called `get_raw` and `set_raw`, respectively. ###Code class CurrentParameter(MultiParameter): """ Current measurement via an Ithaco preamp and a measured voltage. To be used when you feed a current into the Ithaco, send the Ithaco's output voltage to a lockin or other voltage amplifier, and you have the voltage reading from that amplifier as a qcodes parameter. ``CurrentParameter.get()`` returns ``(voltage_raw, current)`` Args: measured_param (Parameter): a gettable parameter returning the voltage read from the Ithaco output. c_amp_ins (Ithaco_1211): an Ithaco instance where you manually maintain the present settings of the real Ithaco amp. name (str): the name of the current output. Default 'curr'. Also used as the name of the whole parameter. 
""" def __init__(self, measured_param, c_amp_ins, name='curr'): p_name = measured_param.name p_label = getattr(measured_param, 'label', None) p_unit = getattr(measured_param, 'units', None) super().__init__(name=name, names=(p_name+'_raw', name), shapes=((), ()), labels=(p_label, 'Current'), units=(p_unit, 'A')) self._measured_param = measured_param self._instrument = c_amp_ins def get_raw(self): volt = self._measured_param.get() current = (self._instrument.sens.get() * self._instrument.sens_factor.get()) * volt if self._instrument.invert.get(): current *= -1 value = (volt, current) return value class Ithaco_1211(Instrument): """ This is the qcodes driver for the Ithaco 1211 Current-preamplifier. This is a virtual driver only and will not talk to your instrument. """ def __init__(self, name, **kwargs): super().__init__(name, **kwargs) # ManualParameter has an "initial_value" kwarg, but if you use this # you must be careful to check that it's correct before relying on it. # if you don't set initial_value, it will start out as None. self.add_parameter('sens', parameter_class=ManualParameter, initial_value=1e-8, label='Sensitivity', units='A/V', vals=vals.Enum(1e-11, 1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-4, 1e-3)) self.add_parameter('invert', parameter_class=ManualParameter, initial_value=True, label='Inverted output', vals=vals.Bool()) self.add_parameter('sens_factor', parameter_class=ManualParameter, initial_value=1, label='Sensitivity factor', units=None, vals=vals.Enum(0.1, 1, 10)) self.add_parameter('suppression', parameter_class=ManualParameter, initial_value=1e-7, label='Suppression', units='A', vals=vals.Enum(1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-4, 1e-3)) self.add_parameter('risetime', parameter_class=ManualParameter, initial_value=0.3, label='Rise Time', units='msec', vals=vals.Enum(0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000)) def get_idn(self): return {'vendor': 'Ithaco (DL Instruments)', 'model': '1211', 'serial': None, 'firmware': None} ###Output _____no_output_____ ###Markdown Creating QCoDeS instrument drivers ###Code # most of the drivers only need a couple of these... moved all up here for clarity below from time import sleep, time import numpy as np import ctypes # only for DLL-based instrument import qcodes as qc from qcodes import (Instrument, VisaInstrument, ManualParameter, MultiParameter, validators as vals) from qcodes.instrument.channel import InstrumentChannel ###Output Logging hadn't been started. Activating auto-logging. Current session state plus future input saved. Filename : C:\Users\jenielse\.qcodes\logs\command_history.log Mode : append Output logging : True Raw input log : False Timestamping : True State : active Qcodes Logfile : C:\Users\jenielse\.qcodes\logs\210816-9804-qcodes.log ###Markdown Base ClassesThere are 3 available:- `VisaInstrument` - for most instruments that communicate over a text channel (ethernet, GPIB, serial, USB...) that do not have a custom DLL or other driver to manage low-level commands.- `IPInstrument` - a deprecated driver just for ethernet connections. Do not use this; use `VisaInstrument` instead.- `Instrument` - superclass of both `VisaInstrument` and `IPInstrument`, use this if you do not communicate over a text channel, for example: - PCI cards with their own DLLs - Instruments with only manual controls. If possible, please use a `VisaInstrument`, as this allows for the creation of a simulated instrument. 
(See the [Creating Simulated PyVISA Instruments](Creating-Simulated-PyVISA-Instruments.ipynb) notebook) Parameters and ChannelsBroadly speaking, a QCoDeS instrument driver is nothing but an object that holds a connection handle to the physical instrument and has some sub-objects that represent the state of the physical instrument. These sub-objects are the `Parameters`. Writing a driver basically boils down to adding a ton of `Parameters`. What's a Parameter?A parameter represents a single value of a single feature of an instrument, e.g. the frequency of a function generator, the mode of a multimeter (resistance, current, or voltage), or the input impedance of an oscilloscope channel. Each `Parameter` can have the following attributes: * `name`, the name used internally by QCoDeS, e.g. 'input_impedance' * `instrument`, the instrument this parameter belongs to, if any. * `label`, the label to use for plotting this parameter * `unit`, the physical unit. ALWAYS use SI units if a unit is applicable * `set_cmd`, the command to set the parameter. Either a SCPI string with a single '{}', or a function taking one argument (see examples below) * `get_cmd`, the command to get the parameter. Follows the same scheme as `set_cmd` * `vals`, a validator (from `qcodes.utils.validators`) to reject invalid values before they are sent to the instrument. Since there is no standard for how an instrument responds to an out-of-bound value (e.g. a 10 kHz function generator receiving 12e9 for its frequency), meaning that the user can expect anything from silent failure to the instrument breaking or suddenly outputting random noise, it is MUCH better to catch invalid values in software. Therefore, please provide a validator if at all possible. * `val_mapping`, a dictionary mapping human-readable values like 'High Impedance' to the instrument's internal representation like '372'. Not always needed. If supplied, a validator is automatically constructed. * `max_val_age`: Max time (in seconds) to trust a value stored in cache. If the parameter has not been set or measured more recently than this, an additional measurement will be performed in order to update the cached value. If it is ``None``, this behavior is disabled. ``max_val_age`` should not be used for a parameter that does not have a get function. * `get_parser`, a parser of the raw return value. Since all VISA instruments return strings, but users usually want numbers, `int` and `float` are popular `get_parsers` * `docstring` A short string describing the function of the parameter Golden rule: if a `Parameter` is settable, it must always accept its own output as input.There are two different ways of adding parameters to instruments. They are almost equivalent but comes with some trade-offs. We will show both below.You may either declare the parameter as an attribute directly on the instrument or add it via the via the `add_parameter` method on the instrument class.Declaring a parameter as an attribute directly on the instrument enables Sphinx, IDEs such as VSCode and static tools such as Mypy to work more fluently with the parameter than if it is created via `add_parameter` however you must take care to remember to pass `instrument=self` to the parameter such that theparameter will know which instrument it belongs to. Instrument.add_parameter is better suited for when you want to dynamically or programmatically add a parameter to an instrument. For historical reasons most instruments currently use `add_parameter`. 
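As a quick side-by-side illustration of the trade-off described above, here is a minimal sketch. The instrument and its `FREQ`/`AMPL` commands are made up, and the exact import path of `Parameter` may differ between QCoDeS versions, so treat it as a pattern rather than a copy-paste recipe.

```
from qcodes import VisaInstrument, validators as vals
from qcodes.instrument.parameter import Parameter


class MySource(VisaInstrument):
    """Sketch of a hypothetical source with one parameter per declaration style."""

    def __init__(self, name, address, **kwargs):
        super().__init__(name, address, terminator='\n', **kwargs)

        # Style 1: declare the parameter as an attribute. Remember to pass
        # instrument=self, and put a docstring right below so Sphinx documents it.
        self.frequency = Parameter('frequency',
                                   unit='Hz',
                                   get_cmd='FREQ?',
                                   set_cmd='FREQ {:f}',
                                   get_parser=float,
                                   vals=vals.Numbers(0, 1e6),
                                   instrument=self)
        """Control the output frequency"""

        # Style 2: add_parameter, convenient when parameters are created
        # dynamically or programmatically.
        self.add_parameter('amplitude',
                           unit='V',
                           get_cmd='AMPL?',
                           set_cmd='AMPL {:f}',
                           get_parser=float,
                           vals=vals.Numbers(0, 10))

        self.connect_message()
```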
FunctionsSimilar to parameters QCoDeS instruments implement the concept of functions that can be added to the instrument via `add_function`. They are meant to implement simple actions on the instrument such as resetting it. However, the functions do not add any value over normal python methods in the driver Class and we are planning to eventually remove them from QCoDeS. **We therefore encourage any driver developer to not use function in any new driver**. What's a Channel, then?A `Channel` is a submodule holding `Parameter`s. It sometimes makes sense to group `Parameter`s, for instance when an oscilloscope has four identical input channels. (see Keithley example below) LoggingEvery QCoDeS module should have its own logger that is named with the name of the module. So to create a logger put a line at the top of the module like this:```log = logging.getLogger(__name__)```Use this logger only to log messages that are not originating from an `Instrument` instance. For messages from within an instrument instance use the `log` member of the `Instrument` class, e.g```self.log.info(f"Could not connect at {address}")```This way the instrument name will be prepended to the log message and the log messages can be filtered according to the instrument they originate from. See the example notebook of the logger module for more info ([offline](../logging/logging_example.ipynb),[online](https://nbviewer.jupyter.org/github/QCoDeS/Qcodes/tree/master/docs/examples/logging/logging_example.ipynb)).When creating a nested `Instrument`, like e.g. something like the `InstrumentChannel` class, that has a `_parent` property, make sure that this property gets set before calling the `super().__init__` method, so that the full name of the instrument gets resolved correctly for the logging. VisaInstrument: Simple exampleThe Weinschel 8320 driver is about as basic a driver as you can get. It only defines one parameter, "attenuation". All the comments here are my additions to describe what's happening. ###Code class Weinschel_8320(VisaInstrument): """ QCoDeS driver for the stepped attenuator Weinschel is formerly known as Aeroflex/Weinschel """ # all instrument constructors should accept **kwargs and pass them on to # super().__init__ def __init__(self, name, address, **kwargs): # supplying the terminator means you don't need to remove it from every response super().__init__(name, address, terminator='\r', **kwargs) self.attenuation = Parameter( 'attenuation', unit='dB', # the value you set will be inserted in this command with # regular python string substitution. This instrument wants # an integer zero-padded to 2 digits. For robustness, don't # assume you'll get an integer input though - try to allow # floats (as opposed to {:0=2d}) set_cmd='ATTN ALL {:02.0f}', get_cmd='ATTN? 1', # setting any attenuation other than 0, 2, ... 60 will error. vals=vals.Enum(*np.arange(0, 60.1, 2).tolist()), # the return value of get() is a string, but we want to # turn it into a (float) number get_parser=float, instrument=self ) """Control the attenuation""" # The docstring below the Parameter declaration makes Sphinx document the attribute and it is therefore # possible to see from the documentation that the instrument has this parameter. It is strongly encouraged to # add a short docstring like this. # it's a good idea to call connect_message at the end of your constructor. 
# this calls the 'IDN' parameter that the base Instrument class creates for # every instrument (you can override the `get_idn` method if it doesn't work # in the standard VISA form for your instrument) which serves two purposes: # 1) verifies that you are connected to the instrument # 2) gets the ID info so it will be included with metadata snapshots later. self.connect_message() # instantiating and using this instrument (commented out because I can't actually do it!) # # from qcodes.instrument_drivers.weinschel.Weinschel_8320 import Weinschel_8320 # weinschel = Weinschel_8320('w8320_1', 'TCPIP0::172.20.2.212::inst0::INSTR') # weinschel.attenuation(40) ###Output _____no_output_____ ###Markdown VisaInstrument: a more involved exampleThe Keithley 2600 sourcemeter driver uses two channels. The actual driver is quite long, so here we show an abridged version that has:- A class defining a `Channel`. All the `Parameter`s of the `Channel` go here. - A nifty way to look up the model number, allowing it to be a driver for many different Keithley models ###Code class KeithleyChannel(InstrumentChannel): """ Class to hold the two Keithley channels, i.e. SMUA and SMUB. """ def __init__(self, parent: Instrument, name: str, channel: str) -> None: """ Args: parent: The Instrument instance to which the channel is to be attached. name: The 'colloquial' name of the channel channel: The name used by the Keithley, i.e. either 'smua' or 'smub' """ if channel not in ['smua', 'smub']: raise ValueError('channel must be either "smub" or "smua"') super().__init__(parent, name) self.model = self._parent.model vranges = self._parent._vranges iranges = self._parent._iranges self.volt = Parameter( 'volt', get_cmd='{}.measure.v()'.format(channel), get_parser=float, set_cmd='{}.source.levelv={}'.format(channel,'{:.12f}'), # note that the set_cmd is either the following format string #'smua.source.levelv={:.12f}' or 'smub.source.levelv={:.12f}' # depending on the value of `channel` label='Voltage', unit='V', instrument=self ) self.curr = Parameter( 'curr', get_cmd='{}.measure.i()'.format(channel), get_parser=float, set_cmd='{}.source.leveli={}'.format(channel, '{:.12f}'), label='Current', unit='A', instrument=self ) self.mode = Parameter( 'mode', get_cmd='{}.source.func'.format(channel), get_parser=float, set_cmd='{}.source.func={}'.format(channel, '{:d}'), val_mapping={'current': 0, 'voltage': 1}, docstring='Selects the output source.', instrument=self ) self.output = Parameter( 'output', get_cmd='{}.source.output'.format(channel), get_parser=float, set_cmd='{}.source.output={}'.format(channel, '{:d}'), val_mapping={'on': 1, 'off': 0}, instrument=self ) self.nplc = Parameter( 'nplc', label='Number of power line cycles', set_cmd='{}.measure.nplc={}'.format(channel, '{:.4f}'), get_cmd='{}.measure.nplc'.format(channel), get_parser=float, vals=vals.Numbers(0.001, 25), instrument=self ) self.channel = channel class Keithley_2600(VisaInstrument): """ This is the qcodes driver for the Keithley_2600 Source-Meter series, tested with Keithley_2614B """ def __init__(self, name: str, address: str, **kwargs) -> None: """ Args: name: Name to use internally in QCoDeS address: VISA ressource address """ super().__init__(name, address, terminator='\n', **kwargs) model = self.ask('localnode.model') knownmodels = ['2601B', '2602B', '2604B', '2611B', '2612B', '2614B', '2635B', '2636B'] if model not in knownmodels: kmstring = ('{}, '*(len(knownmodels)-1)).format(*knownmodels[:-1]) kmstring += 'and {}.'.format(knownmodels[-1]) raise 
ValueError('Unknown model. Known models are: ' + kmstring) # Add the channel to the instrument for ch in ['a', 'b']: ch_name = 'smu{}'.format(ch) channel = KeithleyChannel(self, ch_name, ch_name) self.add_submodule(ch_name, channel) # display parameter # Parameters NOT specific to a channel still belong on the Instrument object # In this case, the Parameter controls the text on the display self.display_settext = Parameter( 'display_settext', set_cmd=self._display_settext, vals=vals.Strings(), instrument=self ) self.connect_message() ###Output _____no_output_____ ###Markdown VisaInstrument: Simulating the instrumentAs mentioned above, drivers subclassing `VisaInstrument` have the nice property that they may be connected to a simulated version of the physical instrument. See the [Creating Simulated PyVISA Instruments](Creating-Simulated-PyVISA-Instruments.ipynb) notebook for more information. If you are writing a `VisaInstrument` driver, please consider spending 20 minutes to also add a simulated instrument and a test. DLL-based instrumentsThe Alazar cards use their own DLL. C interfaces tend to need a lot of boilerplate, so I'm not going to include it all. The key is: use `Instrument` directly, load the DLL, and have parameters interact with it. ###Code class AlazarTech_ATS(Instrument): dll_path = 'C:\\WINDOWS\\System32\\ATSApi' def __init__(self, name, system_id=1, board_id=1, dll_path=None, **kwargs): super().__init__(name, **kwargs) # connect to the DLL self._ATS_dll = ctypes.cdll.LoadLibrary(dll_path or self.dll_path) self._handle = self._ATS_dll.AlazarGetBoardBySystemID(system_id, board_id) if not self._handle: raise Exception('AlazarTech_ATS not found at ' 'system {}, board {}'.format(system_id, board_id)) self.buffer_list = [] # the Alazar driver includes its own parameter class to hold values # until later config is called, and warn if you try to read a value # that hasn't been sent to config. self.add_parameter(name='clock_source', parameter_class=AlazarParameter, label='Clock Source', unit=None, value='INTERNAL_CLOCK', byte_to_value_dict={1: 'INTERNAL_CLOCK', 4: 'SLOW_EXTERNAL_CLOCK', 5: 'EXTERNAL_CLOCK_AC', 7: 'EXTERNAL_CLOCK_10MHz_REF'}) # etc... ###Output _____no_output_____ ###Markdown It's very typical for DLL-based instruments to only be supported on Windows. In such a driver, care should be taken to ensure that the driver raises a clear error message if it is initialized on a different platform. This is typically best done by checking `sys.platform` as below. In this example we are using `ctypes.windll` to interact with the DLL. `windll` is only defined on Windows.QCoDeS is automatically typechecked with MyPy; this may give some complications for drivers that are not compatible with multiple OSes, as there is no supported way to disable the typecheck on a per-platform basis for a specific submodule. Specifically, MyPy will correctly notice that `self.dll` does not exist on non-Windows platforms unless we add the line `self.dll: Any = None` to the example below. By giving `self.dll` the type `Any` we effectively disable any typecheck related to `self.dll` on non-Windows platforms, which is exactly what we want. This works because MyPy knows how to interpret the `sys.platform` check and allows `self.dll` to have different types on different OSes.
###Code class SomeDLLInstrument(Instrument): dll_path = 'C:\\WINDOWS\\System32\\ATSApi' def __init__(self, name, dll_path=None, **kwargs): super().__init__(name, **kwargs) if sys.platform != 'win32': self.dll: Any = None raise OSError("SomeDLLInstrument only works on Windows") else: self.dll = ctypes.windll.LoadLibrary(dll_path) # etc... ###Output _____no_output_____ ###Markdown Manual instrumentsA totally manual instrument (like the Ithaco 1211) will contain only `ManualParameter`s. Some instruments may have a mix of manual and standard parameters. Here we also define a new `CurrentParameter` class that uses the Ithaco parameters to convert a measured voltage to a current. When subclassing a parameter class (`Parameter`, `MultiParameter`, ...), the functions for setting and getting should be called `get_raw` and `set_raw`, respectively. ###Code class CurrentParameter(MultiParameter): """ Current measurement via an Ithaco preamp and a measured voltage. To be used when you feed a current into the Ithaco, send the Ithaco's output voltage to a lockin or other voltage amplifier, and you have the voltage reading from that amplifier as a qcodes parameter. ``CurrentParameter.get()`` returns ``(voltage_raw, current)`` Args: measured_param (Parameter): a gettable parameter returning the voltage read from the Ithaco output. c_amp_ins (Ithaco_1211): an Ithaco instance where you manually maintain the present settings of the real Ithaco amp. name (str): the name of the current output. Default 'curr'. Also used as the name of the whole parameter. """ def __init__(self, measured_param, c_amp_ins, name='curr', **kwargs): p_name = measured_param.name p_label = getattr(measured_param, 'label', None) p_unit = getattr(measured_param, 'units', None) super().__init__(name=name, names=(p_name+'_raw', name), shapes=((), ()), labels=(p_label, 'Current'), units=(p_unit, 'A'), instrument=c_amp_ins, **kwargs, ) self._measured_param = measured_param def get_raw(self): volt = self._measured_param.get() current = (self.instrument.sens.get() * self.instrument.sens_factor.get()) * volt if self.instrument.invert.get(): current *= -1 value = (volt, current) return value class Ithaco_1211(Instrument): """ This is the qcodes driver for the Ithaco 1211 Current-preamplifier. This is a virtual driver only and will not talk to your instrument. """ def __init__(self, name, **kwargs): super().__init__(name, **kwargs) # ManualParameter has an "initial_value" kwarg, but if you use this # you must be careful to check that it's correct before relying on it. # if you don't set initial_value, it will start out as None.
self.add_parameter('sens', parameter_class=ManualParameter, initial_value=1e-8, label='Sensitivity', units='A/V', vals=vals.Enum(1e-11, 1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-4, 1e-3)) self.add_parameter('invert', parameter_class=ManualParameter, initial_value=True, label='Inverted output', vals=vals.Bool()) self.add_parameter('sens_factor', parameter_class=ManualParameter, initial_value=1, label='Sensitivity factor', units=None, vals=vals.Enum(0.1, 1, 10)) self.add_parameter('suppression', parameter_class=ManualParameter, initial_value=1e-7, label='Suppression', units='A', vals=vals.Enum(1e-10, 1e-09, 1e-08, 1e-07, 1e-06, 1e-05, 1e-4, 1e-3)) self.add_parameter('risetime', parameter_class=ManualParameter, initial_value=0.3, label='Rise Time', units='msec', vals=vals.Enum(0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300, 1000)) def get_idn(self): return {'vendor': 'Ithaco (DL Instruments)', 'model': '1211', 'serial': None, 'firmware': None} ###Output _____no_output_____ ###Markdown Custom Parameter classesWhen you call:```self.add_parameter(name, **kwargs)```you create a `Parameter`. But with the `parameter_class` kwarg you can invoke any class you want:```self.add_parameter(name, parameter_class=OtherClass, **kwargs)```- `Parameter` handles most common instrument settings and measurements. - Accepts get and/or set commands as either strings for the instrument's `ask` and `write` methods, or functions/methods. The set and get commands may also be set to `False` and `None`. `False` corresponds to "no get/set method available" (example: the reading of a voltmeter is not settable, so we set `set_cmd=False`). `None` corresponds to a manually updated parameter (example: an instrument with no remote interface). - Has options for translating between instrument codes and more meaningful data values - Supports software-controlled ramping- Any other parameter class may be used in `add_parameter`, if it accepts `name` and `instrument` as constructor kwargs. Generally these should be subclasses of `Parameter`, `ParameterWithSetpoints`, `ArrayParameter`, or `MultiParameter`. `ParameterWithSetpoints` is specifically designed to handle the situations where the instrument returns an array of data with associated setpoints. An example of how to use it can be found in the notebook [Simple Example of ParameterWithSetpoints](../Parameters/Simple-Example-of-ParameterWithSetpoints.ipynb)`ArrayParameter` is an older alternative that does the same thing. However, it is significantly less flexible and much harder to use correctly, though it is still used in a significant number of drivers. **It is not recommended for any new driver.**`MultiParameter` is designed for the situation where multiple different types of data are captured from the same instrument command.It is important that parameter subclasses forward the `name`, `label(s)`, `unit(s)` and `instrument`, along with any unknown `**kwargs`, to the superclass. On/Off parametersFrequently, an instrument has parameters which can be expressed in terms of "something is on or off". Moreover, usually it is not easy to translate the lingo of the instrument to something that can have simply the value of `True` or `False` (which are typical in software).
Even further, it may be difficult to find consensus between users on a convention: is it `on`/`off`, or `ON`/`OFF`, or python `True`/`False`, or `1`/`0`, or something else?This case becomes even more complex if the instrument's API (say, the corresponding VISA command) uses unexpected values for such a parameter. For example, turning an output "on" may correspond to a VISA command `DEV:CH:BLOCK 0`, which means "set blocking of the channel to 0", where 0 has the meaning of the boolean value False, so that altogether this command actually enables the output on this channel.This results in inconsistency among instrument drivers where for some instrument, say, a `display` parameter has 'on'/'off' values for input, while for a different instrument a similar `display` parameter has `'ON'`/`'OFF'` values or `1`/`0`.Note that this particular example of a `display` parameter is trivial because the ambiguity and inconsistency for "this kind" of parameters can be solved by having the name of the parameter be `display_enabled` and the allowed input values to be python `bool` `True`/`False`.Anyway, when defining parameters where the solution does not come trivially, please consider setting `val_mapping` of a parameter to the output of the `create_on_off_val_mapping(on_val=, off_val=)` function from the `qcodes.utils.helpers` package. The function takes care of creating a `val_mapping` dictionary that maps given instrument-side values of `on_val` and `off_val` to `True`/`False`, `'ON'`/`'OFF'`, `'on'`/`'off'`, and other commonly used ones. Note that when getting a value of such a parameter, the user will not get `'ON'` or `'off'` or `'oFF'` - instead, `True`/`False` will be returned. A short sketch of this pattern is included at the end of this notebook. Dynamically adding and removing parametersSometimes when conditions change (for example, the mode of operation of the instrument is changed from current to voltage measurement) you want different parameters to be available.To delete existing parameters:```del self.parameters[name_to_delete]```And to add more, do the same thing as you did initially:```self.add_parameter(new_name, **kwargs)``` Handling interruption of measurements A QCoDeS driver should be prepared for interruptions of the measurement triggered by a KeyboardInterrupt from the end user. If an interrupt happens at an unfortunate time, i.e. while communicating with the instrument or writing the results of a measurement, this may leave the program in an inconsistent state, e.g. with a command left in the output buffer of a VISA instrument. To protect against this, QCoDeS ships with a context manager that intercepts KeyboardInterrupts and delays them until it is safe to stop the program. By default, QCoDeS protects writing to the database and communicating with VISA instruments in this way. However, there may be situations where a driver needs additional protection around a critical piece of code. The following example shows how a critical piece of code can be protected. The reader is encouraged to experiment with this using the `interrupt the kernel` button in this notebook. Note how the first KeyboardInterrupt triggers a message to the screen and then executes the code within the context manager but not the code outside. Furthermore, two KeyboardInterrupts in rapid succession will trigger an immediate interrupt that does not complete the code within the context manager. The context manager can therefore be wrapped around any piece of code that the end user should not normally be allowed to interrupt.
###Code from qcodes.utils.delaykeyboardinterrupt import DelayedKeyboardInterrupt import time with DelayedKeyboardInterrupt(): for i in range(10): time.sleep(0.2) print(i) print("Loop completed") print("Executing code after context manager") ###Output _____no_output_____
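###Markdown On/Off parameters: a minimal sketch of the `create_on_off_val_mapping` recommendation from the "On/Off parameters" section above. The instrument class, its commands (`OUTP {}` / `OUTP?`) and the parameter name are hypothetical and only serve to illustrate the pattern; the helper itself lives in `qcodes.utils.helpers` as described earlier. With real hardware, `instr.output_enabled(True)` would send the instrument-side "on" value, and `instr.output_enabled()` would always return `True` or `False`. ###Code
from qcodes import Parameter, VisaInstrument
from qcodes.utils.helpers import create_on_off_val_mapping


class HypotheticalSource(VisaInstrument):
    """Sketch of a driver exposing an on/off style parameter."""

    def __init__(self, name, address, **kwargs):
        super().__init__(name, address, terminator='\n', **kwargs)

        self.output_enabled = Parameter(
            'output_enabled',
            # the instrument-side command and values ('OUTP 1' / 'OUTP 0')
            # are made up for this sketch
            set_cmd='OUTP {}',
            get_cmd='OUTP?',
            # maps True/False (and common aliases such as 'on'/'ON')
            # to the instrument-side values '1'/'0'
            val_mapping=create_on_off_val_mapping(on_val='1', off_val='0'),
            instrument=self,
        )
        """Turn the (hypothetical) output on or off"""

        self.connect_message()
###Output _____no_output_____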
tutorials/1. Introspecting models/Introspecting models.ipynb
###Markdown Introspecting models To help investigators work with models and simulations, [BioSimulators-utils](https://github.com/biosimulators/Biosimulators_utils) provides a method, `get_parameters_variables_outputs_for_simulation` for introspecting model/simulation files. This method can be helpful for programmatically constructing SED-ML files and COMBINE archives from other formats.This method can extract several types of information about model/simulation files:* Inputs to simulations: Boolean and numeric-valued attributes of models/simulations such as for constants and initial conditions.* Outputs of simulations: Possible variables which can be observed such as values of objectives, concentrations of species, fluxes of reactions, and sizes of compartments.* Simulations: Settings for simulations such as start and stop times, algorithms, and algorithm parameters.* Plots: Settings for plots of simulation results, such as the obserables that should be painted on each curve of a 2D line plot.This method currently supports the following formats:* [BioNetGen Language (BNGL)](https://www.bionetgen.org/)* [CellML](https://www.cellml.org/)* [GINsim Markup Language (GINML, ZGINML)](http://ginsim.org/)* [NeuroML](https://neuroml.org/)/[Low Entropy Model Specification (LEMS)](https://lems.github.io/LEMS/)* [Resource Balance Analysis (RBA)](https://sysbioinra.github.io/RBApy/)* [Smoldyn](http://www.smoldyn.org/)* [Systems Biology Markup Language (SBML)](http://sbml.org)* [SBML flux balance constraints (FBC) package](http://sbml.org/Documents/Specifications/SBML_Level_3/Packages/fbc)* [SBML qualitative models (qual) package](http://sbml.org/Documents/Specifications/SBML_Level_3/Packages/qual)* [SBML Mass Action Stoichiometric Simulation (MASS) schema](https://masspy.readthedocs.io/en/stable/tutorials/reading_writing_models.html)* [XPP ODE format](http://www.math.pitt.edu/~bard/xpp/help/xppodes.html)This tutorial illustrates how to use BioSimulators utils to introspect a model. 1. Install BioSimulators-utils with options for the model formats that you would like to introspect For example, install BioSimulators-utils with the `sbml` option to enable introspection of models encoded in SBML. More information about the available installation options is available at [https://docs.biosimulators.org/](https://docs.biosimulators.org/Biosimulators_utils/). 
###Code !pip install biosimulators-utils[sbml] ###Output Invalid -W option ignored: invalid module name: 'biosimulators_utils.warnings' Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: biosimulators-utils[sbml] in /usr/local/lib/python3.9/site-packages (0.1.130) Requirement already satisfied: appdirs in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (1.4.4) Requirement already satisfied: cement in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (3.0.4) Requirement already satisfied: kisao>=2.29 in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (2.29) Requirement already satisfied: pyomexmeta>=1.2.13 in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (1.2.13) Requirement already satisfied: openpyxl in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (3.0.9) Requirement already satisfied: validators in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (0.18.2) Requirement already satisfied: requests-cache in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (0.8.2.dev1) Requirement already satisfied: yamldown in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (0.1.8) Requirement already satisfied: matplotlib in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (3.2.0) Requirement already satisfied: setuptools in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (57.5.0) Requirement already satisfied: biopython in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (1.79) Requirement already satisfied: pyyaml in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (6.0b1) Requirement already satisfied: rdflib in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (6.0.1) Requirement already satisfied: pandas in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (1.3.3) Requirement already satisfied: evalidate in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (0.7.8) Requirement already satisfied: pronto>=2.4 in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (2.4.3) Requirement already satisfied: python-libcombine>=0.2.11 in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (0.2.13) Requirement already satisfied: natsort in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (7.1.1) Requirement already satisfied: numpy in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (1.19.3) Requirement already satisfied: simplejson in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (3.17.5) Requirement already satisfied: python-dateutil in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (2.8.2) Requirement already satisfied: requests in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (2.26.0) Requirement already satisfied: python-libsedml>=2.0.16 in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (2.0.26) Requirement already satisfied: mpmath in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (1.2.1) Requirement already satisfied: h5py in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (3.4.0) Requirement already satisfied: networkx>=2.6 in /usr/local/lib/python3.9/site-packages (from 
biosimulators-utils[sbml]) (2.6.3) Requirement already satisfied: termcolor in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (1.1.0) Requirement already satisfied: lxml in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (4.6.3) Requirement already satisfied: python-libsbml in /usr/local/lib/python3.9/site-packages (from biosimulators-utils[sbml]) (5.19.0) Requirement already satisfied: fastobo~=0.10.0 in /usr/local/lib/python3.9/site-packages (from pronto>=2.4->biosimulators-utils[sbml]) (0.10.2.post1) Requirement already satisfied: chardet<5.0,>=3.0 in /usr/local/lib/python3.9/site-packages (from pronto>=2.4->biosimulators-utils[sbml]) (4.0.0) Requirement already satisfied: pydot>=1.4.1 in /usr/local/lib/python3.9/site-packages (from pyomexmeta>=1.2.13->biosimulators-utils[sbml]) (1.4.2) Requirement already satisfied: graphviz>=0.15 in /usr/local/lib/python3.9/site-packages (from pyomexmeta>=1.2.13->biosimulators-utils[sbml]) (0.17) Requirement already satisfied: pyparsing>=2.1.4 in /usr/local/lib/python3.9/site-packages (from pydot>=1.4.1->pyomexmeta>=1.2.13->biosimulators-utils[sbml]) (3.0.0rc2) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.9/site-packages (from python-dateutil->biosimulators-utils[sbml]) (1.16.0) Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.9/site-packages (from matplotlib->biosimulators-utils[sbml]) (1.3.2) Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.9/site-packages (from matplotlib->biosimulators-utils[sbml]) (0.10.0) Requirement already satisfied: et-xmlfile in /usr/local/lib/python3.9/site-packages (from openpyxl->biosimulators-utils[sbml]) (1.1.0) Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.9/site-packages (from pandas->biosimulators-utils[sbml]) (2021.3) Requirement already satisfied: isodate in /usr/local/lib/python3.9/site-packages (from rdflib->biosimulators-utils[sbml]) (0.6.0) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.9/site-packages (from requests->biosimulators-utils[sbml]) (1.26.7) Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.9/site-packages (from requests->biosimulators-utils[sbml]) (2.0.6) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.9/site-packages (from requests->biosimulators-utils[sbml]) (2021.5.30) Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.9/site-packages (from requests->biosimulators-utils[sbml]) (3.2) Requirement already satisfied: url-normalize<2.0,>=1.4 in /usr/local/lib/python3.9/site-packages (from requests-cache->biosimulators-utils[sbml]) (1.4.3) Requirement already satisfied: attrs<22.0,>=21.2 in /usr/local/lib/python3.9/site-packages (from requests-cache->biosimulators-utils[sbml]) (21.2.0) Requirement already satisfied: cattrs<2.0,>=1.8 in /usr/local/lib/python3.9/site-packages (from requests-cache->biosimulators-utils[sbml]) (1.8.0) Requirement already satisfied: decorator>=3.4.0 in /usr/local/lib/python3.9/site-packages (from validators->biosimulators-utils[sbml]) (5.1.0) ###Markdown 2. Import `get_parameters_variables_outputs_for_simulation` ###Code from biosimulators_utils.sedml.model_utils import get_parameters_variables_outputs_for_simulation ###Output _____no_output_____ ###Markdown 3. 
Import `ModelLanguage` to describe the format that should be introspected ###Code from biosimulators_utils.sedml.data_model import ModelLanguage ###Output _____no_output_____ ###Markdown 4. Import an additional class to describe the default type of simulation that should be introspected from the model file if the file does not describe any simulations ###Code from biosimulators_utils.sedml.data_model import UniformTimeCourseSimulation ###Output _____no_output_____ ###Markdown 5. Execute `get_parameters_variables_outputs_for_simulation` on a model file Execute the following code to introspect the Systems Biology Markup Language (SBML) file for the [Ciliberto et al. morphogenesis checkpoint model](../_data/Ciliberto-J-Cell-Biol-2003-morphogenesis-checkpoint-continuous.xml). ###Code inputs, simulations, outputs, plots = get_parameters_variables_outputs_for_simulation( model_filename='../_data/Ciliberto-J-Cell-Biol-2003-morphogenesis-checkpoint-continuous.xml', model_language=ModelLanguage.SBML, simulation_type=UniformTimeCourseSimulation) ###Output _____no_output_____ ###Markdown The first output, `inputs`, describes the inputs to simulations of the model. This captures all of the attributes of a simulation that could be modified. This output is a list of instances of `biosimulators_utils.sedml.data_model.ModelAttributeChange`. Each instance captures a suggested id and name for a change for for the corresponding model component, the address of the component within the model, and its default value encoded into a string. ###Code import yaml print(yaml.dump({input.id: {'target': input.target, 'default': input.new_value} for input in inputs})) ###Output init_conc_species_BE: default: '2.429618e-4' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='BE']/@initialConcentration init_conc_species_Cdc20: default: '1.1722378' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Cdc20']/@initialConcentration init_conc_species_Cdc20a: default: '1.4384692' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Cdc20a']/@initialConcentration init_conc_species_Cdh1: default: '0.99263656' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Cdh1']/@initialConcentration init_conc_species_Clb: default: '0.18453673' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Clb']/@initialConcentration init_conc_species_Cln: default: '0.053600963' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Cln']/@initialConcentration init_conc_species_IE: default: '0.52220768' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='IE']/@initialConcentration init_conc_species_Mcm: default: '0.93289256' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Mcm']/@initialConcentration init_conc_species_Mih1a: default: '0.80809075' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Mih1a']/@initialConcentration init_conc_species_PClb: default: '3.020305e-5' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='PClb']/@initialConcentration init_conc_species_PSwe1: default: '2.050078e-4' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='PSwe1']/@initialConcentration init_conc_species_PSwe1M: default: '0.013336782' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='PSwe1M']/@initialConcentration init_conc_species_PTrim: default: '1.402314e-5' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='PTrim']/@initialConcentration init_conc_species_SBF: 
default: '0.12405464' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='SBF']/@initialConcentration init_conc_species_Sic: default: '0.0035491784' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Sic']/@initialConcentration init_conc_species_Swe1: default: '3.158858e-4' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Swe1']/@initialConcentration init_conc_species_Swe1M: default: '0.018360127' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Swe1M']/@initialConcentration init_conc_species_Trim: default: '0.084410675' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Trim']/@initialConcentration init_conc_species_mass: default: '0.80224854' target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='mass']/@initialConcentration init_size_compartment_compartment: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfCompartments/sbml:compartment[@id='compartment']/@size value_parameter_BUD: default: '0' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='BUD']/@value value_parameter_Cdh1in: default: '0' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Cdh1in']/@value value_parameter_Cdh1tot: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Cdh1tot']/@value value_parameter_IEin: default: '0' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='IEin']/@value value_parameter_IEtot: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='IEtot']/@value value_parameter_Jamih: default: '0.1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Jamih']/@value value_parameter_Jawee: default: '0.05' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Jawee']/@value value_parameter_Jiwee: default: '0.05' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Jiwee']/@value value_parameter_Jm: default: '10' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Jm']/@value value_parameter_Kacdh_doubleprime: default: '10' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Kacdh_doubleprime']/@value value_parameter_Kacdh_prime: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Kacdh_prime']/@value value_parameter_Mcmin: default: '0' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Mcmin']/@value value_parameter_Mcmtot: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Mcmtot']/@value value_parameter_Mih: default: '0' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Mih']/@value value_parameter_Mih1: default: '0' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Mih1']/@value value_parameter_Mih1tot: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Mih1tot']/@value value_parameter_Mih_ast: default: '0' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Mih_ast']/@value value_parameter_SBFin: default: '0' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='SBFin']/@value value_parameter_SBFtot: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='SBFtot']/@value value_parameter_Swe1T: default: '0' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Swe1T']/@value value_parameter_Vamih: default: '1' target: 
/sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Vamih']/@value value_parameter_Vawee: default: '0.3' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Vawee']/@value value_parameter_Vimih: default: '0.3' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Vimih']/@value value_parameter_Viwee: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='Viwee']/@value value_parameter_eps: default: '0.5' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='eps']/@value value_parameter_flag: default: '0' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='flag']/@value value_parameter_jacdc20: default: '1.000000e-3' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='jacdc20']/@value value_parameter_jacdh: default: '0.01' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='jacdh']/@value value_parameter_jaie: default: '0.01' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='jaie']/@value value_parameter_jamcm: default: '0.1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='jamcm']/@value value_parameter_jasbf: default: '0.01' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='jasbf']/@value value_parameter_jicdc20: default: '1.000000e-3' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='jicdc20']/@value value_parameter_jicdh: default: '0.01' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='jicdh']/@value value_parameter_jiie: default: '0.01' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='jiie']/@value value_parameter_jimcm: default: '0.1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='jimcm']/@value value_parameter_jimih: default: '0.1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='jimih']/@value value_parameter_jisbf: default: '0.01' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='jisbf']/@value value_parameter_jscdc20: default: '0.3' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='jscdc20']/@value value_parameter_kacdc20: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kacdc20']/@value value_parameter_kaie: default: '0.1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kaie']/@value value_parameter_kamcm: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kamcm']/@value value_parameter_kasbf_doubleprime: default: '0' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kasbf_doubleprime']/@value value_parameter_kasbf_prime: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kasbf_prime']/@value value_parameter_kass: default: '300' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kass']/@value value_parameter_kdbud: default: '0.1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kdbud']/@value value_parameter_kdcdc20: default: '0.1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kdcdc20']/@value value_parameter_kdclb_doubleprime: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kdclb_doubleprime']/@value value_parameter_kdclb_prime: default: '0.015' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kdclb_prime']/@value 
value_parameter_kdclb_tripleprime: default: '0.1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kdclb_tripleprime']/@value value_parameter_kdcln: default: '0.1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kdcln']/@value value_parameter_kdiss: default: '0.1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kdiss']/@value value_parameter_kdsic: default: '0.01' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kdsic']/@value value_parameter_kdsic_doubleprime: default: '3' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kdsic_doubleprime']/@value value_parameter_kdsic_prime: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kdsic_prime']/@value value_parameter_kdswe_doubleprime: default: '0.05' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kdswe_doubleprime']/@value value_parameter_kdswe_prime: default: '0.007' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kdswe_prime']/@value value_parameter_khsl1: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='khsl1']/@value value_parameter_khsl1r: default: '0.01' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='khsl1r']/@value value_parameter_kicdc20: default: '0.25' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kicdc20']/@value value_parameter_kicdh: default: '35' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kicdh']/@value value_parameter_kicdh_prime: default: '2' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kicdh_prime']/@value value_parameter_kiie: default: '0.04' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kiie']/@value value_parameter_kimcm: default: '0.15' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kimcm']/@value value_parameter_kisbf_doubleprime: default: '2' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kisbf_doubleprime']/@value value_parameter_kisbf_prime: default: '1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kisbf_prime']/@value value_parameter_kmih: default: '0' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kmih']/@value value_parameter_kmih_doubleprime: default: '0.5' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kmih_doubleprime']/@value value_parameter_kmih_prime: default: '5' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kmih_prime']/@value value_parameter_ksbud: default: '0.1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='ksbud']/@value value_parameter_kscdc20_doubleprime: default: '0.3' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kscdc20_doubleprime']/@value value_parameter_kscdc20_prime: default: '0.005' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kscdc20_prime']/@value value_parameter_ksclb: default: '0.015' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='ksclb']/@value value_parameter_kscln: default: '0.1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kscln']/@value value_parameter_kssic: default: '0.1' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kssic']/@value value_parameter_ksswe: default: '0.0025' target: 
/sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='ksswe']/@value value_parameter_kssweC: default: '0' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kssweC']/@value value_parameter_kswe: default: '0' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kswe']/@value value_parameter_kswe_doubleprime: default: '0.01' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kswe_doubleprime']/@value value_parameter_kswe_prime: default: '2' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kswe_prime']/@value value_parameter_kswe_tripleprime: default: '0.2' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='kswe_tripleprime']/@value value_parameter_mu: default: '0.005' target: /sbml:sbml/sbml:model/sbml:listOfParameters/sbml:parameter[@id='mu']/@value ###Markdown The second output, `simulations`, describes the simulations specified in the model file. This output is a list of instances of `biosimulators_utils.sedml.data_model.Simulation`. If no simulations were described in the file, `get_parameters_variables_outputs_for_simulation` returns an array with a single instance of the default type of simulation with a recommended simulation algorithm. ###Code for i_simulation, simulation in enumerate(simulations): print(f'Simulation {i_simulation + 1}') print(f' Type: {simulation.__class__.__name__}') print(f' Initial time: {simulation.initial_time}') print(f' Output start time: {simulation.output_start_time}') print(f' Output end time: {simulation.output_end_time}') print(f' Number of steps: {simulation.number_of_steps}') print(f' Algorithm: {simulation.algorithm.kisao_id}') ###Output Simulation 1 Type: UniformTimeCourseSimulation Initial time: 0.0 Output start time: 0.0 Output end time: 1.0 Number of steps: 10 Algorithm: KISAO_0000019 ###Markdown The third output, `outputs`, describes the possible outputs that could be recorded from simulations of the model. This output is a list of instances of `biosimulators_utils.sedml.data_model.Variable`. Each instance captures a suggested id and name for and observable for the corresponding model component and its address within the model. 
###Code import yaml print(yaml.dump({output.id: {'target': output.target, 'symbol': output.symbol} for output in outputs})) ###Output dynamics_species_BE: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='BE'] dynamics_species_Cdc20: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Cdc20'] dynamics_species_Cdc20a: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Cdc20a'] dynamics_species_Cdh1: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Cdh1'] dynamics_species_Clb: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Clb'] dynamics_species_Cln: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Cln'] dynamics_species_IE: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='IE'] dynamics_species_Mcm: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Mcm'] dynamics_species_Mih1a: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Mih1a'] dynamics_species_PClb: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='PClb'] dynamics_species_PSwe1: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='PSwe1'] dynamics_species_PSwe1M: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='PSwe1M'] dynamics_species_PTrim: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='PTrim'] dynamics_species_SBF: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='SBF'] dynamics_species_Sic: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Sic'] dynamics_species_Swe1: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Swe1'] dynamics_species_Swe1M: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Swe1M'] dynamics_species_Trim: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='Trim'] dynamics_species_mass: symbol: null target: /sbml:sbml/sbml:model/sbml:listOfSpecies/sbml:species[@id='mass'] time: symbol: !!python/object/apply:biosimulators_utils.sedml.data_model.Symbol - urn:sedml:symbol:time target: null ###Markdown The fourth output, `plots`, describes the plots specified in the model file. This output is a list of instances of `biosimulators_utils.sedml.data_model.Plot`. ###Code import yaml print(yaml.dump([plot.id for plot in plots])) ###Output [] ###Markdown All model formats contain information about parameters and variables. However, most model formats, such as SBML, do not contain any information about simulations and plots. 6. Getting the default value of each input in its native data type By default, `get_parameters_variables_outputs_for_simulation` returns the default value of each simulation input as a string because the [Simulation Experiment Description Model Language (SBML)](http://sed-ml.org/) uses strings to describe the values of model changes.As illustrated below, the `native_data_types` option can be used to return the default value of each simulation input in its native data type (e.g., `bool`, `float`, `int`). 
###Code native_inputs, _, _, _ = get_parameters_variables_outputs_for_simulation( model_filename='../_data/Ciliberto-J-Cell-Biol-2003-morphogenesis-checkpoint-continuous.xml', model_language=ModelLanguage.SBML, simulation_type=UniformTimeCourseSimulation, native_data_types=True) print(f'SED-ML data type: {inputs[0].new_value.__class__}') print(f'Native data type: {native_inputs[0].new_value.__class__}') ###Output SED-ML data type: <class 'str'> Native data type: <class 'float'> ###Markdown 7. Getting the ids and names of the model components corresponding to each simulation input and output By default, `get_parameters_variables_outputs_for_simulation` returns suggested ids and names for model changes and observables for inputs and outputs. As illustrated below, the `native_ids` option can be used to retrieve the id and name of each corresponding model element. ###Code native_inputs, _, native_outputs, _ = get_parameters_variables_outputs_for_simulation( model_filename='../_data/Ciliberto-J-Cell-Biol-2003-morphogenesis-checkpoint-continuous.xml', model_language=ModelLanguage.SBML, simulation_type=UniformTimeCourseSimulation, native_ids=True) print(f'Id of a model component that can be changed: {native_inputs[0].id}') print(f'Suggested id for a change to this component: {inputs[0].id}') print(f'Id of a model component that can be observed: {native_outputs[1].id}') print(f'Suggested id for an observable for the model component: {outputs[1].id}') ###Output Id of a model component that can be observed: Trim Suggested id for an observable for the model component: dynamics_species_Trim
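###Markdown A small add-on sketch: collect the introspection results from the earlier cells into plain Python data structures, a typical first step when programmatically building SED-ML documents from them. It only uses the `inputs` and `outputs` lists and the attributes (`id`, `new_value`, `target`, `symbol`) already shown above; no additional BioSimulators-utils API is assumed. ###Code
# Map each changeable model attribute to its (string-encoded) default value,
# as returned by the default introspection in step 5
default_values = {inp.id: inp.new_value for inp in inputs}

# Split the possible observables into target-based variables (e.g. species)
# and symbol-based variables (e.g. time)
target_observables = [out.id for out in outputs if out.target is not None]
symbol_observables = [out.id for out in outputs if out.symbol is not None]
###Output _____no_output_____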
s/endemic_program_by_sympy.ipynb
###Markdown 卒研 ###Code import numpy as np import math as ma import sympy as sym # import scipy as sp # from scipy.integrate import solve_ivp # from scipy.integrate import odeint # import matplotlib.pyplot as plt # from numpy.typing import _128Bit, _16Bit """パラメータの定義""" LAMBDA,CONTACT_E,TOTAL,CONTACT_I,CONTACT_S1,ALPHA,MU,RATE_W,GAMMA ,THETA1,THETA2,SPAN1,SPAN2,SPAN12,S,E,I,R,S1 = sym.symbols( "LAMBDA,CONTACT_E,TOTAL,CONTACT_I,CONTACT_S1,ALPHA,MU,RATE_W,GAMMA ,THETA1,THETA2,SPAN1,SPAN2,SPAN12,S,E,I,R,S1" ) #S,E,I,R,S1 =sym.symbols("S,E,I,R,S1") """ LAMBDA = 16600 # per day #CONTACT = 0.03 CONTACT_E = 0.08 CONTACT_I = 0.12 CONTACT_S1 = 0.5 #(1/2)*(CONTACT_E + CONTACT_I )*(1-(85/100)) THETA1 = 4.45*10**(-5) THETA2 = 4.88*10**(-5) #SEVERE = 0.016 MU = 2.92*10**(-5) ALPHA = 1/7 RATE_W = 0.01 # per day GAMMA = 1/10 # per day SPAN1 = 1/14 SPAN2 = 1/10 SPAN12 = 1/11 TOTAL = 8800000 """ # 方程式の定義 eq1 = LAMBDA - (CONTACT_E*S*E/TOTAL+ CONTACT_I*S*I/TOTAL) - S*(SPAN1*THETA1+MU) eq2 = (CONTACT_E*S*E/TOTAL+ CONTACT_I*S*I/TOTAL+CONTACT_S1*S1*I/TOTAL+CONTACT_S1*S1*E/TOTAL) - E*(MU+ALPHA) eq3 = ALPHA*E - I*(RATE_W + MU + GAMMA) eq4 = S1*SPAN2*THETA2*SPAN12 + GAMMA*I - R*MU eq5 = S*SPAN1*THETA1- S1*(SPAN2*THETA2*SPAN12 + MU)-(CONTACT_S1*S1*I/TOTAL)-(CONTACT_S1*S1*E/TOTAL) eq1 ###Output _____no_output_____ ###Markdown 解を探す ###Code # その1sym.solveの利用 Sol1 = sym.solve([eq1,eq2,eq3,eq4,eq5],[S,E,I,R,S1]) print(Sol1) # その2 scipyの利用 """ X[0] = S X[1] = E X[2] = I X[3] = R X[4] = S1 """ """ def epidemic_model(X): return [LAMBDA - (CONTACT_E*X[0]*X[1]/TOTAL+ CONTACT_I*X[0]*X[2]/TOTAL) - X[0]*(SPAN1*THETA1+MU), (CONTACT_E*X[0]*X[1]/TOTAL+ CONTACT_I*X[0]*X[2]/TOTAL+CONTACT_S1*X[0]*X[2]/TOTAL+CONTACT_S1*X[4]*X[1]/TOTAL) - X[1]*(MU+ALPHA), ALPHA*X[1] - X[2]*(RATE_W + MU + GAMMA), X[4]*SPAN2*THETA2*SPAN12 + GAMMA*X[2] - X[3]*MU, X[0]*SPAN1*THETA1 - X[4]*(SPAN2*THETA2*SPAN12 + MU)-(CONTACT_S1*X[4]*X[2]/TOTAL)-(CONTACT_S1*X[4]*X[1]/TOTAL)] #result = sp.optimize.root( epidemic_model, [3500000,2368,1856,5259708,36068], method="hybr") result = sp.optimize.root( epidemic_model, [3500000,2368,1856,5259708,36068], method="lm") #result = sp.optimize.root( epidemic_model, [3500000,2368,1856,5259708,36068], method="broyden1") #result = sp.optimize.root( epidemic_model, [3500000,2368,1856,5259708,36068], method="krylov") #result = sp.optimize.root( epidemic_model, [3500000,2368,1856,5259708,36068], method="anderson") #result = sp.optimize.root( epidemic_model, [3500000,2368,1856,5259708,36068], method="excitingmixing") #result = sp.optimize.root( epidemic_model, [3500000,2368,1856,5259708,36068], method="diagbroyden") print(result) """ ###Output _____no_output_____
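###Markdown A follow-up sketch: substitute the numeric parameter values from the commented block above into the symbolic equations and solve the resulting equilibrium system numerically with `sym.nsolve`, starting from the initial guess used in the commented `scipy.optimize.root` attempt. Whether `nsolve` converges depends on that starting point. ###Code
# Numeric parameter values copied from the commented block above
param_values = {
    LAMBDA: 16600, CONTACT_E: 0.08, CONTACT_I: 0.12, CONTACT_S1: 0.5,
    THETA1: 4.45e-5, THETA2: 4.88e-5, MU: 2.92e-5, ALPHA: 1/7,
    RATE_W: 0.01, GAMMA: 1/10, SPAN1: 1/14, SPAN2: 1/10, SPAN12: 1/11,
    TOTAL: 8800000,
}

# Substitute the numbers and solve the system eq1 = ... = eq5 = 0 numerically;
# the starting point is the guess from the commented scipy code
numeric_eqs = [eq.subs(param_values) for eq in (eq1, eq2, eq3, eq4, eq5)]
equilibrium = sym.nsolve(numeric_eqs, (S, E, I, R, S1),
                         (3500000, 2368, 1856, 5259708, 36068))
equilibrium
###Output _____no_output_____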
machine_learning/notebooks/Random_sampling.ipynb
###Markdown BackgroundThis notebook explores the influence of using a reduced number of frames to aggregate the features used in the model (temporal_dct and temporal_gaussian).The experiment evaluates the F20 score for different number of samples frames ###Code path = '../../machine_learning/cloud_functions/data-large.csv' data = pd.read_csv(path) df = pd.DataFrame(data) columns = ['attack', 'dimension', 'size', 'title', 'temporal_dct-series', 'temporal_gaussian_mse-series'] df = df[columns] df = df.dropna() df['attack_ID'] = df.apply(lambda row: row['attack'] in ['1080p', '720p', '480p', '360p', '240p', '144p'] , axis=1) for column in columns: if 'series' in column: df[column] = df.apply(lambda row: np.fromstring(row[column].replace('[', '').replace(']', ''), dtype=np.float, sep=' '), axis=1) df['{}-len'.format(column)] = df.apply(lambda row: len(row[column]), axis=1) display(df.head()) df.describe() ###Output _____no_output_____ ###Markdown OCSVMWe will be conducting the experiments on the model with the best results achieved so far: One Class Support Vector Machine ###Code # Helper function to evaluate models with different data sets def evaluate_data_set(df, X_train_120): features = df.columns metric_processor = MetricProcessor(features,'UL', path) (X_train, X_test, X_attacks), (df_train, df_test, df_attacks) = metric_processor.split_test_and_train(df) # Scaling the data ss = StandardScaler() x_train = ss.fit_transform(X_train_120) x_test = ss.transform(X_test) x_attacks = ss.transform(X_attacks) # Dataframe to store results svm_results = pd.DataFrame(columns=['gamma', 'nu', 'n_components', 'TPR_test', 'TNR', 'model', 'auc', 'f_beta', 'projection']) # Train the models svm_results = evaluation.one_class_svm(x_train, x_test, x_attacks, svm_results) display(svm_results.sort_values('f_beta', ascending=False).head(1)) return svm_results.sort_values('f_beta', ascending=False).head(1) frame_nums = [1, 5, 10, 15, 30, 60, 90, 120] features = df.columns df_samples = df.copy() print(df_samples.shape) metric_processor = MetricProcessor(features,'UL', path) df_results = pd.DataFrame(columns=['frames','gamma', 'nu', 'n_components', 'TPR_test', 'TNR', 'model', 'auc', 'f_beta', 'projection']) for column in columns: if 'series' in column: df_samples[column] = df_samples.apply(lambda row: np.mean(row[column][:120]), axis=1) (X_train_120, X_test, X_attacks), (df_train, df_test, df_attacks) = metric_processor.split_test_and_train(df_samples) for frame_num in frame_nums: df_samples = df[df['temporal_dct-series-len']>100].copy() print('**********************************') print('Frame number:', frame_num) for column in columns: if 'series' in column: df_samples[column] = df_samples.apply(lambda row: np.mean(np.random.choice(row[column], frame_num)), axis=1) df_results = pd.concat([df_results, evaluate_data_set(df_samples, X_train_120)], axis=0, sort=False) df_results['frames'] = frame_nums display(df_results) ###Output _____no_output_____ ###Markdown Plot results ###Code list_dct = [] list_gaussian = [] n_frames = 100 for index, row in df.iterrows(): if len(row['temporal_dct-series'])>=n_frames: list_dct.append(row['temporal_dct-series'][:n_frames]) list_gaussian.append(row['temporal_gaussian_mse-series'][:n_frames]) print(len(list_gaussian)) df_dct = pd.DataFrame(data=list_dct) df_gaussian = pd.DataFrame(data=list_gaussian) df_gaussian.mean().plot(title='Mean gaussian') df_dct.mean().plot(title='Mean DCT') ###Output _____no_output_____
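###Markdown A small follow-up: plot the F-beta score collected in `df_results` against the number of randomly sampled frames, which is the quantity this experiment set out to evaluate. Only the `frames` and `f_beta` columns built above are used. ###Code
# Visualize how the F-beta score varies with the number of sampled frames
ax = df_results.plot(x='frames', y='f_beta', marker='o', legend=False,
                     title='F-beta score vs. number of sampled frames')
ax.set_xlabel('Number of sampled frames')
ax.set_ylabel('F-beta score')
###Output _____no_output_____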
hw08/Chap08.ipynb
###Markdown 8장 딥러닝딥러닝은 층을 깊게 한 심층 신경망입니다. 심층 신경망은 지금까지 설명한 신경망을 바탕으로 뒷단에 층을 추가하기만 하면 만들 수 있지만, 커다란 문제가 몇 개 있습니다. 이번 장에서는 딥러닝의 특징과 과제, 그리고 가능성을 살펴봅니다. 또 오늘날의 첨단 딥러닝에 대한 설명도 준비했습니다. Copyrights1. https://github.com/WegraLee/deep-learning-from-scratch Customized by Gil-Jin Jang, May 6, 2021 파일 설명| 파일명 | 파일 용도 | 관련 절 | 페이지 ||:-- |:-- |:-- |:-- || awesome_net.py | 빈 파일입니다. 여기에 여러분만의 멋진 신경망을 구현해보세요! | | || deep_convnet.py | [그림 8-1]의 깊은 신경망을 구현한 소스입니다. | 8.1.1 더 깊은 신경망으로 | 262 || train_deepnet.py | deep_convnet.py의 신경망을 학습시킵니다. 몇 시간은 걸리기 때문에 다른 코드에서는 미리 학습된 가중치인 deep_convnet_params.pkl을 읽어서 사용합니다. | 8.1.1 더 깊은 신경망으로 | 262 || deep_convnet_params.pkl | deep_convnet.py용 학습된 가중치입니다. | | || misclassified_mnist.py | 이번 장에서 구현한 신경망이 인식에 실패한 손글씨 이미지들을 화면에 보여줍니다. | 8.1.1 더 깊은 신경망으로 | 263 || half_float_network.py | 수치 정밀도를 반정밀도(16비트)로 낮춰 계산하여 배정밀도(64비트)일 때와 정확도를 비교해본다. | 8.3.4 연산 정밀도와 비트 줄이기 | 278 | 목차```8.1 더 깊게 __8.1.1 더 깊은 네트워크로 __8.1.2 정확도를 더 높이려면 __8.1.3 깊게 하는 이유 8.2 딥러닝의 초기 역사 __8.2.1 이미지넷 __8.2.2 VGG __8.2.3 GoogLeNet __8.2.4 ResNet 8.3 더 빠르게(딥러닝 고속화) __8.3.1 풀어야 할 숙제 __8.3.2 GPU를 활용한 고속화 __8.3.3 분산 학습 __8.3.4 연산 정밀도와 비트 줄이기 8.4 딥러닝의 활용 __8.4.1 사물 검출 __8.4.2 분할 __8.4.3 사진 캡션 생성 8.5 딥러닝의 미래 __8.5.1 이미지 스타일(화풍) 변환 __8.5.2 이미지 생성 __8.5.3 자율 주행 __8.5.4 Deep Q-Network(강화학습) ``` 8.1 더 깊게 8.1.1 더 깊은 네트워크로 손글씨 심층 CNN ###Code # coding: utf-8 # deep_conv_net.py # [그림 8-1]의 깊은 신경망을 구현한 소스입니다. import sys, os sys.path.append(os.pardir) # 부모 디렉터리의 파일을 가져올 수 있도록 설정 import pickle import numpy as np from collections import OrderedDict from common.layers import * class DeepConvNet: """정확도 99% 이상의 고정밀 합성곱 신경망 네트워크 구성은 아래와 같음 conv - relu - conv- relu - pool - conv - relu - conv- relu - pool - conv - relu - conv- relu - pool - affine - relu - dropout - affine - dropout - softmax """ def __init__(self, input_dim=(1, 28, 28), conv_param_1 = {'filter_num':16, 'filter_size':3, 'pad':1, 'stride':1}, conv_param_2 = {'filter_num':16, 'filter_size':3, 'pad':1, 'stride':1}, conv_param_3 = {'filter_num':32, 'filter_size':3, 'pad':1, 'stride':1}, conv_param_4 = {'filter_num':32, 'filter_size':3, 'pad':2, 'stride':1}, conv_param_5 = {'filter_num':64, 'filter_size':3, 'pad':1, 'stride':1}, conv_param_6 = {'filter_num':64, 'filter_size':3, 'pad':1, 'stride':1}, hidden_size=50, output_size=10): # 가중치 초기화=========== # 각 층의 뉴런 하나당 앞 층의 몇 개 뉴런과 연결되는가(TODO: 자동 계산되게 바꿀 것) pre_node_nums = np.array([1*3*3, 16*3*3, 16*3*3, 32*3*3, 32*3*3, 64*3*3, 64*4*4, hidden_size]) wight_init_scales = np.sqrt(2.0 / pre_node_nums) # ReLU를 사용할 때의 권장 초깃값 self.params = {} pre_channel_num = input_dim[0] for idx, conv_param in enumerate([conv_param_1, conv_param_2, conv_param_3, conv_param_4, conv_param_5, conv_param_6]): self.params['W' + str(idx+1)] = wight_init_scales[idx] * np.random.randn(conv_param['filter_num'], pre_channel_num, conv_param['filter_size'], conv_param['filter_size']) self.params['b' + str(idx+1)] = np.zeros(conv_param['filter_num']) pre_channel_num = conv_param['filter_num'] self.params['W7'] = wight_init_scales[6] * np.random.randn(64*4*4, hidden_size) self.params['b7'] = np.zeros(hidden_size) self.params['W8'] = wight_init_scales[7] * np.random.randn(hidden_size, output_size) self.params['b8'] = np.zeros(output_size) # 계층 생성=========== self.layers = [] self.layers.append(Convolution(self.params['W1'], self.params['b1'], conv_param_1['stride'], conv_param_1['pad'])) self.layers.append(Relu()) self.layers.append(Convolution(self.params['W2'], self.params['b2'], conv_param_2['stride'], conv_param_2['pad'])) self.layers.append(Relu()) 
self.layers.append(Pooling(pool_h=2, pool_w=2, stride=2)) self.layers.append(Convolution(self.params['W3'], self.params['b3'], conv_param_3['stride'], conv_param_3['pad'])) self.layers.append(Relu()) self.layers.append(Convolution(self.params['W4'], self.params['b4'], conv_param_4['stride'], conv_param_4['pad'])) self.layers.append(Relu()) self.layers.append(Pooling(pool_h=2, pool_w=2, stride=2)) self.layers.append(Convolution(self.params['W5'], self.params['b5'], conv_param_5['stride'], conv_param_5['pad'])) self.layers.append(Relu()) self.layers.append(Convolution(self.params['W6'], self.params['b6'], conv_param_6['stride'], conv_param_6['pad'])) self.layers.append(Relu()) self.layers.append(Pooling(pool_h=2, pool_w=2, stride=2)) self.layers.append(Affine(self.params['W7'], self.params['b7'])) self.layers.append(Relu()) self.layers.append(Dropout(0.5)) self.layers.append(Affine(self.params['W8'], self.params['b8'])) self.layers.append(Dropout(0.5)) self.last_layer = SoftmaxWithLoss() def predict(self, x, train_flg=False): for layer in self.layers: if isinstance(layer, Dropout): x = layer.forward(x, train_flg) else: x = layer.forward(x) return x def loss(self, x, t): y = self.predict(x, train_flg=True) return self.last_layer.forward(y, t) def accuracy(self, x, t, batch_size=100): if t.ndim != 1 : t = np.argmax(t, axis=1) acc = 0.0 for i in range(int(x.shape[0] / batch_size)): tx = x[i*batch_size:(i+1)*batch_size] tt = t[i*batch_size:(i+1)*batch_size] y = self.predict(tx, train_flg=False) y = np.argmax(y, axis=1) acc += np.sum(y == tt) return acc / x.shape[0] def gradient(self, x, t): # forward self.loss(x, t) # backward dout = 1 dout = self.last_layer.backward(dout) tmp_layers = self.layers.copy() tmp_layers.reverse() for layer in tmp_layers: dout = layer.backward(dout) # 결과 저장 grads = {} for i, layer_idx in enumerate((0, 2, 5, 7, 10, 12, 15, 18)): grads['W' + str(i+1)] = self.layers[layer_idx].dW grads['b' + str(i+1)] = self.layers[layer_idx].db return grads def save_params(self, file_name="params.pkl"): params = {} for key, val in self.params.items(): params[key] = val with open(file_name, 'wb') as f: pickle.dump(params, f) def load_params(self, file_name="params.pkl"): with open(file_name, 'rb') as f: params = pickle.load(f) for key, val in params.items(): self.params[key] = val for i, layer_idx in enumerate((0, 2, 5, 7, 10, 12, 15, 18)): self.layers[layer_idx].W = self.params['W' + str(i+1)] self.layers[layer_idx].b = self.params['b' + str(i+1)] # coding: utf-8 # train_deepnet.py # deep_convnet_params.pkl # deep_convnet.py의 신경망을 학습시킵니다. 몇 시간은 걸리기 때문에 다른 코드에서는 미리 학습된 가중치인 deep_convnet_params.pkl을 읽어서 사용합니다. | 8.1.1 더 깊은 신경망으로 | 262 | import sys, os sys.path.append(os.pardir) # 부모 디렉터리의 파일을 가져올 수 있도록 설정 import numpy as np import matplotlib.pyplot as plt from dataset.mnist import load_mnist from deep_convnet import DeepConvNet from common.trainer import Trainer (x_train, t_train), (x_test, t_test) = load_mnist(flatten=False) # # 시간이 오래 걸릴 경우 데이터를 줄인다. # x_train, t_train = x_train[:5000], t_train[:5000] # x_test, t_test = x_test[:1000], t_test[:1000] network = DeepConvNet() trainer = Trainer(network, x_train, t_train, x_test, t_test, epochs=20, mini_batch_size=100, optimizer='Adam', optimizer_param={'lr':0.001}, evaluate_sample_num_per_epoch=1000) trainer.train() # 매개변수 보관 network.save_params("deep_convnet_params.pkl") print("Saved Network Parameters!") # misclassified_mnist.py # 이번 장에서 구현한 신경망이 인식에 실패한 손글씨 이미지들을 화면에 보여줍니다. 
# coding: utf-8 import sys, os sys.path.append(os.pardir) # 부모 디렉터리의 파일을 가져올 수 있도록 설정 import numpy as np import matplotlib.pyplot as plt from deep_convnet import DeepConvNet from dataset.mnist import load_mnist (x_train, t_train), (x_test, t_test) = load_mnist(flatten=False) network = DeepConvNet() network.load_params("deep_convnet_params.pkl") print("calculating test accuracy ... ") sampled = 1000 x_test = x_test[:sampled] t_test = t_test[:sampled] classified_ids = [] acc = 0.0 batch_size = 100 for i in range(int(x_test.shape[0] / batch_size)): tx = x_test[i*batch_size:(i+1)*batch_size] tt = t_test[i*batch_size:(i+1)*batch_size] y = network.predict(tx, train_flg=False) y = np.argmax(y, axis=1) classified_ids.append(y) acc += np.sum(y == tt) acc = acc / x_test.shape[0] print("test accuracy:" + str(acc)) classified_ids = np.array(classified_ids) classified_ids = classified_ids.flatten() max_view = 20 current_view = 1 fig = plt.figure() fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.2, wspace=0.2) mis_pairs = {} for i, val in enumerate(classified_ids == t_test): if not val: ax = fig.add_subplot(4, 5, current_view, xticks=[], yticks=[]) ax.imshow(x_test[i].reshape(28, 28), cmap=plt.cm.gray_r, interpolation='nearest') mis_pairs[current_view] = (t_test[i], classified_ids[i]) current_view += 1 if current_view > max_view: break print("======= misclassified result =======") print("{view index: (label, inference), ...}") print(mis_pairs) plt.show() ###Output calculating test accuracy ... test accuracy:0.867 ======= misclassified result ======= {view index: (label, inference), ...} {1: (4, 6), 2: (7, 5), 3: (4, 6), 4: (5, 7), 5: (9, 5), 6: (4, 6), 7: (6, 4), 8: (3, 4), 9: (6, 0), 10: (3, 6), 11: (7, 9), 12: (6, 4), 13: (2, 4), 14: (2, 6), 15: (4, 2), 16: (4, 6), 17: (8, 6), 18: (9, 7), 19: (2, 4), 20: (6, 2)} ###Markdown 인식하지 못한(misclassified) 이미지들 8.1.2 정확도를 더 높이려면 MNIST rankingsData augmentation 8.1.3 깊게 하는 이유 5x5 convolutionMultiple 3x3 convoltions 8.2 딥러닝의 초기 역사 8.2.1 이미지넷 ImageNet samplesILSVRC rankings 8.2.2 VGG VGGNet 8.2.3 GoogLeNet GoogLeNetInception module in GoogLeNet 8.2.4 ResNet ResNet moduleRegNet 8.3 더 빠르게(딥러닝 고속화) 8.3.1 풀어야 할 숙제 AlextNet forward 처리시 각 층의 시간비율 8.3.2 GPU를 활용한 고속화 AlexNet 시간비교 8.3.3 분산 학습 완전연결 계층(Affine 계층)으로 이루어진 네트워크의 예 8.3.4 연산 정밀도와 비트 줄이기 ###Code # half_float_network.py # 수치 정밀도를 반정밀도(16비트)로 낮춰 계산하여 배정밀도(64비트)일 때와 정확도를 비교해본다. # coding: utf-8 import sys, os sys.path.append(os.pardir) # 부모 디렉터리의 파일을 가져올 수 있도록 설정 import numpy as np import matplotlib.pyplot as plt from deep_convnet import DeepConvNet from dataset.mnist import load_mnist (x_train, t_train), (x_test, t_test) = load_mnist(flatten=False) network = DeepConvNet() network.load_params("deep_convnet_params.pkl") sampled = 10000 # 고속화를 위한 표본추출 x_test = x_test[:sampled] t_test = t_test[:sampled] print("caluculate accuracy (float64) ... ") print(network.accuracy(x_test, t_test)) # float16(반정밀도)로 형변환 x_test = x_test.astype(np.float16) for param in network.params.values(): param[...] = param.astype(np.float16) print("caluculate accuracy (float16) ... ") print(network.accuracy(x_test, t_test)) ###Output caluculate accuracy (float64) ... 0.851 caluculate accuracy (float16) ... 0.851
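###Markdown A small add-on sketch: the half-precision experiment above shows that the accuracy stays at 0.851 for both precisions, and the cell below estimates the corresponding memory saving for the network parameters, which is the main motivation for reducing the bit width. It reuses the `network` object loaded above. ###Code
import numpy as np

# Compare the memory footprint of the DeepConvNet parameters at different
# numerical precisions
bytes_f64 = sum(p.astype(np.float64).nbytes for p in network.params.values())
bytes_f16 = sum(p.astype(np.float16).nbytes for p in network.params.values())

print('float64 parameters: {:.2f} MB'.format(bytes_f64 / 2**20))
print('float16 parameters: {:.2f} MB'.format(bytes_f16 / 2**20))
print('reduction factor  : {:.1f}x'.format(bytes_f64 / bytes_f16))
###Output _____no_output_____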
hot_spots_count.ipynb
###Markdown
Title: Algorithm for counting hot spots in watersheds

Group 10

Description:
Algorithm for counting hot spots in watersheds disaggregated to level 2 or level 5, as required.

General procedure
+ Select the FIRMS collection
+ Select the level-2 or level-5 watershed regions defined in the product
+ Define the start and end dates of the analysis
+ Filter the FIRMS collection with the dates defined in the previous step
+ Select a sub-collection spanning one month
+ Reduce the sub-collection to a single scene by taking the maximum value of each pixel
+ Count the number of hot spots per watershed in the reduced scene
+ Record the computed values
+ Select the next one-month sub-collection; once the sub-collections are exhausted, proceed to the next step
+ Format the data to hand it to the variable processor

Flow diagram

![Imagen_algoritmo.jpg](Imagen_algoritmo.jpg)
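###Markdown
A minimal sketch of how the monthly loop described above could look with the Earth Engine Python API. This cell is illustrative only and is not the notebook's actual implementation: the basin asset ID ('WWF/HydroSHEDS/v1/Basins/hybas_5'), the FIRMS band name ('T21'), the date window and the 1 km nominal scale are assumptions.
###Code
# Added sketch of the procedure above (asset ID, band, dates and scale are assumed).
import ee

ee.Initialize()

# HydroSHEDS level-5 basins; level 2 would use the '.../hybas_2' asset (assumption).
basins = ee.FeatureCollection('WWF/HydroSHEDS/v1/Basins/hybas_5')

start = ee.Date('2020-01-01')  # assumed start of the analysis window
months = 12                    # assumed number of one-month sub-collections

rows = []
for m in range(months):
    t0 = start.advance(m, 'month')
    t1 = t0.advance(1, 'month')

    # One-month FIRMS sub-collection reduced to a single scene of per-pixel maxima.
    scene = ee.ImageCollection('FIRMS').filterDate(t0, t1).select('T21').max()

    # Count the hot-spot pixels (unmasked T21 pixels) falling inside each basin.
    counts = scene.reduceRegions(
        collection=basins,
        reducer=ee.Reducer.count(),
        scale=1000,  # assumed nominal FIRMS pixel size in metres
    )

    # Record the computed values for later formatting.
    rows.append({
        'month_start': t0.format('YYYY-MM-dd').getInfo(),
        'hot_spots_per_basin': counts.aggregate_array('count').getInfo(),
    })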
DAfqCOvFW8HiE2nB6b6PT1OWPEeUyjTnHFU2qjtF88bSemkddXqtF3Xc6Giubvvi94f07U5bWW+cG3uFtJp1tZmtYJjgCN5wpiRssowzAgsM9ak134paH4c8QHSbm7k/tTyFuFtIbWWeeSNiwBRI1Yv8AdbO0EgDJwOaawWJdkqctddnt3/FES4oyaKlKWLpJRfK/3kNJO9ovXRvllpvo+zOgorL0zxtpOr+GjrEN/b/2Yqsz3EreUsO3ht+7BQqQQQ2CCOaq6D8S9I8R62dNglvIb/yfPFveWE9nI8ecFlEyKWAPXGcVH1at7z5H7u+j09ex0f27lt6UfrEL1bOHvx99PZw196/S17n37/wbo/8AKUcf9k71v/0u0iv3tr8Ev+DdH/lKOP8Asnet/wDpdpFfvbX7zwP/AMiWj/29/wClSP8AJn6Un/Jzcy/7g/8AqPSCiiivrD+fgooooAKKKKAPP/jt+yp8Nv2n4tLT4h+CPDfjEaI0r6edVskuDZmQKJPLLDK7giZx12rnoK88/wCHUX7Nv/RFPh7/AOCmOvoOigD4M/4JpIIv+CefwRVRhV8EaQAB2H2SOr3/AAUN/wCTAvjl/wBk+17/ANN1xXF/sja18Tf2f/2W/h74G1v9nv4yT6x4Q8PWWj3slkmjS20k0ECRuY3OoAshKnBwMjHFZf7efxq8Z6n+w18Z7a5+A3xl0m3uPAuuRS315BpAt7NG0+cGWQpqDPsUEs21WOAcAnivC+r1PaXt1IsfTfhj/kWtP/69o/8A0EVeqj4Y/wCRa0//AK9o/wD0EVerkEFFFFABRRRQAUUUUAfOTftn+N/id8SPH+lfCb4Y6Z420X4Y6idE1rUdX8VHQpdR1JIllns9Ni+yTpO0SvGhe4ltYzK+0PtBkGl4y/a78Ta9+0FrPwy+F/gXTPFfiTwhpdpqvia58ReIX0HS9IF3uNtarNDaXkk10yo7lViEaoATLuIWvO/gp4O+Kn7EXxJ+MGjaZ8L9Z+J/hr4geMLzxv4c1fSNZ0qyjsZtQwbiy1Fby4hmjEU0e4S28VzmKUHaXUob9n8PPiT+y5+2p8TPiLpXgDVPiX4V+Mmn6RPeWXhzVNOi1Lw9qmn2/wBlZWGoT2cUtrLEVKyJIZA6MDEAwNbRUXGF/wCVN/4uVXXkk+b7kru9y56Sml0bt5rmST9XHVrzemlit4x/4KtW/gz9jzVviXN8O9bm8TeEfGFp4F8T+DIr+Jr7TdUkvLe1migmA8u4AFwksLfuxMjx5MW47dDWf29viN4P/aN074U618FrdPGHjPR5da8Hvp/i9brTLiKGXZcrqk7WkbWLwK8Dv5Ed4reaVjaRgofybxv+wB8SdU/ZQ8ZynRrG4+IfxU+MWkfEbU9DstSiaDQrSLU7BzbfaJTGk0kNpa7pGXhpC6xhwELe/fFH4HeKPEf/AAUs+EvxAs9L87wj4Z8HeIdK1O/+0wr9mubqawaCPyywkbcIJeVUqNvJGRnSCh7qmldt310VqMZWXl7XminrfZN6Mzqu3PybK1vP964/f7Oz8t7HKap/wU3m8JfsyfHfxbrvgaGy8b/s+XE9r4h8N2uu/abO7cQx3MEltffZ1ZoZoJUYM9sjqwdTH8oZtC3/AOCgfiLw3rnw01Txr8Mh4Q+HPxc1O30XQNXl8QCfWLG9uoy9lHqWn/Z1S2WfayqYrqcoxjWRULME+dP20PgH4s+FX7MX/BRDxNr+k/YNE+IMUGo+H7n7VDL9vgh0m3gkfYjs8eJUZcSBScZAI5r0WfwH8Xv2xPCf7PvhLXvhwvgrQvAmtaN4w8S+JZNbs7zTNWGmokltb6ZHFKbxmnkKOxuoLcRLG65kO3c6MYTlFtK37nm12503U+cbOy3vpZ7BV91O3/T63ny29n8nffrffqM+CH7QOg/ssfFL9uHx94m+1to/h3xzp00sNoivc3Uj6JpscUESsyqZJJXRFDMo3OMkDmvX/A37Y/jbQvjr4G8C/Fn4baN4EvPibZXM3hy50bxW2vxm7toftE9jdhrO28mYQ5ZTEZom8uQbxhd3iXxg/wCCb/jT9ojwJ+2F4YvLXTNF/wCFreLdL13wld6lMlxZah9istOK+ekTNIkTT2jxOGXcFJYIwxn0f9k34EeGNJ+I2gavbfsY+D/gnr2m2rzXviA2fhmN7KZoTG0Wny6bJPcS7i7KWlW1HlFicsfKM0uR04c+6hBW9KUfTXmumtbcq01s3V053HrOf/pbt30fe2zeq0Zyv7IPxg+N3if/AIKO/tGaFrGk+GLrwxo2q+H4praXx5f3EfhuCTS94On27aaI5GmGJJULW4WQkbpcbz6D8dv23vHn7NlxofiPxj8KNM0z4Yav4ksvDtxqa+MUm1/RxdzC2hu7jT0tTbGL7Q0YIhv5XEcqvtyHjWD4T/DPxz8D/wDgon8ZNek8Fapr3gz4x/2FeWXiHTb+wFvoUllYPaTRXsE9xFc8lI2VreKYESgHaVavk74z/sVftBfHf4FxaH4o8G/E3xJ8VrLxlaatq/ia9+J0cPg/VbK21mORDp2kRX/kA/ZVjZYriwttgSVvMaYIsroKE50VO3LampX/AO3VLttrr81zFVVaFRrezt6uMmvxsnvZ2Tsfef7ZP/IpfD7/ALKp4G/9SbTa+4a+Hv2yf+RS+H3/AGVTwN/6k2m19w135f8Aw36/5DQUUUV3jCiiigArxH4g/wDBNj4BfFbxvqfiTxH8IfAWsa9rU32m/v7nSYmmvJcBS7tj5mIUZJ5OK9uopSipK0lcabWx+d//AAUU/YQ+Df7OmkfCPxD4F+GvhHwprn/CxtPtft2m6ekE3lPaXpZNw5wdq5HsKsV7R/wVX+Gniz4ifB7wNceEPC2reMbzwx44sNavNO0x7dbtrVILqN3QTyxoxDSpxvBwSe1fOn274o/9G8/GX/v3o/8A8sK/in6SfAPEGdZ/hsRk2DnVhGiotwV0nzzdvWzR+u+H+d4HB4GpDF1VGTlfV9LI4P4sf8ng/CD/ALB/iD/0VaV6/Xhvjm48WTftifCT/hI/hv428CRjTtf8iXXlsgl2fLtMrH9nuZjleCdwHUYzXuVfyzxdkmPyl4XAZlSdKrGlrGSs1erVa+9O5+k5XjKOKVWth5KUXLdf4YhRRRXyB6oUUUUAFFFFABRRRQAUUUUAFFFFAHw/8Q/CfiD40f8ABWttK8TfD34c+MtB8MeCrW70y113XZriHS4JdUcSalDbvp8kYv2ESqYwRxEg+04OF0PBX7XHhb9lj9kXxZ4z8JfDG6hsovinf6BdaJa669xPqF3Lq/2Oa8jknXAeRv3ggyqAkJvUfNXsOkfCDxFa/wDBRnW/Hj6fjwpd/D2z0OG+8+L57yPUJ5ni8vd5gxG6ncV284BzkV4H4l/Y3+JGofsZa74Ti8Ob9fvPjNL4rhtf7QtRv0w+IBeC43mTYP3Hz7C2/tt3cV+3YbHZTjo4XB42cVRj9UjKPtZRi0+d1br2lk0/iaScW73TZ8fjaWKoyxGIoRbn77i+VN3VH3LaX+L3V326u/
0B4K/ai8SW/wC0NpHw88feCLHwnfeLdIudY8O3Wna//a0d0ts6C4trkG3h8i4RJonxGZo2BkxL8g3Vf2m/2nfH37PXg/xN4xg+GGn6v4G8GxG61Oe48VJaavcW0ZDXFxaWi28sMiJES6rNdQSOY3XYvyF5vjB8IPEXin9uT4M+MbDT/P8ADnhTS/EFtqt358S/ZZLqK1WAbCwdtxjflFIGOcZFfPH7U37Nfxj+NOh/HHw9qnhv4g+LNV8TSX9v4J1Sw+IC6J4UstLe1Bt7a4sYryJpJ1cyo3n2cyzOUDzLES8fiZJlmR4vGYWtX9lCnKEZVYOclZ+2lBqLlWg0/ZqM2pTbSblGLVkvRxFXF041IRcm1JKLstvZqV3aLuua6uo72XdnqfxF8QWvi3/gpT+z5qthJ51jqfgfxLd28mMeZHIdMZT+IIr7C/4J9/8AKQj4j/8AZPNC/wDTlqtfI/h39n3xfYftHfs869LpG3SfA3gDUtE1yf7VCfsV5NFpyxxbd+58mCX5kDKNnJGRn64/4J9/8pCPiP8A9k80L/05arX634HVMO+NstpYeakoYatHRp2/e1mr268rT9Gnsz4zi320soxNWtGzlOk9rf8ALmkn90k16prdH3TRRRX9/H4qFFFFABRRRQAUVhfEz4laH8HfAOq+J/EmoRaXoei27XN3cyAtsUcAKqgs7sSFVFBZ2ZVUFiAfljxvqfi/9stm/wCEqi1fwL8M3/49/ClveNbatry9pNUnhfMUR7WUL4I/18kgdrePKrWjTV5Ael/FT/goZ4L8F+Kb3w14UsPEHxW8X6bIYL3SPB8EN0umSDrHd3k0kVlayDg+VNOspHKxsK4K/wD2kP2j/G536X4Q+EPw9tj/AKs6vq994lu2H/TSGCKzijb/AGUuJR33dhv+DvBej/DvwxZ6L4f0rTdD0bToxFa2Gn2yW1tbIP4UjQBVHsBWnXl1Mwm/h0JuePeL/hV8Wvj3478B3fxV8c/C7X/DngTWbnXYtL8PfD+90ibUJ5tJ1DS/LmmudYvEMPk6lMxQQ5ZlT5gMg+s/8E3vGlxpvwx1f4U6vcy3GufBu7TQ4ZZ3LzX+iunmaTdMxyXJth9ndzy89lcGrFcP8JYbnWv+CkNnP4b+VdC8DzweNpD/AKmWC4ulfSYTjrcLLBqEido4pLjIzPGavCYicqtpa3Bbn15RRRXrFH53/wDBTz/lIB4G/wCyfah/6crWvM69n/4Kf/B74h61+1n4J8XeFPhz4p8daLB4RvtHuZNEey32lw15byqHW4uIjhkVsEZ6GvE/+EK+L/8A0b58XPy0j/5Pr8i4uyXHYnMpVaFJyjZapeR5mJpTlUukeYfDH/k6v4q/9eOh/wDou6r1SvKfhbpviDSv2qfirH4k8J694N1H7DoZ/s/WPs/2jZ5d1h/3EsqbTzj5s8dK9Wr4vMqM6Nf2VVWklG6/7dRyzTTsz4uT4g/D6+/a++NmnfFH4r6h4XTSNU02LRNPn+Jt/wCG4YoH06F5PKgivIEIMhJJ2nknnmvTvEPii60j9uT4PaDpGuapN4XvfBeszmD+1JrmDUPKawEM0jM7ee4V2xI5ZjvY7vmJPQfs/wDwr17wR+0Z8bde1Ow+zaV4u1fTbrSZ/Pjf7XHFp8UMjbVYsmJFYYcKTjIyOaTx98Ktf1v9uP4deMLaw8zw5oXhrWdPvrvz4x5E9xJZtCmwtvbcIpOVUgbeSMjPo/WKTqwhfRUu/u83sLbdHzab35vM0rST57eX/pUdvl+B8hRfFax1P4B/Ea/034r+MpPjzp/i7WovDWh2Pja+vr6eSHUpBaWy6KZ3hkgKKFZWtyoi3McBdw9m/wCClU/j3xJ8P/g34d0DxRrvgXxN4w8SR2F1caJqD2jLOdOuZBGzoRui89VypOCB9CJNM/Ys8QeL/wBijx54P1bT4dK8YSeLNb8T+Fbh54nexvDfzXOn3SSIzCMnK55DBZHDDkiuo+Lvw3+IHxrt/wBnPW73w19h1rwz4mtda8V2f262YaTixnjmIYPtlAlcAeXuJBBx1x3fWaDq05Rkvdl1a091Wt5XTutUnbuaVKiVSVSOv8W342+W3J/295D9A/be87/gnyPilPa7vE1tpzafNpePnfXkk+yfZNvBy14AoA/hYEZrxr4Dan8afAP7J37Q2mad4m8Q+Ofif4S1kQWVzfTtqM8UrWFlNcraRyZB2mSdoYQMEhFwSee0j/Yr8Yx/t3zzYtf+FH3Gsj4iyQ+bHvk8RCIQCEpndsEgF1nZt3gfNuGB3vwk8DeOfgrrnxz1+DwmutXPiPxgusaNYf2rBbPqtp9jtInZHJZUkzHLtSbywzKAzIrbxn7TCwhU9lyv2iU7O1l78LQ12t7/ADd42fQlctPlhDVRknr1VpWv6K1/7zaKn7Fvi34Y/Eu4XUfh98R/G2s6lplu1rrmh6/4lvr68SQbVY3NnfvJJbukgYboBEjMWGXUADtv2mf2lrj9nvVfAVlaeFr3xVdeO9fGgQQWl0kElu7W80qyHeNpQGIBssu1SzfMV2twGp/CfxF8ff2vfh18QpPAF78OIfh/Hd/2hf6vc6e+qa9HPBJFFZx/Ybi4BgjZjI3nSLgsAitliOw/ag+FWv8AxE+KXwW1HR7D7ZZ+EvF51TVpPPjj+y232C6i34ZgX+eRBhAT82cYBNcVVUJ4qnKpK6a1TknbV6cytfo12vYysouSvf3W9e/K7Jvq729b23ujC0D9rbx3rfxc8QfDY/DHTY/H2hW9vqe4eKGbw9Lp8ynbOb02YuFk8xJIvKFoxLKDu2bmWXS/28NP1X9nay8XR+Hbw+J9Q15/B8HhlbyLfJrqzvbtaC5OE8oPG7+dt/1QL+Xu/d1seEvhVr+mft3+NPGc9hs8Nat4Q0rS7S88+M+bcw3N28qbA28YWVDkqAd3BODjwO5/YS8XeI/2aryx1Pwvo2p614e+LGoeO7Lw9q08Etp4lsWuZmFs7gvGhnhlbb5gwrhQ4UFiNqVPAVHFTtFe43ZvvaS1e1tX1XdLQ0tC7S8rfOm33/nst1bZvqfRPwy/aF1nVfjZefDvxr4Z07w14nXSBr2ntpestqthqVmJRDIVle3t5EkjkZQyNFjDqQxyQPMtH/4KEeKfEPwa8UfEOz+FaHwh4H1K/tNakl8SKt7PBZ3LRzT2MItys+2JS5SaS3+YFFLAbz3n7OHwv8PeGfF95qWjfALRfhGy2Zt2vms9Ht9QvCzqxiQac8wMPyAsZJUO5Uwjcsvm/wAOv2afG2g/8E8viv4GutF8rxT4ln8Tvptl9sgb7SLya4a2/eBzGu8Ov3mG3PzY5rJRwcW24p25dL6b62tJ9N/effTYdJU5TSls5RXorO7++2vTY9K8cftU30nxd0PwP4F0DSPEmu6z4cPisPq+tvo9mLHzViTy3S3uHllZmzsEYVVGWcZUHa+I37Q0vw78bfCzRZ/D04n+JOoy6dIs94iS6OyWUt0dwjEiStmLYQrgc5DEDnyX9on4IXfj34NeGvDeq/Ba/wDG+raD4dtm0PWtK
1yxsbzQNWjjAIM8lxBNboskcDeZbtNv2tuT5FD6PxK+BnxD074efAbWWB+IHjP4UahbXGvQQ3cUM+tq9jJZ3UkElw0cbSqZfMHmvHvCtkhiAa+rYP3Vdbtb7/Fyu6lotr3S73avbGOsE3u4v/wLlVvve23VNaXPTLD9oJ7/APaR8U/D2PQpZpPDfhyz19bqK6XzL03EtxGLdY2CqpHkcM0mDv52gZrmtQ/ag8V/D/4ueCdC8b+BdL0LRfiFezaZpWo2HiT+0Li0u1haeKC8tzbRLG0kaSDMMs6K6bdxUhzx/gz4cfFnxF+0n8V/HL6FY+CW8TeCbTSPCkt1ewXktldQvdMn2xImkQSLJKrsI/Mi2MoDuwYV5ton7NXxJ8Y+NfgfrOpeDviHa6v4O8QWup+L9Q8T+Pk1OC5kNrMk01lZpeTW6xiUnJWO3dVeNUiZTJ5d4fB4RySqONuVX97W7UtV7yWml99baWZpOMVztdFp68if4yvtfqtND2H9j3/k5T9pH/scbL/00WdfpB/wRY/5N28e/wDZR9b/APQoa+Af2bfhVr/gH43/ABt1jVrD7Jp3i7xLbahpM3nxyfa4E062hZ9qsWTEkbjDhTxnGCDX39/wRY/5N28e/wDZR9b/APQoa+s4KnGWYycXf9zTXzUYJ/c9GdOG1r1Gu8v/AEo+wqKKK/UT0AooooAKKKKACvzq/wCDm/8A5R/+GP8AsoOl/wDoi8r9Fa/Or/g5v/5R/wDhj/soOl/+iLyvNzn/AHCv/gl+TPtfDX/krsr/AOwij/6cifiRRRRX8yn+44UUUUAFFFFABRRRQAUUUUAebfHq1nt/Eng7VWvrzS9J0y9nF7e26RubMyQtHHIwkR0C7jtLMpCh85HBqz4K0vw/e33iHVofEN94r+1WcdrfzgRTQmNBIQim2iVWcBmyoy4DLwNy59Aor1P7Sf1eNC1mla6ttzc2ul9/O22mmvwr4Lj/AGxVzT2ilGpJT5JKbUZqiqN42qqFnBK96blrNc1pLl8b8C6rcQahP4V8N+IrbxXoDaNKbWVQjS6KwVUhieZPlYN8+AyhwF9F53/gd4/0Sz+H/hzQmuobbW7W1jsZ9MP/AB9xTIu2TdF99RlWYsRjBznHNei0Vpiczp14OEob2d00m2r6u0bO9+y23u2ziyTgfF5ZiKWIo4pPkU4csoTlGNOTpvkp81Vzgo8jtzTmlzWUVGMYrwLwlpWnr8Pr/wAL+KPGWraVem6uoLvSAtoJbnzJ3dTCrQNNL5gdSChYliQvTA7fRbZbX9p6+QZbyvC9ugZuWI+0SdT+FejUVpiM5dVzfLbnTvta8mm2rRXbq29tdNeTJ/DWOAhhoe35vq8qfK2puThSjUjGEnKrJfbveEYRT5moe8lHyvwZ4nm8E+B/iHqltYyajJp/iDUJktYztMnKE9AcdSTx61k2Piu48WfHXwHM2u6FrsccGoB5NItZI4YGaFDsaQyyBmO3O35WAGSPmFe1UUo5tBSnP2fvSUle/ePL26avS19mVW8PsVOlhcKsa1SoypT5eRq7p11XvpUS960Ye+qiilzQUW3f7f8A+DdH/lKOP+yd63/6XaRX721+CX/Buj/ylHH/AGTvW/8A0u0iv3tr9l4H/wCRLR/7e/8ASpH+an0pP+Tm5l/3B/8AUekFFFFfWH8/BRRRQAUUUUAFFFFABXh//BTf/lG1+0H/ANk08R/+mu5r3CsT4l/DrRvjB8ONf8JeI7Ial4e8U6bcaRqloZXiF1a3ETRTR70Kuu5HYZVgwzkEHmgDwjwx/wAi1p//AF7R/wDoIq9VGL/glB8IoIlRL342oiAKqr8bPGgCgdAB/atO/wCHUvwl/wCgh8b/APw9vjT/AOWteT/Z0u5Ni5RVP/h1L8Jf+gh8b/8Aw9vjT/5a0f8ADqX4S/8AQQ+N/wD4e3xp/wDLWj+zpdwsXKKp/wDDqX4S/wDQQ+N//h7fGn/y1o/4dS/CX/oIfG//AMPb40/+WtH9nS7hYuUVT/4dS/CX/oIfG/8A8Pb40/8AlrR/w6l+Ev8A0EPjf/4e3xp/8taP7Ol3Cxcoqn/w6l+Ev/QQ+N//AIe3xp/8taP+HUvwl/6CHxv/APD2+NP/AJa0f2dLuFi5RVP/AIdS/CX/AKCHxv8A/D2+NP8A5a0f8OpfhL/0EPjf/wCHt8af/LWj+zpdwsYnxm+Dvhz9oP4V674J8X6d/a/hjxNaPYalZfaJbf7TC/3l8yJlkXPqrA+9buiaNbeHNGtNPs4/Js7GFLeCPcW2RooVRkkk4AHJOab/AMOpfhL/ANBD43/+Ht8af/LWj/h1L8Jf+gh8b/8Aw9vjT/5a0f2dNXSlv+m35v7wte1+n62v99l9yLlFU/8Ah1L8Jf8AoIfG/wD8Pb40/wDlrR/w6l+Ev/QQ+N//AIe3xp/8taP7Ol3Cxcoqn/w6l+Ev/QQ+N/8A4e3xp/8ALWj/AIdS/CX/AKCHxv8A/D2+NP8A5a0f2dLuFjy79sn/AJFL4ff9lU8Df+pNptfcNfO2i/8ABLH4QaN4n0XVmT4oatceH9TtdZsYdZ+K3irV7NLu1mSe3le1utRkglMcsaOBIjDcoOOK+ia7sNRdKPKxrQKKKK6BhRRRQAUUUUAFFFFAHxP/AMFQf+TpP2fv+vfxN/6Isa4+vrT9pj9i74e/tdvoD+ONP164uPC8k8mmXWj+JtU0C5tTOqrKPOsLiCRlYImVZivyjivMv+HO/wAD/wC58Xv/AA8vjH/5aV/L/i74B5hxhn39r4bFQpR5IwtKMm/dvrp6n6NwtxtQyrBfValNyd27prrY8Yor2f8A4c7/AAP/ALnxe/8ADy+Mf/lpR/w53+B/9z4vf+Hl8Y//AC0r8v8A+JRc3/6GFP8A8BkfSf8AEUsL/wA+JfejxiivZ/8Ahzv8D/7nxe/8PL4x/wDlpR/w53+B/wDc+L3/AIeXxj/8tKP+JRc3/wChhT/8BkH/ABFLC/8APiX3o8Yor2f/AIc7/A/+58Xv/Dy+Mf8A5aUf8Od/gf8A3Pi9/wCHl8Y//LSj/iUXN/8AoYU//AZB/wARSwv/AD4l96PGKK9n/wCHO/wP/ufF7/w8vjH/AOWlH/Dnf4H/ANz4vf8Ah5fGP/y0o/4lFzf/AKGFP/wGQf8AEUsL/wA+JfejxiivZ/8Ahzv8D/7nxe/8PL4x/wDlpR/w53+B/wDc+L3/AIeXxj/8tKP+JRc3/wChhT/8BkH/ABFLC/8APiX3o8Yor2f/AIc7/A/+58Xv/Dy+Mf8A5aUf8Od/gf8A3Pi9/wCHl8Y//LSj/iUXN/8AoYU//AZB/wARSwv/AD4l96PGKK9n/wCHO/wP/ufF7/w8vjH/AOWlH/Dnf4H/ANz4vf8Ah5fGP/y0o/4lFzf/AKGFP/wGQf8AEUsL/wA+JfejxiivZ/8Ahzv8D/7nxe/8PL4x/wDlpR/w53+B/wDc+L3/AIeXxj/8tKP+JRc3/wChhT/8BkH/ABFLC/8APiX3o8Yrf/4J9/8AKQj4j/8AZPNC/wDTlqtekf8ADnf4
H/3Pi9/4eXxj/wDLSvQf2bf2Efhr+yZ4p1vW/Ben+JU1fxDa29jfXmt+LtY8QzSQQPK8USNqF1OY1VppTiPbkvznAr9I8KPo+5hwlxDDOcRi4VIxjKPLGMk/eVup8/xNxzQzTAvCU6Ti207troewUUUV/VB+ahRRRQAUUV89/wDBRL4jajp/w58PfDzQL240/wAQ/FzV18Pi7tnKT6dpixvcancow5RxaRSxRyDlJ7mA89KUpKKuwPPrjxy37Z/xOPiqZ/M+Gfg7U5YfCFmDmHX7yBjFJrUv9+NZBKlov3Nqm5BcywGH0GqXhvw7Y+D/AA7YaTpdpBYaZpdtHZ2drAgSK2hjUIkaKOAqqAAOwFZ3xP8AiTpfwh8Bal4j1qSWPT9MjDusMRlmndmCRxRRr8zyyOyIiDlndVHJr56rUlVndkB8SPih4e+D/hSXW/E+r2Wi6XC6RGe5k2iSRztSJB955HYhVRQWZiAoJOK5fRtT+NvxnjE3gj4YWPhfRJhuh1j4ham+mTXEfZ4tMt4prkZ/uXRtHHJK8AH079mT9l+9g1q1+JHxKt0ufiBOjvpultKs9l4It5Bj7NbAfI10UO2e65Z2aREZYdqV75Xo0cBFK9Tcqx+fXxs+LXxO/Yv+J3hyP4n698P/ABH4d8Q6FrOoiz8M+GL2z1COeyayWGKKWa+mW4eeS9SJIRAjFyuHOdtfUv7EvwH1H4G/BrzPE3kP4+8Z3j+JPFssL+ZGNRnRFNvG/wDFDbQxw2sR6mK1Qn5iSavx7/Yp0X9of9qP4P8AxJ1zULgxfB9dVmtNGEWbfUru7Nk0M8zbsYtnsxKi7T+98pwymIBvaq66dCEJOUUMKKKK2AKKKKAPzD/a8/5SUfFT/sA+Hf8A0XeVz9fcHx6/4JlfB79pX4qXXjXxVpHi3/hJb20gsbm60bx1r2grcRQ7/KDxWN5DExXzHwxUtg4z0rkP+HLXwB/6B/xU/wDDv+MP/lpX51nfBVfHY2eKhUSUraWfRJfocNXCuc3K58nUV9Y/8OWvgD/0D/ip/wCHf8Yf/LSj/hy18Af+gf8AFT/w7/jD/wCWleV/xDnE/wDP6P3Mz+oy7nydRX1j/wAOWvgD/wBA/wCKn/h3/GH/AMtKP+HLXwB/6B/xU/8ADv8AjD/5aUf8Q5xP/P6P3MPqMu58nUV9Y/8ADlr4A/8AQP8Aip/4d/xh/wDLSj/hy18Af+gf8VP/AA7/AIw/+WlH/EOcT/z+j9zD6jLufJ1FfWP/AA5a+AP/AED/AIqf+Hf8Yf8Ay0o/4ctfAH/oH/FT/wAO/wCMP/lpR/xDnE/8/o/cw+oy7nydRX1j/wAOWvgD/wBA/wCKn/h3/GH/AMtKP+HLXwB/6B/xU/8ADv8AjD/5aUf8Q5xP/P6P3MPqMu58nUV9Y/8ADlr4A/8AQP8Aip/4d/xh/wDLSj/hy18Af+gf8VP/AA7/AIw/+WlH/EOcT/z+j9zD6jLufJ1FfWP/AA5a+AP/AED/AIqf+Hf8Yf8Ay0o/4ctfAH/oH/FT/wAO/wCMP/lpR/xDnE/8/o/cw+oy7nydRX1j/wAOWvgD/wBA/wCKn/h3/GH/AMtKP+HLXwB/6B/xU/8ADv8AjD/5aUf8Q5xP/P6P3MPqMu58nV9Nf8EWP+TdvHv/AGUfW/8A0KGtf/hy18Af+gf8VP8Aw7/jD/5aV7Z+zb+zD4L/AGSPh0/hXwJpt9pujS302pSre6veatcT3ExBkkkuLuWWZ2Ygfec4wMYr6XhnharldeVapNSuraX7pnRQw7pu7Z39FFFfanUFFFFABRRRQAV8z/8ABVn9gfUf+Ci/7M9l4G0nxRZeEr/TvEFprsV7d6e17C/kpKhjZFkjIyJSchuCvTmvpiioq041IOnNXTVn6M68Bjq+CxNPGYWXLUpyUotbqUXdP5NXPxi/4he/iR/0WvwR/wCEndf/ACZR/wAQvfxI/wCi1+CP/CTuv/kyv2dorwP9Uso/58L8f8z9a/4mE8Rf+hrU/wDJf/kT8Yv+IXv4kf8ARa/BH/hJ3X/yZR/xC9/Ej/otfgj/AMJO6/8Akyv2doo/1Syj/nwvx/zD/iYTxF/6GtT/AMl/+RPxi/4he/iR/wBFr8Ef+Endf/JlH/EL38SP+i1+CP8Awk7r/wCTK/Z2ij/VLKP+fC/H/MP+JhPEX/oa1P8AyX/5E/GL/iF7+JH/AEWvwR/4Sd1/8mUf8QvfxI/6LX4I/wDCTuv/AJMr9naKP9Uso/58L8f8w/4mE8Rf+hrU/wDJf/kT8Yv+IXv4kf8ARa/BH/hJ3X/yZR/xC9/Ej/otfgj/AMJO6/8Akyv2doo/1Syj/nwvx/zD/iYTxF/6GtT/AMl/+RPxi/4he/iR/wBFr8Ef+Endf/JlH/EL38SP+i1+CP8Awk7r/wCTK/Z2vnb/AIKxDzf+CevxKiywS5tLa3lCsV3xyXkCOhx2ZWZSO4JFH+qeUf8APhfj/mH/ABMJ4i/9DWp/5L/8ifnX/wAQvfxI/wCi1+CP/CTuv/kyj/iF7+JH/Ra/BH/hJ3X/AMmV9W/8O3/gR/0SnwX/AOC9as/sefA3wj+z9/wUim0zwT4f07wxp2q/DW5ury10+PyormWPVLVUkdRwWVXYA9QGPrXHS4fyapLlWHX4/wCYv+JhfEX/AKGtT/yX/wCRPkn/AIhe/iR/0WvwR/4Sd1/8mUf8QvfxI/6LX4I/8JO6/wDkyv2dors/1Syj/nwvx/zH/wATCeIv/Q1qf+S//In4xf8AEL38SP8Aotfgj/wk7r/5Mo/4he/iR/0WvwR/4Sd1/wDJlfs7RR/qllH/AD4X4/5h/wATCeIv/Q1qf+S//In51f8ABLf/AIIi+KP2BP2rZviTr/xI0DxVD/wjV7oEOn6foUtk265uLOYytI9xIMKLXG0LzvzkY5/RWiivawmEo4WkqGHjyxWy/E/NOIeIsyz3HzzTNqrq1525pO13ypRW1lpFJfIKKKK6DxQooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAr43/aG8ZWF7/wUO1O/wBav7PTtB+D3wzju5b28nWG1sX1e+uHupJHYhU8uDQ4GLscKkp5AZs/ZFfEGuWGreJ/2xf2mIdG1WLRNea10TS9M1KS0F4mnP8A2T5kMzQllEoSW4dyhZQ3TIyTXHj5ONCTjuOKTaUtv69fyZ88/AD4x+HviL+3BFLoPx3+Ffiv+0tTv7qJNI+Nt1rN5rtm0UzRWCeGMHT7XyFMRNzbyNIwsi5AM8u36g07Qx8cP27fBfhWdfP0H4baS/j3Vof4Xv5JjZ6OrjugZNTnAPSWxgYcrx5J+zT+xZ8R/hF8WbLxZ42+Inw++JWsCGWC/wBdu/Al9B4hmR0w0drdSaxPb6fC0ixu9va2kcDFTiNWbePdP2J4f7R/bc+P2ot
/rbfSvCuir/1yhj1K6X/x++k/WuXDRg5witeVf10X5L0RldyqOb6/1/W/RXskl9WUUUV6xoFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABXzv/AMFXv+Uf3xD/AOuVl/6X21fRFfO//BV7/lH98Q/+uVl/6X21J7ASVw/wd/5Sf23/AGS69/8ATtZ13FcP8Hf+Un9t/wBkuvf/AE7WdeHgv4yJR9eUUUV7pQUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAV8ZeGP+T7/ANoj/r/0D/0zQV9m18ZeGP8Ak+/9oj/r/wBA/wDTNBXJjv4LE9j0mua/YW/5O4/aH/66+Hf/AE3yV0tc1+wt/wAncftD/wDXXw7/AOm+SuDL/wCL8hI+qaKKK9ooKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACvnf/gq9/yj++If/XKy/wDS+2r6Ir53/wCCr3/KP74h/wDXKy/9L7ak9gJK4f4O/wDKT+2/7Jde/wDp2s67iuH+Dv8Ayk/tv+yXXv8A6drOvDwX8ZEo+vKKKK90oKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAK+bfij/AME/dU8XfHbxX468M/F/xn4Gn8ZfY21LT7HS9KvLZpLa3W3R0N1aySLmNFyN2M5NfSVFKUVJWkB8t/8ADv74gf8ARyfxC/8ACa8Pf/INd9+yh+yE/wCzNrvjPWb/AMceIvHmu+OJ7SW9vdVtbK18pbaExRIkdrDEgGCckgk17NRUxpQi7xVgCiiirAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACvNf2tPj7dfs0/Bd/E9hoK+J9TuNc0Tw9YaY98LFLm61XVrPS4N8+yTy0WW8RmbYx2q2ATXpVfP8A/wAFLP8Ak3Xw5/2VX4cf+pvoVAB/wuf9o7/ohHgD/wAOi/8A8qqP+Fz/ALR3/RCPAH/h0X/+VVfQFFAHz/8A8Ln/AGjv+iEeAP8Aw6L/APyqo/4XP+0d/wBEI8Af+HRf/wCVVeh/tJftNeC/2R/hZN4z8fapdaT4ehvLXTzNa6Xd6nPJcXM6W9vElvaxSzSO8siIAiHlhXO/A39u34ZftD/EzUvBWgatrlh400rT49Xn8PeJfDGq+GNWeydzGLuK01K2t5poPMXYZY0ZFYqrEFgCR96/Lrb/ACv+WvoEvd+L+tbfnp6nPf8AC5/2jv8AohHgD/w6L/8Ayqo/4XP+0d/0QjwB/wCHRf8A+VVfQFFAHz//AMLn/aO/6IR4A/8ADov/APKqj/hc/wC0d/0QjwB/4dF//lVXon7Rfx2/4Z38AQa//wAIb488c+fqdppv9neEdJ/tPUI/tEyxfaGi3riCLdvkfPyICcHGK7yhaq672/J/qg2dvn+a/Rnz/wD8Ln/aO/6IR4A/8Oi//wAqqP8Ahc/7R3/RCPAH/h0X/wDlVXpsX7QfhCb9oOb4Vrq+fHlv4eTxVJpf2Wb5dNe4a2Wfztnk8zIy7N+/jO3HNdnR0Uuj/RtP8U16poNm4vdfqk196afo0z5//wCFz/tHf9EI8Af+HRf/AOVVH/C5/wBo7/ohHgD/AMOi/wD8qq+gKKAPn/8A4XP+0d/0QjwB/wCHRf8A+VVH/C5/2jv+iEeAP/Dov/8AKqvoCigDzP8AZL+P95+0l8I5fEGp+H08L6tYa7rHh3UNNj1AX8UFzpuo3NhKY5/Lj8xGe3ZlJRThhkV6ZXz/AP8ABOD/AJIx41/7Kr48/wDUq1SvoCgAooooAKKKKACiivl/wx8V/j78dPiP8SU8F6r8INA8OeCvFc/hm1h1vw/qN/ez+TbW0rSvJFfQp8zTnACDAA5PWgD6gor5/wD+Ec/am/6HL4Af+Ebq/wD8tKP+Ec/am/6HL4Af+Ebq/wD8tKAPoCivn/8A4Rz9qb/ocvgB/wCEbq//AMtKP+Ec/am/6HL4Af8AhG6v/wDLSgD6Aor5/wD+Ec/am/6HL4Af+Ebq/wD8tKP+Ec/am/6HL4Af+Ebq/wD8tKAPfycDpn29a/N39rz9uf4gftBaDr3wH174IX/wv8TeKbu3stOuvE3ii0itdTSO5imM1lIqmK9+SNiYoJGlT+NFr6j/AOEc/am/6HL4Af8AhG6v/wDLSsP4kfA/9oP4xeDL3w74s1P9mjxNoGpJsutO1TwDqd3a3C5yA0b6kVODgjI4IzSkm1ZAbFfPf7QHx41L9jP9qbQ/igvh+w8T6XqXhS68KR6eNbjstRuLx7y3uUS2tyjyXbssbARwK0mR92rGh/8ABOL9p7wFa6nYeFPjv4B0XRL+LyrXT7nwvf6uNAPd7Ka9vpZwegEdxJPCgGEjQcV1fwE/YO+NX7PWrza1pniL4Faz4wvYvJvfFWu+FtZ1HW7xOpQ3Emp5iizyIIRHCp+7GtedQwc4T5mxWPof9kr42+LP2gvhFD4l8YfDHX/hNqF1cOtvomtXkFxePbhVKTuIifKLEsPKkCyLt+ZQTivTq+f/APhHP2pv+hy+AH/hG6v/APLSj/hHP2pv+hy+AH/hG6v/APLSvSGfQFFfP/8Awjn7U3/Q5fAD/wAI3V//AJaUf8I5+1N/0OXwA/8ACN1f/wCWlAH0BRXz/wD8I5+1N/0OXwA/8I3V/wD5aUf8I5+1N/0OXwA/8I3V/wD5aUAfQFFfP/8Awjn7U3/Q5fAD/wAI3V//AJaUf8I5+1N/0OXwA/8ACN1f/wCWlAH0BRXlv7Ffxs1f9or9mDwn4x1+20201rV4JhexaerraiWKeSFjGrszKrGPcAzMRnGTjNepUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAU2aZbeJndlREBZmY4CgdSTWF8Tvij4f+DXgq88ReJ9UttH0exC+bcTZO52YIkaIoLSSu5VEjQM7uyqqsxAPkEPw28SftlSrffEPTrzwv8MCQ9l4HmIW98RJ1WXWipIWFuCNPUlSv/HyXLG2hAOk+G
X7bPw/+LXj2LQNJ1C+U6n5h0HUruxlttM8WCIEzHTLpwI7vywrE+WTuRTIm+Mb69armfib8HfDXxi8BS+GfEWk29/o0nlskILQvayRkNFLBJGVeCWNlVo5Y2V42VWVlIBrym0+K/iP9kO6j0v4n6jN4g+HzOItO+IEqKkulgnCwa2qAJH6Lfoqwt0mWBgrzgHvlFIjiRAykEEZBHQiloAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACvn/AP4KWf8AJuvhz/sqvw4/9TfQq+gK+f8A/gpZ/wAm6+HP+yq/Dj/1N9CoA+gKKKKAPir/AIL5zXtv+wNZvplva3eop8RPBrWkF1cNbwTSjxBY7EeVUkaNC2AWCOVBJCtjB5T4R6t4o+Mn/BbW1m+MOi6T8NvG3wz+G90fCOiaFqb63pni6w1C5gF5fpqckNrK5t5IY4Ws3s0KMwlEkiuMfWn7WH7K3h79sX4WW3hHxNeazY6ba65pfiBJdLljinM+n3sN5CpMkci7GkhUONuSpIBU4IqfF/8AY+8M/GT9oz4XfFO6vtf0nxd8JpL8aZNplykUWoW19AIbizu0ZG82BtsbgAqyvEjBhyCYb3JXl/PN+ilSjC6+d73v7t7Lmswr++ml/JFerjUc7flZ999Lo/PZPiX8W/239e/aL17SvCP7UuoeJPCfjXWPBXgC+8C+PtK8PeHPDDaWdlu01hNrNmL2WS4Yy3BvbW5RkdY0yiba7D45ftL3LftSfArwN+1L8Tj8D9AvfhGfFHiCz0v4gt4QjuvFnmwwTW9xqNjdRSeTGrTmGNZ1hldXI84xAJ9Tav8A8E6NP0v4t+MPFngD4n/FL4R/8LEuU1DxTo/hWbSm0zWr4L5b33l39jdPbXMsQRJJLR4C/lRsf3g3187/ALUfwbsvC/8AwUwtvEPxD8NfHuP4ZQfDC08OeG/Efw0vvGF7fahdwXzyT22ryaDK9+WjSSN4XuB5cnm3LNJJIAEzpqypU3bZJ32vGjUTfb3ptNP4pSUG7SSvdX3nVlG+91be0qsGl392CafRR5raN28a8G6/8R/g1/wRR8PeKpfHPxkbxD4n+MGjCx8QeJfFWrza3q2gzeLYbeykb7TLut4rjTjFuijVI5Ek3Mp8xifYv23PiN4h/Z6/bc8SeLvjbc/HXSfgTc2ekp4H8a/DzW7yLQ/h/cxRzNeya7YWko84SXAhKzXtrd2pV44mVV80Hp/2W/2SfFH7XP7LHjDwn8XLn4paV8P1+J0Pif4bL4jv2k8WxaLZXdrfWSX0l4ktzsa6il2x3mbtYGVJGRgCPaP2kv8Agnfp/wC1Ne+ILDxJ8UPi3F8P/F81tN4g8C2mpWP9iassPlbofNltJL+2glEEYlhtLuCN8yHaGllL9F7OMv7ykr7pOnSXvW2krSvy7S0tZsy0lKd9mmtNrqrOTceji01y30ceux83XX7MHhXx5/wcD6zcS6x8RDHdfBWx8RrJp/xD16yR5zr0yhV+z3iAWhVQfsi/6NklvKyxJ6X9h/wHr37Qn7dH7S+t+MPiP8TNR0P4ZfE8WHhXw1a+KtQ0/TNNJ0zTZ5DLHbzoLqEkoFtZw1umZmEZadzX0J8TP2GtG8a/tK+GvizoPi7xp8PfGOgaH/wjFxL4eawe213SPtMdytjdQ3trcpsWRGKvAIplE0gEnIx0fwF/ZY8P/s7+Ofib4g0W81m6vPit4l/4SnV0vpY3it7r7LBa7IAkalY9luhw5dtxb5sEATh7U1FfyxqJesqynF/+AaX6NW7MKyc3KXWTpt+kaThJfOWvmnrrdHyv+zl4Iv8A/gpn41/aG13xt8RPijolp4P+IerfDvwnp3gvxpqfheLw3backcTXJSwniW6uZZneYteCZQAiqiplD63/AMEh/wBpLxV+1X+wV4T8U+N7r+0vFVte6poOo6kLZLYavJp+o3NkLvy0ARDKsCuyoAocsAAAKueLv+CcGlT/ABN8beJvA/xL+Knwjf4lyrdeLtP8I3enCz126EXkm7xe2VzJaXDRBUaWykt2bYjE+YoevYvgj8E/C37OHwl0DwL4J0a30Dwp4YtFsdNsIWd1giX1dyzu5JLM7szuzMzMWJJml7tKMX/LBNd5JWlL5u77yveVnFFVfeqNr+aTT7Rd7Q+S5fTl0vzM6qiiigD5/wD+CcH/ACRjxr/2VXx5/wCpVqlfQFfP/wDwTg/5Ix41/wCyq+PP/Uq1SvoCgAooooAKKKKACvn/APYR/wCRi+Pf/ZVdS/8ASHT6+gK+f/2Ef+Ri+Pf/AGVXUv8A0h0+gDd+Pn7a2gfBP4oaN8P9N8P+K/iN8S9esZNWtfCPhWG2fUI9PjbY99cS3c9vaWtuH+RXuLiPzXykYkYFR1PwK+NVz8ZtH1F9R8DeOfh7q2kXQtbvSfE9nAkykosiPFcWs1xZ3MbIyndb3EgRtyPskVkHy1+yRnSP+C4f7XFtrny6vq3hrwbf+HfP+/Lo8dtcxTeTnkxre+Zv28b5Fyc8Db/4LO+M9R0r4P8Awk8KR6xq2geG/ih8WPD3g/xVe6dcvaS/2TcyStNAZ0w8STtFHAxVlJWYrnDEFQu40+rqOK8k5TUV/wCA7S397mtskOdlOd9FBcz72VNTfz/lWnS73PsSivgL4nfAvwd+wp/wVB/Zit/gj4O8PfDy3+LD6/oXjPQvC9lFpena3p9npz3cF3cWkCrE01rcEBLgrvAuWjLESAV53+y/+xV8MP2sf2pv28X+JXhDTPHEVr8QDZWFrrSm8tNJL6HZl7m1gkJjt7s5UG5jVZsRoA4CgVnOqoxlLpGMpP8A7dlFWXe/Pe/dNbp2pQ96MZfacUv+3lN3fo4PTtZ9bH3p+0h+1V4e/Zevvh3b6/Z6zdv8TPGNl4I0s6fDHIIL26jmkjkn3yJthAgfcy7mBK4Q849Nr8avE/gnQf2uf+Cdv/BOHxf8UfDvh74heKtc+Inhvw9qmseItMg1K+1SwMOorJbXE0ys8scphjeRHJWR13MCea+x/wBqPQf2fdG+KXhL4Kn9nd/jTruj+G21jS/h3oui6W+h+GtJFw0Ivxaanc2ukQOZi8KPGftRDOqjy99dE4OHPDeSqTgvNRhGX6tvsr30jd5c13F7J04yfk5TnHf5JLu7dZWX2dRX4jeHbvUfiB/wRO0DwlcTeKPC9h4Z/aYs/B2iQHUt2r+F9Oh8VpHbWiXKSzASWqOI0dJZFTyk2syqpr9hfgR+zn4C/Zf8Bp4Y+HXg7w54K0BZPOay0bT47SOeUqqNNLsAMkrKi7pHJdto3MTSilKm6q2ured4U569tKiWjlqnrYJO1T2fa9/lOcPzhfpueWeCf2zPFfxg/bT8ffDPwd4C0K68LfCi602w8V+JdY8USWF0Lm8tftYSwsY7KcXIjieHc01xb5Z2C5C7j9D1+ef/AATc/ZJ+FXg7/gpp+11rOkfDD4e6brHg3xZoyeH7208OWcNzoguNAga4W1kWMNAJWlkLiMrvMjls7jny39mn9nD4cfte/wDBHjxf8ffiZplhrXxp1
az8S+J7zx7f7Y/E3g/UbG7u2to7K/IE2npYm2hEcULRxp5Z+TDODzzrKnhlWn0hGcu/vRT0XXr2tom9UzojRdSv7OHWXKvvau/u8+tkkj9XqK/Lj4D67c/8FEP2qv2cPCX7QGmW/iPQX/Z1sviLJ4U1u0WbR9e8Q3NxFbT3dxaSKYp5IITuRJA4hN0WVVY7j6T+2N8GPCXgn9pn9kv9nWy07+wvgX411vxNqeseF4pZP7H1ia0s/tdrpMsTEp9iM08sgsRi3It1QRbECr11aMqcuR7uUkv+3Jyg/wD0htLW+i3djlp1YzjzLpFSf/b0FNfhJJvRJ36K7+/qK/O6L4EW/wABf+CoPjD4KfAv7P8ACbwp8U/gjqGu6lY+G4EtdM8Ma4l4lhZazb2Ue2K3uGjcqxjVPO+yoTloy1P/AOCVOlad+yb8ebP4G+Ovgb4f+Ffxoj8Hfav+Et8I6il7onxWsbNrWG41S4dRFcNe/aZS2NQhaZPNmKTETOGzo2qWa6ptd3yynGSXTTkb3u4u6WkraVf3d79Gk/LmUGr9dee21rqzavG/6HV43oX7Wv8AbX7fHiH4H/8ACP8Al/2D4IsfGX9tfbs+f9pvbi1+zfZ/L+Xb5G7f5hzvxtGMnwr/AIKeWNv8X/2zP2UfhB4tT7b8KviBrut3/iPR5mT7B4iudN08XWn2V3G3+vg84tMYWyjtbpuRsDHEfsnfAzwd+zV/wXJ+MmgfDy2Sw0iL4P6NeR+Hba8zYaBK+p3jfZbS3zssoG4lEEYWMNO7qoD1nGaU4yn8P7z58tOcvVe8r26pLpIdRWpScd7QfpzVYx+el/v7o/RCivxY/ZQ/ZR+Mf7f37Bml/Fjw78OfgVa/HTxhrNz4jtfjJq3xA1JPF+k6jb6lJH5Rjj0WVobaKKD7ILFL0wCFdpUbmWv2jtRIttH5pUy7RvK9C2OcVs4OK9/e9vwXz3utlsn1aUydp2W2v4Pr/wAP3vayv4H/AMEt/wDkxLwN/wBv/wD6cLmvoCvn/wD4Jb/8mJeBv+3/AP8AThc19AVAwooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAK4D44/tDaV8FU0+wFpf+I/F2vl00LwzpQV9R1d0xvZQxVIoE3L5lxKyQx7l3OCyhuf+KX7RGp6p41uvAPwvsrHxF45tgo1S+u9x0TwcrqGWS+dCDJMUIaOyiYTSBkLGCJvPXe+B37O+mfBl9Q1Sa9vvE3jPXwh1zxNqm1r/UymdkY2gJBbx7m8u3iVYk3MQpd3dgDn/hj+zzquveNbPx98U7uw17xnZln0jS7Ms+h+Dg6lWWzVwrTXJQlHvZVErBnEa28btDXsNfibe6xdfsrf8Fpfjt+0xHPcroPhj4o6H8PPHpM7+TH4f1fRbCKG5dSdira362zluPlkbrgCu/8A+DkLUJv2tfCfjz4Z2V1P/wAIh8Bvh9dfEvxcIJWVLrVbgPbaHaOVIPy7by7K5IPlRbhgjMKd6NOqvtK7Xb3FVevW0Gmtk5e75mkIc1aVJ7J2T7++6a9LzVutotSfY/XOo7u0iv7WSCeKOaCZDHJHIoZZFIwQQeCCO1fDv7T3/BTW1+Avxr+HXwY074j/AAU+EuqXnguLxdrHir4m3G7Tra0z9mt7O1tReWX2i6mlV3JN0giigY7JC425fgv/AILF6p4h/wCCZXxr+MFho/hLx/4r+CGsXvhy9n8JXklz4b12SA27DVbZ0M0gsRb3UdzIgeV0WKVA7lQ50naPO1qo833RlyN9l72mrXf4dTGi3UVNWs58unnKPMl53jrdadN9D3t/h94k/Yvc3XgPT7/xX8K1O678GQfvdR8MJ3l0fJzLbryTp7HKr/x7MuxLWT2L4a/EzQPjD4KsfEXhjVbTWdF1FWaC6t2ypKsUdGBwySI6sjowDI6srAMpA81/Yo+K/iX40/DiTxDqfxB+EHxW8PaiVl0bxP8AD61nsbK4xuSe3eCS8vV3ROn+tW6yxdkaGIxbpJPiV+z3q/hXxrfePvhTc2Gi+Lr5ll1rRb1mj0PxltUKPtQQMbe72KES9iUuAEEqXEaJGrnBwfK9whNTXMj2OiuC+B37QmkfG631C0S2v9A8VaAyRa74b1VVi1PRZHzs8xVZleJ9reXPEzwyhWKO2DjvakoKKKKACiiigAooooAKK4D9mf8Aaa8IftdfCmPxt4F1BtW8NT6lqGmW95s2pdPZXk1nLJH/AHo2kgco38SFTxnFd/QAUUUUAFFFFABXz/8A8FLP+TdfDn/ZVfhx/wCpvoVfQFfP/wDwUs/5N18Of9lV+HH/AKm+hUAfQFFFFABRRRQAUV88/wDDVvinwp/wU2Pwa8SafokPg3xb4H/4SbwXqkEUiXdze2lwIdTsp2aVkcok1rMm2NMK75LYyMD9lH9pr4w/tf8AwJ8aeMNAsPh1pdrqHjnVdF8C319FeNB/YFpcPaJqlxEkm68meaGZ1gSS0SRNg82P75UfeipLqm/unyW9ea+/RN7IJe62n0aX3x5/y/HTdn1DdXcVjayTzyJDDCpeSR2CqigZJJPAAHes3wN480P4n+D9O8Q+GtZ0rxDoGsQLdWGp6Zdx3dnexNyskUsZKOh7MpINfKnwn/ar+It9+0Z8cPgN8VJfA+t674S8F2ni7RfEHhXSbvSLbUNPvEuIHhuLO4urpop4ri3fDJcOskciHCMrA+I/sC/tX+JfhF/wTQ/Ye+F3w/03Rrv4h/GTw0lrp9/rSyS6X4fsrGz+03t/PBE8ctyUUxokCSxb3lGZowOXD3lJx1+C1uvM6ie+3K4NO9ktW3ZXCfuyjF6fFfy5VCS23up3stdla+h+l9FfIfh39s/4lfB79ovxv8HvikfAWveJ7bwDc/EPwh4i8N6XdaTY6raW7CC4s7qwnurqSKeKcowdLlllimGBG0bZ2v8Aglz8fvjV+19+zr4N+LXxJHw58P6F468O2uo6f4b0PSbtr63keKIm6lvpLx4zHKRK62wt90SSRK07sj5dNc6co7JJ39XONvXmhJfLtqKb5Hyy37fKL/8ASZxf4b6H1FRRRSGFFFFAHz//AME4P+SMeNf+yq+PP/Uq1SvoCvn/AP4Jwf8AJGPGv/ZVfHn/AKlWqV9AUAFFc98M/it4c+MvhqTWfC2sWWu6VFf3mmNdWj74vtNpcyWtxGD3KTQyISOMqcEjBroaACiiigAr5/8A2Ef+Ri+Pf/ZVdS/9IdPr6Ar5/wD2Ef8AkYvj3/2VXUv/AEh0+gDsP2hf2Nvh3+1Fq3h7VPF+i3ra94Tlll0XXdG1q+0HW9K82No5o4dQsJoLpIpFbDxCXY+FLKSq4yrL/gn38IIvhD4o8D33g9PEeg+NpFm8QP4j1K813UtZkQKsMtxf3sst3JJCEQQu0xaDy08spsXHstFKySt0f+d/z19R3d1Lqjyf
4IfsTfDr9n3xvdeKNC07XtR8U3diNMOueJvE+qeJ9Ugsw/mG1hutSuLiaC3L4doonVGZVZlJUEbnw2/Zn8EfCHxP491nw7on9n6l8TtTGs+JpvtlxL/aV2IEt/M2u7LF+6jRdsQReM4ySa7TVdTg0TS7m8uX8u2tImmlfaTsRQSxwOTwD0rnPgZ8bvDH7Sfwg8O+PfBWp/214T8WWMepaVffZpbf7VbyDKP5cqpImR2dQfam/fTT10s/STvZ+Tcb26teQvhtbTW69VpdeaTt5X8zgr//AIJ6fCDUf2YPDvwcfwk0fw/8HzWt1oNnBq99BeaLcW03n29xbXyTC8hmjkyVlSYPgsN20kGt8Qf+CdHwr+J2oeGNQ1Wz8arrvhHSJdAsNesPHuv6drkunyOkj2t1qNvex3d5EXjR9tzLKNwLdWJPuNFDbb5nu3f52tf1tpftoNNrRdNPl2PDtA/4Jt/BHwn8HoPh/pPgKw0rwZa+LI/G9vpFleXVtbW2rx3S3cc8YSUbEWZFYQqRCANuzb8te40UU7u3L0/4CX5JL0SXRCsr3/rdv8236tvqcb8Pf2f/AAj8K/iL438WaDpP2HxB8Rry3v8AxFdfappf7Qnt7dLaF9juyR7YUVcRqoOMkEkmvMvFH/BLz4H+MPFes6ne+D737L4k1IaxrXh+38R6pa+GNdvMozXF7osVyum3UjtGjSNNbP5rKGfcea9/opLRprdaLySta3pZfch3dmu55h+0D+xz8Pf2nbzw5eeLNHvv7Y8ITSzaHrOi61faBrGk+bGYpUgvrCaC5jikQ4eNZAj7U3Kdq4y9d/YD+E3ib4L2HgLUPC0l3oelan/bllcyavfHWrTU/MMv9ox6p5329b0uzE3Qn847mBchiD7HRSstv63v+evqLt5HkXgD9hH4V/Djwh4y0W18MyatB8RLf7H4ovPEOrXviDVPEFv5TQrBdX1/NNdTRJGzqkbSlIw77Au45d8Cf2Hfhx+zp4+1DxX4e07xBe+KtT0+LSJdb8SeKdV8TalHYxyNItpFc6lc3EsMHmOzmKJlRmwxBIBHrdFVdp3W+3y7A0mrPbf56f5L7l2OC/aF/Zk8E/tT+EbLRfG+kS6lb6VqEOr6bc2moXOmahpN7Ccx3NpeWskVzbTLlhvhkRirMpJVmBwPgd+wr8K/2cfiTf8AjHwf4XOneLNX0tNH1PWZ9TvL6+1eBJnnDXc1xLI9zOZJGJuJi8zAKrSFVUD1yilH3XeOj/zVn9609NAl7ytLVf5O6/HX11PBrv8A4Jn/AAaufH2s+IE8Oa1ZHxJqn9tazoth4r1ey8N6zekIHnutGhul064eTy0Mnm27eaw3PuYkn3miihaRUFstgesnJ7s+f/8Aglv/AMmJeBv+3/8A9OFzX0BXz/8A8Et/+TEvA3/b/wD+nC5r6AoAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAbNMtvEzuyoiAszMcBQOpJrwS4+KHiP8AbDuH074a6jc+G/huGMd/49hRftOtgcNDogcFSh6NqDqYwOLdZWYzQQftt+AfEPijVfD1/f6fqHjT4Qaakp8W+DdIQrf6ocqY7h0GW1C1iAbzNPUoZgScXOFtn9n+Gfj3w/8AE7wJpmueFdR0/VfD9/CGsrmycNCyD5dox90qQVKkAqVKkAgigCP4W/Cnw98FfBVr4e8L6XBpOk2ZZlijLO8sjsXkmlkcl5ZpHLO8sjM8jszMzMST0NFFAHxto3/BL+48aa5+2Tpvjy60S78H/tM39tJp0djLLJdafDHpENkXmVkRUlSaPzE2O4+VTuB4HmHwr/4JB/FDQP8Agkl8afhZ4w8beG/F/wAfvjdb3R1zxTcTzrps0wijtbKMyCASCGK1giXiDhmfCnqf0YoqYxUYuK6wjD/t2Oy/BXfUtTanGfWM5TX+KWr+XkfKHxS/Y8+I3gP9rHwd8cfhS3gjWvFFp4KX4feKfDfifUbjSrDVtPSU3UFzbX9va3MkNxDcZBV7aRJYpWGY2RSfW1uPjbN8Dr28Np8LLf4lPdrc2mji7v5tDithJHus5L/y0neR41lAuxaKI2kQ/ZZBGRL6pRWjk2retvK8nJ/fJt633t8OhlGEY2t0SXrZKKv6RSWltk90mfKH7AH7DOvfs7/tDfGX4na7oXw6+H8nxVk01E8GeA7ma60ezNpHKZL+a4ktbPz725luJC7C1jwqJlpGLNX1fRRSvoorZKw7auT3f9f1971PPPjj+ztp3xiuNP1i0v73wt430BXGieJtNC/bdPD4LwurApcWshVfMt5Q0b7VYBZEjkTF+FP7ROo2/jW28A/EywsvDPj2dX/s64ti39jeL0jUs02nyOSRIFBeSzkYzQgMQZolFw/rteQftjeKfAy/D+Hwt4t0S68Zal4nl26H4Z0znWNTuoSrrNasHRrdoG2ObzzIltjtcyxnaaQz1+iuA/Zg8L+OPBvwS0fTviJrNvr3imDzTPcxESNHEZGaGGSYJGLiWOIpG04ii81kL+Wm7Fd/QAUUUUAFfPX/AAUQ+I17H8O9J+GHh68nsvFPxduJNGW5t32z6TpKIG1S/UjlGS3YQxuPu3F5a9jX0LXyP8edPn8Cf8FD9P1bXCbnTvH3hBNF8L3L8Jpl3Yzz3V7ZDsHuoZYbgH7zrp0oY4hQDKvNxpuUQOE/Zu0z41/sbeAtR8DeB/Cnwc1LwhD4i1nVtIe98Qahp88FtfajcXkcDQx2MiJ5SziMbXIIQHjOB3//AA0n+0r/ANCH8Df/AAsdU/8AlbXYUV5CxtVK1yLs4/8A4aT/AGlf+hD+Bv8A4WOqf/K2j/hpP9pX/oQ/gb/4WOqf/K2uwoo+vVe47s4//hpP9pX/AKEP4G/+Fjqn/wAraP8AhpP9pX/oQ/gb/wCFjqn/AMra7Cij69V7hdnl/wAUP26vj38EfB0vibxD8PPhDcaHp9zapepp3i7UWuzFLcRwsYw+nqhYeZkBmA4616p/wUs/5N18Of8AZVfhx/6m+hV4p/wUG/5NH8Uf9ddP/wDThbV7X/wUs/5N18Of9lV+HH/qb6FXo4OtKpFuQ0fQFFFFdYwooooA+Xv+Cn37IfxE/aV8G+D9e+DmteHPDnxY8A6heS6Pf65cT29kbW+sLixu4neGKV/uzRyqNhBkt4846jB/ac/4Jw6x4g/YC+F/wb+G19o/2D4Z3uhG80DXL25sdH8eaXYgR3Wl6jNbpLIsNyhaRswzKzooeN1Zq+v6KUYqKaXWUZd9Y/Ctei1dtnd3vcpyvKMn0Tj8pb7dez3XRo+Df2OP+CXXiT4Bftb/ABP+JQ8IfAv4X6B8QvAFv4XtvCHw9iljtNKvIbm4Yyyy/Y7ZLnzEeNmnEELDIi8phEJpYvh3/wAEufiN8Hf2av2TJ/D+u+DpPjH+y9p0mnG1ubq6Xw94ntbu2W1
v7NrlYTPbhlWN47j7NIUaIAwsGOPveiqTcVaGlrW+Tm1vf/n5JO97p2d0S0nLmfn+MYxe3lBarVPVanyXon7F3xA+L3x98Y/GH4nHwVovjG88A3Xw88K+HfDuo3Oqabo1rcP59xdXF/NbW0txNNMIhtW1jWGOLA81nZh6r+wB8AdY/ZU/Yi+FPw18Q3OmXmu+BfC9hol/Pp0jyWk00ECxu0TOiOUJU4LIpx1Ar1+iiL5IuEdE7fg5v8XUk35sUvekpy31/FRX5Qivl3bYUUUUhhRRRQB8/wD/AATg/wCSMeNf+yq+PP8A1KtUrR/b2+Muq/DX4NweH/Cl2bPx98SL5fDHhydBl9OklR3uNQx0xaWsdxcDPys8UcZ5kFZ3/BOD/kjHjX/sqvjz/wBSrVK4b44rcwf8FILN/EnNrP4EKeBiP9SjLeZ1tef+W7Z0g+8SfKPklJyrzcKbkgPN/wBiLxj4y/4J+/BG4+FmkfAbxX4g0HQ/Euu3Wj32meIdGS3msLrVbq6tQFuLtJQywzRq29Qdwbr1Prv/AA8B+IH/AEbZ8Qv/AApfD3/ydXQUV5Sx9RK2hNzn/wDh4D8QP+jbPiF/4Uvh7/5Oo/4eA/ED/o2z4hf+FL4e/wDk6ugop/2hV8guem/s2fHOw/aZ+AXhD4g6XY3+m6f4w0qDVYLS92faLVZUDeXJsZk3LnB2sRxwa82/YR/5GL49/wDZVdS/9IdPpn/BKL/lG38FP+xTsv8A0WKf+wj/AMjF8e/+yq6l/wCkOn17JRgft3/H7xto/wAfvgb8FfAGtJ4Q1j4yajqc2peJ1tIby80TStLtVubn7HFOklubmVpIYleeOREDO3lOcbeo8QeCvGnwR+AniYeJ/wBor7Jplrd/bJPHXibQtItdR0HS/KQTq8saW+m+csgd4riWzMcasqyQTlS7O/bJ/Yvl/aW8S/Drxn4a8U/8IL8TvhLq0mqeGNdl0wapZhLiMQXtnd2nmwtPbXEHysqTQurLG6yDZhuJ/aD/AGAvH/7VHwf0Wy8b/FnQbzxz4T8caV468P31n4HEPhvT57Ajy7aXS3vZLi4hkVpS++/3iR1eNowipUxvy26t6+nNHVei1tdNtSV7SLdrr0f3+9v66K+trp293XyL9jj9uGf4qftQ/H74RWHxx0n9ovwToXgay8V6D4tt30aa60+ScXFtdabPPpMUNpNteGOZCIUkUTlW3ja1ef8A7D37QXjTQv2Av2BPgt4B1eHwprHxl8Lyyaj4m+yxXd3oel6ZYrcXJs4p0e3N1I0kMaPOksaBnJik4x9Q/B3/AIJ7+K/DP7Vnj34xeN/ihY+LfFHxB8EQeDLqz03wqNH0zTFguLiWOS1jN1PKE2zDck00rmTzGEqoyQxc94c/4JK/8IJ+zR+zv4d8P/EJ9N+Jn7M9usPhfxg2hiW0vQ8H2e7gvNP88NJa3EPDxx3MciskbLKCvNQso+/u+S9v7sq1u20ZU2+rV9XK7M53dTTZc9vnCl67yjP0etrWRk2/7Q3xJ/Ze/a+8afBXxL8QtS+JVlrHwvv/AIieEfEWsaTp1rrOkXFm621xZXH2G3t7SeEs8c8T/Z1cEyo5cbDXVf8ABIbWvi18cP2Sfh58Xviv8UtQ8Vat8QPCtlf/APCP2ekabZaLYB4YitwpjtVumupApeXM/kB55FjhRVTHReCv2DdY1P4leNPiL8SvHNj4x+JfinwnJ4HsL3SfD7aNovhrSXJkeO1sXurmUySzkSTSS3Tl/KiVBEq4Po37Gf7O3/DI37J3w6+F/wDbH/CQf8ID4fs9C/tP7J9k+3/Z4lj83yt7+Xu25272xnqaqj7sJc+srRXl8VZv5qLpJu121o3qwq+9Ncnw6/8ApNK33yVRpdnra9jwT/gp7+23rHwI+MfwX+FOga14q8JXPxXu9Su9U8ReGPBlx4t1rS9O02GKaWOysYra6BnneWOPzZLaeOKPzWKZ2svh3jn9vX42fDP9ij9r7UbXVPiVer8J9Ag1z4e/EXxr8NpfC2o6n9ohkaW3ls7rT7S2uJLSWIr5kdoiMk0W5S3zN9i/tgfsbXH7Rfiz4e+N/C3ixvAfxQ+FOoz3vhrW5NO/tSxMV1GsN9ZXln5sJuLa4hG0hJopEdI3SRSpDc98eP2LfiD+1b+xl8Vvhf8AEX4q6Near8S9JfR4NR0Pwd/Zul6BEVxvis5Lye4lkYklzJelTtTYsWG38tWNT6tVUPjalb1duW29tLL7Oql0fvbwlD21Ny+G8b+Vm737pr13Wl0mvJdS+Mfxo/Zq+JH7MfiLxR8VJvH2mfHTXYvC3ibw1L4f0+w0rSZrvTZryC60toYReR+VJbbGW7uroOkzn5WCFfcP+Cn/AO2LdfsF/sO+OPidp1jBqWsaLHbWul29xby3EDXl1dQ2sLSRxESSRq8yuyIQzKhVSGIqT44fsRf8Llh+Aaf8JP8A2b/wo/xTY+Jc/wBned/bX2axuLTyf9avkbvP37/3mNuNpzkdp+1p+y/4a/bO/Z28UfDPxedRj0HxVbLDNPp1x9nvLORJElhuIZCCFliljjkUlWXcgyrDIPXi3GXtFT253a2j9naFrba357Nu60u0rHNhVJey9p/JHm6+/eV/w5b20frc+SP2bv2mfilon7X3ww8LQeKvj78ZvBvjSxvrPxff+N/gpe+DrbwlewWpuIL22uv7HsI1t55EkgME7zuC8OJM7t6fDj9s/wAU3v8AwUV8T/D/AOKfxc1n4PeJ18XzW3gHwHqXhizTwt4+8ORrbJHcW2oy2wnur+YzudkF/GYpgqm2kWKRJPpX4BfCT43+EdX0n/hY/wAZPC3jXStGs/I8rQ/AX9g3eszeWIxNfTS394rd5NtrFa/vMHPl5iPBfG/9grx/+0z428OWHj74taPrPwv8LeN7bxvZ6PbeCls/EE8tpM9xZ2c2preGA28UrIMxWEUzxQqjSlmkkdqyrwb1jqn85J321aV7Jppq8dNHEd3Snb4tLebUX9yeiduVppS196/IfBrxx8Xv2/fiT8bdY8PfF3VvhH4Z+GvjS+8BeF9L0XQtK1KHUrnT0QXN7qhvraWaVHuZCohtZbTEUWBIXbzRwfxv/bv8S/Er9uL4qfC+z+Ivxg+GWi/B7TtNsvtXw3+EV14yudd1a/tRdNNdy/2XqcVvbQJ5aJAFhlkZ5WMrKqge06V+wP4/+CHxd+JWsfBj4saF4H8MfFrVj4j1zRdb8FnXpdM1aVPLu73TJ1vrZIGnVYnKXMN0gmQvtKuYq0PFH7DnjTwT+034h+Kvwg+Juj+Etd8faXZaf4zsPFfhR/Eema9PZIIrPUI47a9sJLa6WIyROUkMUiFP3StGGOEVeEIy/lSffn5UpO/Zvmt5uLSVtNpP35td3y9uXm91bbqNrvrZ6u9388/Hz9vv4/aN/wAEhvhr8SVtG+GXxl1vxvovhjUY9c8LS28Uqya39geaTTrsLPFDdRKsuzKSqkw2SKcPXrnhn4kfE79mr/go/wDDv4W+KPiXqvxT8L/F3wlrGqA6xo+m2Fx4e1
LTJLZm+yNY28GbSaK7ZfKuPPlQwxkTHL7u3/ax/YW1f9rn9mLwr4D1/wCIcx1rQfFGjeKL3X5NEiP9oS2F+l40K20TxLFG+zyk+Z2RApYzMGZum+KP7JH/AAsr9tL4WfF//hIPsX/CtNG1zSf7J+w+Z/aX9pLbDzPP8weV5f2fpsbdv6rjnRSaqSk+s5vy5XSSj00XtL2slZ62SMWrxSWi5Y+qftJNq+t/csm23daXbPjz9vb9vS//AGMfGGi6xp/7W3hPxf49sPHWj6T4i+EEUfh2O3uNOvLuO1mht7RVbWLaeKGdLjzJLyYbomJQRuET9KK/Pa4/4IkeLX/ZUsvgTB8bdJsfhR4e8UR+J9Ggt/ASjXZnj1kanHBqV6180d2oy6GSG2tZWdYnZ2VZIpf0Jop2VBRe/M/W3LC3/kyl1fe+pdT+O3H4bW+alPX5px6LtbQ+f/8Aglv/AMmJeBv+3/8A9OFzX0BXz/8A8Et/+TEvA3/b/wD+nC5r6ApAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFeM/EP8AZ+1rwJ4zv/HfwmmsNM8R6jJ9o13w7eSNFoni9gAN8u1WNre7QFW7jUlgFWZJlSPy/ZqKAOF+B/7QOi/HPT79LWG/0XxDoUi2+ueHtUjWHVNDnYEhJowzKVYAlJo2eGVRujd15r40/wCC548a+MfFH7N/gax8H+CPF/gXxh8UNOttV0rX/E1zYWviOeO2vZ49PvreOwuEax3Qxys7GTMkSKbdh84+v/jh+zpZfFnULDX9M1G68JePtCjaPR/EtgitcWyMQzW08Z+W6tHIG+3k+UkB1McqRyp8q/tU3fjv47ftA/sz+E9e8L2ukeNvAXxPt/E+pvDfRx6TrelQadfRS3+myTurTbWmj8yz+a5h3ElJIgs7za9Wlfb2lO/pzxu35Ja9tNdLlc1qdW2/JO3ryStbzb0766a2L3wM1D4ffs+/twfE3S9H+CfhfwL468EfBnRNW1O58N69M2kSWpmvymk2tr9nhhiihlt5CtwsEbyCX5olxiqPwz/4LE+MfFH7Mvwt+OviP4HReFPg18RL3StOu79vGaXeu6K+oSrbRXf2BLQQy2H2qSJfMN3HP5bmQ2wI8s9p4g/Ze8dX3/BQn49+OItD3eFvGnwg0vwto179ttx9s1KGfVHkg8vzPMTC3MJ3uqod/DHBx5Jqf7CPxWuP+CCPwm+CyeFc/EzwzF4SXUtG/tOz/wBGNjq1ncXX7/zfIbZFFI3yyHdtwu4kA1h5OXK6n80dPKVatz/+S8jb81J/E24qx5ZOMO0tfNUqXL/5M5JLy5do2X2f8b/HnxO8P6laWHw3+HegeLJ5IjcXV/4j8Wf8I/pcCglfJR4LS9uXuCdrBTbLFs3EzBgI2+Fv24v2w7f9uH/gk14e8ZjQLnwlrNp8XvDHh/X9BnvI7xtF1Sx8V2dtdW4njASZBJGSkigb0ZW2qSVHs/7dvw0+Lviz9rHwzfQeH/i14++CEfhW4ik8PfDXx3D4O1W08Qi5UpdXlwdQ02ae1e1dkVI7p1R42ZoSSjD54+H3/BNP40eFP+CWmrfDO68HqPGc3x6g8Zx2C+KY9UDaSvia1vmuPt9zIrz4t43fdPtuJNuWjEjbaML71SEp7KUJa7rlrwWuys480uvupSuveTMVeNKahu4yX30ZvTr8XLHW3vNpJppn1b/wUD+CP7JFppWp+PP2ldJ+E5XVtP8A7Fg1bx1NA8scccc0gt9NNwxaCcq0r7bLZK7AN8zKpF3/AII+XXxKvP8AgnB8MJPi0Ne/4TRrCbzDrpc6q9l9pm+wteFyX+0Gz+zmTf8APuzu+bNa37ZHwq1jx78c/wBnrW9M8Far4ptvBHjWbVr+/tPEVnpcPhyF9Mu7Rru4hnid7yPFyy+TA0cm4q27apUzX3xZ8R/tb3s2k/C/UJdB8BRu0Op/ECONXfUMHDwaIrgpKeCGvnVoE6RLO+4wlLSE/Nr8Fe/ndy320a1d7OprKK7K9/VtW8lZJtbv3dklff8Aiv8AtFX7+NZ/APw10+z8T+P4kRr+W4Zv7H8JRuNyz6jKnO8qd0dpGfOmyp/dRFp02Pgf+zrYfCC6v9avtQvPFfjnXkRda8Taiq/a70KSVgjVfkt7WMlvLt4gEXLMd8jySP0Hwo+Efh74IeC4PD/hjTY9N02B3mYb2lmuZnO6SeaVyZJppGJZ5ZGZ3YlmYkk10lABRRRQAUUUUAFeU/tnfAW7/aF+A9/pmizW9l4w0aeLXfCt7Nwlnq1q3mW5cjnypCGhlA5aGeZf4q9WooeujA/Pb/gnF+zB4S/br/ZyvPid43ufjZZeIdf8Y+JopbBfin4o0X+yorfW723hszaWeoRW8JgiiSIrHGozGTySSfef+HUvwl/6CHxv/wDD2+NP/lrX0XZ2MOnxGOCGKBGd5SsaBQXdi7tgd2ZiSe5JJ61LUKnFK1gPm/8A4dS/CX/oIfG//wAPb40/+WtH/DqX4S/9BD43/wDh7fGn/wAta+kKKfJHsB83/wDDqX4S/wDQQ+N//h7fGn/y1o/4dS/CX/oIfG//AMPb40/+WtfSFFHJHsB82S/8EmfgzevD9uX4savbwzxXP2TVPi74u1CzmeKRZE823n1N4pVDqp2SKynHIIrT/wCCln/Juvhz/sqvw4/9TfQq+gK+f/8AgpZ/ybr4c/7Kr8OP/U30KmklsB7X458a6Z8NvBOseItbulsdG0Gym1G/uWVmFvBCjSSOQoJOFUnABPHArwP9lj9r74qftM6X4N8Zj4L6doXwo8fRm80zUpfGayeJLGxkikltby+0trRIIklVY8pBfXEqfaI8oQJDH61+0v8ABqP9oz9nPx98Pprx9Oi8c+HdQ0B7tFDNbC6tpIDIAeCV35x7V4V+wt4t+Nnw7+Efw3+E/jj4Japp1/4N02Hw7rXjJPEmkt4avLezhMCXlisdxJqEjTiOFlhns4NvmOHddg3qDd591y8q2Tvz8135Wh23e/R1NIRa/vX7q3Ly2XneV9Hstr68+3/BT/x345+DPij4w/Dv4K23jD4KeEry9jl1K48XNp3ibXbOwnaK/vtN0s2MkM0a+VO0KT3tvJN5JGyMsmd34l/8FOJ0+NnwU8F/DHwC3xJb48+D7/xd4e1ZtbXS9PsobdLSSN7xjDK8ds8d0C0saSyIwRVgkL/L478D/gf8ff2Uf2Cdf/Zd0r4TT+MbyCHV/D3hTx+uv6XbeFzp1/LM8F1qKPcLqUUsCXLCWK3spg7Qjy3Ifcvb/Cf/AIJ/eJf2ef2t/wBlVdEtpNa8A/Bb4S6x4J1PXpJ4ImN2/wDZSwEwGTzT5v2WZvkVlTGCwyM3SSc0m/d6PZv93VbuulpKnbbWTjd9FX92L5N7vzX8Smlbv7rn8ldpEngH/gpx8WPi5B8T9D8N/s+2DfEH4
Jajdaf4ystU8eCy0JmSIXFsumagthJLeSXFu8cqrNaWqxhiJHQ7N934if8ABYfRtM+Ef7NnjTwV8P8AxL47079pa9Fhotjb3MNrqOnyvYy3EayI26I4ljEUpMqpEokk3OE2tt/swfs1eNfh38bP2v8AV9Y0X7Hp/wAUvFUGpeGJftkEn9p266DZ2hfarlov38UiYlCH5c42kE/Dvi34N/FT9j/9mf8A4Jl+DLjwjp0vxS8FeMJre58OX2sRRxTSppd+0tuLuDzoldoiwRwWTeU3ELuImnK8YKejk8P6/vLqol/hdtLNp2vcVW653HosQ/8AwC3s389X2kfefgT9v3X9N+MXjv4ZfE7wHo3g/wCIvhTwe3jzTLXRPE765pPiTSVLRO8N1LZ2kscsVwvlyRvbfKJI3VnDHbl6/wD8FQP7C/4Jv/C79oL/AIQfzf8AhZMvhiL+wf7Z2/2d/bN5bW2ftPkHzPJ+0bv9Uu/Zj5M5GH4C/Z4+IX7Un7cPjL4weOfBeo/Cjw3F8N5/hp4b0XV7+wvdcujdzpdXmoTiwuLm1iiDLFFEnnu7GORmEYKhvnjxD+y/+0z4o/4Js/Br9niD4Lx2V/8ACbxD4Yi8QeILzxPph0/XtO0jVbd1udJWO4admeOBJ3W9jtSiB0USyFVqqOvIqm7lTv25fa1FPXpen7NvW6u7W1SK2im6eyjO3+L2cHH19/nW1tk76X9j/bw+NXxz8Ff8FXv2afDvgPTPDV54e1vTPFUkemX/AI/1DRrPxJJFY2rP/aEMOm3CR/Zi26A4uC7O+RB1b3z9oP45/G34eaFqV74J+DHhbxavh7SDqOoHV/iB/YqajOIfMNtppjsLppiCrIXvFsl3GMjcrMycJ+3X8H/Hq/tlfs6/GXwd4J1L4i6d8L38RadrmhaPqFhZ6sYdUsY44rmD7fcW1tIsctuqujTo2JgVDbSK80/ah+CHxu+Kn7Uvj8614R+L/jPwB4h0HTbf4eQ+EviivgrSfCd1JDJHqCa4LPULW6n/AH4jkMscWogQsVijLBkfnqSmsOlT+Jc/3802l53TX5cy2fRSjD2qdTbT7rpP57u29tbW1PStW/4KqaT4v+AfwI8S/DPwnd+M/Fv7R5i/4Q7w7f340qKBBB9pvZr+7WOcW8VpEr7zHFM7OEVEbcWXxn4B/HvWIP8Agsx8a/EHxR8It8NrrwP8EtMbWNmof2tpk9vBqd/cNeWVyscclxbmJuslvDKHSRDENoLYPwF/YG+MX7N/7Lv7EXiiHwW+u/EH9m+21DSvFHg201iyjvb+w1OFra5+yXElwlk80JWCYJLOiOquN4bAPbj9kL4s/tYftffHnxL498HL8N/A/wAW/gqnw+0Zv7WtL/U9NZ7i9DrepDK8YuQLgykQtLAFaNRM7B8b4v3K8pYbXlda3b+FNQttdO616uTXS5hhvew8YYh25lS5rb39rBy9LJXtbZX6nU3v/BVPxt4X/Z/8PfHfxF8FbfRv2fdfe0updX/4S/zfFei6VdsEttTvNI+xC3WHc8LyJFqEskUUu/axRox9pxyCWMMpDKwyCDkEV+dXjj4CfHz9o3/gnBo/7J/iP4Uz+GdTm0nTvB3iP4iNrulz+Fhptm8SzX1jFHcnUpZ5oIB5UE9nAqyS/vJFVMt+h+kaZFomlW1nACILSJIYwTkhVAUc/QVvWjTUpKm7xu+V9WvNaW+5Xu9FYypym4xc1aVveS2T02et+q3eiXe78H/4Jwf8kY8a/wDZVfHn/qVapWt+3f8ABTVfiz8Fk1TwrbrP4/8Ah9ep4o8LoW2fa7qBHWWyLdFS7tpLi1YnhRcb+qCsn/gnB/yRjxr/ANlV8ef+pVqlfQFc7SaszU/Pr9hH4a+I/wDgoN8CLj4q237Q3xh8N6Xr/ifX7fStM0fSfC8dpZ2Frq13a2qBbvRp7gOIIY9/mys2/dnHQeyf8O4vE3/R0Xx//wDBd4M/+UFfQ/gL4c6D8LdDl0zw5pGn6Jp817dalJb2cKxRvc3VxJc3EpUcbpJpZHY92cmtqs1QppW5V9wHy9/w7i8Tf9HRfH//AMF3gz/5QUf8O4vE3/R0Xx//APBd4M/+UFfUNFHsKf8AKvuA439nr4I6V+zX8DfCfgDRLnUbzSfCGlwaVa3F+6PdXCRIFDysiIhdsZO1FXJOFA4rzH9hH/kYvj3/ANlV1L/0h0+voCvn/wDYR/5GL49/9lV1L/0h0+tQPoCsbx38RvD3wt0NNT8Ta7o3h3TZLmCyS71S9jtIGnnkWKGIPIwUvJI6oi5yzMAASQK8S/bX/at8UfCT4mfCb4XfDvT9DuPiJ8ZNUu7awv8AXI5J9L8P2Fjb/ab6+nt4pIpbkqhjRIUli3vMu6VAvzeA/wDBWCz+JemfsDQ2vxN1HwNrepQfFjwUNP1Pwvpt1pcF7anXtMOZbO4nuWglWQyphbmZXVUfKFjGpD3pwXRyjH75Ri7enN6XTW6difuwlLqoyl90ZNX9XHbfrazTP0Eorw79qSf4/wBtpviDUvhfrHwe8PWOg6XLd2UfinRNS1yXXrhYi/lyG3u7MafGrrt3qLwuH3bUKbH+bPHP/BZG+1j9mb9lzxxps/gb4S6Z+0RBLLqXjHxzFNf+HPBc0Nk0/wBjlCXFn5ktxKjxwtJcQLtidjuOIyovmTtunFW/xc3L9/K/Ppa7Sbas15qT/wDAUnLz0uj7m8X/ABU8MfD3WdC07X/Eeg6HqHii8/s/RrXUNQitptXudpfyLdHYGaTapbYgLYBOMCsTxT+098NfA3xb0nwBrfxD8DaP4719Fk0zw3fa9a2+r6irFgrQ2ruJpASjgFVOSjehr55/ag+L2veBPH37JVh4y8K/CXxf4j8Y/EJ9HuNX/sqS7i03bpd/Omo6UZX32U8ot4yQXm8tZZIxJNgTN5h/wTa8JfFOL/gqP+11f6v4w+HV/ZweKNDt/ECWvgm7trzVFGgwm0W1uG1SQWqRB03LJHcbyJCvlbwEqkuacovaLlr/AIfZ6d037Te3lbdqZO0brdpNfNz+VvcfX52PvD4efFTwx8XdGuNR8J+I9B8T6faXk2nz3Wk6hFeww3MLbJoGeNmCyIwKshO5TwQDW9X53fCP/goXefCn9iHUNZ8J/DT4eab438Y/G/Wfhl4V0PQ7L+xtGu9SfWbm1jv74IWdm8qCS4uGT55WjYDZu3L9X+BbP4/+F9G8SxeJtR+D3jjUPssEnh+50zTtR8Kw/aCzrPDdwyT6k3lqojdJY5cuWZDEgUStMHzUlVWzS+bcIzsvlJPXva99By92bg+jf3KTjf74tadm7WPYqzvF3i/Sfh/4W1HXNe1TTtE0TSLd7y/1C/uUtrWygRSzyyyuQqIqgksxAABJNfLvgj9qT4s/B/8A4KDeFfgn8V9S+G3i7TviN4R1HxD4f1zwr4dvvD89jdadNCtzbXNtcX18skbxXEbxypMhDRyK0ZBVq+aP2uP2tPjp+2R/wSR+PXxd8M6T8NU+D+vaB4j07R/DV3b3UPiO40WGOezfV21MXDWolLxy3C2Y
s8GIKn2ned1ZYir7OhKtHVKMpeXutxs/+3k16Jv4U2bYekqldUZae9FefvJS0/7dd/w+JpP9QtJ1a11/Sra+sbm3vbK9iSe3uIJBJFPGwDK6MMhlIIIIOCDVivnf4e6d8Xtd/Zq+FFp8N9Y+HHhWzXwfpk11qviXRr3X5JZPssYFuljBdWQRSCH+0G6YgqU8g7vMX56n/wCCw/i/Tv8AgnRpHxH1jR/Bnh7xlB8Sz8K/F2tzJdXHhHwlcQ6lJY3Gtypvjnax/doyxvNGQ1xGjzKA0ldeJpqnXnRjupWXRv3lBPsveaWtt0/h1OTDTdWjCs+qu/L3XO3d+6m9L7W+LQ/Q2ivPf2YvFfiXxv8ACaz1XxJ4k+HvjX7eftOmeI/BccsGk67YyKrxXEcEk9z5X3iuFurhXCLIHXzPKj+fPE/7Xvxq+IX/AAUq+IHwB8AWfw50TSPCvhLR/E48Wa9p13qTae1zJcpJA9jDd25umlMUYQrNAIQkrOZSUjOck1NU+rv+CcnvbZJtp66W30NLrkdR7K34tRW3dtfmfYtFfFPhz/grHeeCP2DPjL8RvH3hW1ufHnwD8RXvgrX9H0SZ7ex17V4ZYIrVrN59zw292bu1YB/MeISsp80pl73xC/an+O37HXiT4U638Ybj4TeIPA3xJ8R2Pg/VLXwzol/pOo+DdR1AhbOT7RcX1zHqNuswaCQiG1f94koUANGCPvSio7S5bPo+ezgl19669LrmtdDl7qk3vHmuu3J8f3f+Ta8t7M+yKK+P/wBnD9qf45ftPfthfGPwtYR/DPwx8Ovgz43XQJ7+60i81HU/EcDWdlc/ZolW9iS1mjEsxa5dZUbzoVW3/dyM32BSWtONTpJJr0aTX3p/1oD0nKH8rafqm0/y/p3t8/8A/BLf/kxLwN/2/wD/AKcLmvoCvn//AIJb/wDJiXgb/t//APThc19AUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABXM/Fv4P+Hfjl4Ml0HxNpy6hYPIk8RWR4LiynQ7o7i3mjKyQTxt8ySxsrowBVga6aigDwbTfi/wCI/wBlLUYNF+KuoNrfgueRbfSviE0SRfZyxCpb6yiBUgkJIC3iKtvIeHFu5jWb3gNuGRyD0NQ6lptvrOnT2d5BDdWl1G0M8EyB45kYEMrKeCpBIIPBBrwc+DPEv7FR+0eELLU/GPwlj5uPC0Aa41bwlH3fSwctc2i9TYkmSNQRbFgsdoQD3+sP4kfErQfhB4KvvEXibVbPRdF01A9xdXL7UXJCqoHVnZiqqigs7MqqCSBXC+Lf2yPBWk+AdA1nQLxvHF54yDjwzpGgFLm/1+ROHWJCyiNYzxNLMUjg/wCWrJVH4b/s+ax4v8aWPj34r3FhrHiqwczaJoVk7SaH4PyCubcOqm4vNpKteSKGwWWJIEd1cAxE8B+Jf20XF144sNR8I/CpjutfB8+YdT8Up2k1fHMNs3UWAO51x9pOGe1T3qxsYdMsoba2hit7e3RYoookCJEijAVQOAAAAAKlooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACvn/AP4KWf8AJuvhz/sqvw4/9TfQq+gK+f8A/gpZ/wAm6+HP+yq/Dj/1N9CoA+gKKKKACiivIfj5+3b8MP2aPHVl4Y8V63qn/CRXumS62dN0bw7qWvXVlp0biN766jsLedrS1Dnb59wI4yVfDHY2FdXt/Xf8h2e569XEfFP9nHwZ8avG3gbxH4m0b+0tZ+G2qvrXhy4+1zw/2ddvA9u0u2N1WTMUjrtkDL82cZANfM37AX/BRnTvGf7HnxS+LvxO+Imh3vg3w18RvEmmaZ4gjFv9lfSYL8xWEVv9lT/SmZDGkWwSSzs6BfMZhn3/AOAX7ZXw+/aW8Sa7onhfUdah8Q+Gkhm1PRNf8Oal4c1a0hmB8qc2eo28FwYX2sFlEZjJRgGypAaV1GS3tGS7q6U16NaPyf3ilZc0XteUfJ2bi/VP8Uz1GiuF/aL/AGmPAv7JPwzfxl8R/Edl4T8LQ3tpp82qXiube3luZ0gh8xlU+WhkkUGRsIgJZmVQSPMbb/gqx8CZ/EWqaLL4w1Gw17TbSC/h0fUPC+r2Op69bzyvDDNpVrNapNq0ckiFUawScMSmCd65E77f11/Jp/MHdav1/G356ep9EUV5j8Kf2yvhn8ZvhHr3jnRvFdnb+GfCdxd2niC41mCbRJvDk1rzcRX9vepDNZvGuHK3CIdjK/3WVjh/AT/goT8KP2lPG9r4b8Ma1rsWualpX9uabZ694V1bw7JrNhlQbqy/tC2gF5CN6bntzIqiRCSA6kuzcuXrv8rN3+5N+ibBuy5nte3zva3rfT10PaqK8e+FX7evws+NvxGPhjwxr+o6ldvc3VlaagfD+pQaHq1xas63EFlqklutjeTRmOXdHbzyOBBMcYikK+w0t0pLZ7eYPRuL3W4UUUUAfP8A/wAE4P8AkjHjX/sqvjz/ANSrVK+gK+f/APgnB/yRjxr/ANlV8ef+pVqlfQFABRRRQAUUUUAFfP8A+wj/AMjF8e/+yq6l/wCkOn19AV8//sI/8jF8e/8Asqupf+kOn0AUf24v2S/F3xc+K/wh+K/w11DQYPiL8GNTvJrLTdfuJ7bSfEOnahCttf2U08Mcsls5jVJI5lhm2vEAYyrkjlv2s/2Y/jX+2z+y4fDfiWH4W+FPEMfjjw7r9np+mavf6jZWlhp2pWl3Osl9JaQvPPIIJSgFpCqlkQs3MlfW9FEPdaf8slJeTTUvuur221b3bYT95OL6pxfo01+r1321skj4N/am/wCCZXjD43/te/EHxdqXhD4GfGPwr450fTdN0BvibJc3kvwskijkhu307TjZ3EFykpMVyypcWLySKUaUALIu9+z5+xp8Xv2YP+CdPwp+CS+FvgX8W7Lw/wCHJvD/AIw0TxRql7Y6ZqZMgaKSG4+wXYliC+YrwTWY3mRWEqeWVk+1KKlRSpuktna/y5vl9qWu7vdu6TKlLmqKo91t90V/7bHyVtN3f4I+DX/BJvxl8IPg7+yh4bTxF4XvJPgj4/vvGniBIzcW9lBDd22pA2GlxlHbyYJb5I4xKY8xRFjtJ2V71+y3+yx4g+CP7V37RPjrVbzRrjSfi5r2lapo8NpLI9zbRWulwWcguFaNVVjJExUIzgqQSQeK99orTms5Nfav+PJ/8rj+Pcjl28v0cn+c5f0j4Dg/4JM+OU/Y3l8MQeKPDGl/Ezwl8ZNT+MPgu/Q3FzpX2l9Uury1tb0bI5AkkFy8MxRW8syFk83YA3p/7R/wV/aN/bB/ZM+J/gvUb/4d/B7X9e0mK18OzeFvEuqau7XKTCWX7Tfm1sJILedFFuyQwPIqSSuJGJEY+raKzUEqapfZVtPSMYX8/djFNO60va5optVPbL4rt/fJzt6c0m9NdWr
2Pz6/Zw/4JS6/8M/2/wD4b/GK1+GH7OPwW8PeD/D2s6BqXh34d+dNcao90lv5N3LdnT7ITnKSL5TwqYgpYTTGYpDRvP8Agmn8fvBf7BHxJ/Zc8E618KLP4d61Brlt4a8WX93eNq8FhftLOumXGmpaeQhEk8kJvUupCIgHFqX+UfonRRViqsPZy2aa++Tl9927eTa+FtE0X7KSnDdNP5xXKv8AyXT8fi1Pg34/f8E1vHPjn4kfCrWbnwv8Hfjf4V8GfDuPwleeAviJqt1baDp2rxtCV1q0QaffRTTNGJYG822jdYwm2QbnWtb9iH9hb4vfsAfsear4E8O2fwS8Xahf+Pdb1u40aY3uhaDe6PfySsltAY4LlrB03RnyjBdxhEaEMdwmX7corSU3Jyk95c1/Pmnzv8fnbTZK0RioxjBfZtb5RcF+D6W113u38zf8Ewv2IdS/Yl+Hfjy31O28IaBL4+8Y3nimLwp4Q8z/AIR3whDKkUMdlZF4oS42QLI7iCBWkkfbEoAz87hvippv/Bfn42aj8MbfwNrT2vww8MR6poXie+udKh1GN7i/8uWK/t4Ll7eSJlJ2m1lWVZGGYyFcfpDWHYfDPw3pXj/UPFdr4f0S28U6vaw2N/rMVhEmoXtvCWMUMs4XzHjQu5VWYhSzYAyal3dWnPpBNfL2cqaWvZNavXTe+o+VeznDrJp/P2im9u9npt5WPkDRv+CSuo+Pf+CfHxc+GPj/AMZ2qfED46eIbrxt4g17RLNmstF1mWa3nt1tY3ZHmtrVrS2RfMKNMsRJEe/auz4x/ZX+N37Y+p/C3R/jZB8K/DnhH4a+ItP8X6jL4T1u+1e98Z6lYfNaoYrmytU06288+fIoku3YIsQZQWkP2FRTi+WUXHaPLZdE4JKDXnFJevLHmvZDleSknvLmu+/P8f3/AIXfLa54F+xp+yx4g/Z3+L/7QPiDWrzRrqz+K3j0+KdISxlkeW3tf7Os7XZOHjULLvt3OELrtK/NkkD32iil9mMOkVGK9IpRX4IN5Sl3bfzbbf4s+f8A/glv/wAmJeBv+3//ANOFzX0BXz//AMEt/wDkxLwN/wBv/wD6cLmvoCgAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACviP8A4KBfAzwT+0F/wUB+EejePfB3hbxvo9p8PfFd7BY6/pMGpW0M41Lw8glWOZGVX2uy7gM4YjOCa+3K+UP2oLaSb/gpH8LWSN2Vfhr4rBIUkAnVPDmP5H8qwxN/ZOwM87/4dl/s3f8ARvvwQ/8ACF0v/wCMV6v/AMEg/D1h4Q/YoGkaTY2el6TpHj/x5YWFlaQrBb2VvF4x1qOKGKNQFSNEVVVVACqoAAArc+yS/wDPKT/vk1R/4JVRNB+ybqCurI3/AAsn4gnDDBwfGmtkVxZe5OTuSj6Oooor1CgooooAKKKKACiiigAooooAKKKKACiiigAooooAK+f/APgpZ/ybr4c/7Kr8OP8A1N9Cr6Ar5/8A+Cln/Juvhz/sqvw4/wDU30KgD6AooooAK+A/DHxv8I/sUf8ABXf9ozU/i9r2keC7P4qaD4Z1DwRqmsMttHrsFjbS2t3p1nIT/pFzHcSK/wBljBmb7ShCMGWvvyilqpKS819/5a217XXVj0cXF9bfg0/0/XofiBo3hzWPEH/BOay8Y6BF438AeDvh/wDtY6t4v8UDStDhGr+E9Eiv7oPdvYTwTIPsbSxSyRPBIIxEzNH+6OPvD9i3wb8Jvil+2Nc/FLwr+0z4o/aL8ZaR4Pl8NTXMd1oF7o+l2E15DcCOabRdNtoVuTLHmOOaUyFDKVQqGZfs+inQtShGEfspW7/wo0n5axj0V9Wr7WVa9STm923ftb2kqiXycu9tE7LW/wAX/wDBeWwg1T9hPTba5hiuLa4+JHguKWKVA6So3iGwDKyngggkEHrTf2g9Ktbr/gux+zfcSW0ElxB8NvGTxStGC8bCfSlyp6g7ZJBx2dh3NfaVFFL3Gn2lOX/gVJU7fK1/PbzFVXOmu8YR/wDAajqX+e3lv5H45ftJ/Czxr8b/AAT/AMFOPDfgS213U9cPjzwtfpp2kRCe+vbeCw0q4uooImBEkrQQviPa3mFQm1t2D9Hfs6J8E/2rv2l/hP450n9rjxz8fPFHgKC/1jRNBiPhyUaVHd2DQTnUoNJ0i3uLUeW4UJdPF++WNMGT5D9/0Uqa5aapvW0Yr5xgoX9Hyp26a663Tn7zk/5pTb9Jycrfi1f8Oh+X/wCxB8YNM/Zx/bC+HHwj/Z2+Mmm/Gj4I+NpNVutS8A3FnHPrXwZgRZ7t5ZLqLZPawNd3EVsLPU4vORsIrbg4H6gUUVd/dUeq6/1+t352skPWbn3/AKv8/Ky62u22UUUVIHz/AP8ABOD/AJIx41/7Kr48/wDUq1SvoCvn/wD4Jwf8kY8a/wDZVfHn/qVapX0BQAUUUUAFFFFABXz/APsI/wDIxfHv/squpf8ApDp9fQFfP/7CP/IxfHv/ALKrqX/pDp9AH0BRRRQAUUUUAeWftRftdeG/2VNP8Nx6pZa34i8TeN9VXRPDHhjQoYp9W8Q3hVpGjhWWSKFEjiV5JJp5YoY1XLSLlQeN8Yf8FD9L+Fnwrt/E3jf4b/FLwRPN4x0vwUdH1WxsWu1utRmgitrhJre7ls57bNwheSC4kKbZEK+YjIOD/wCCi3g7xR4L/az/AGbPjfpnhrX/ABj4R+Fepa1p3irTtB06XU9UsrbVrSO1j1GGzhDTXIgkRRIkEckojmdlUhWrnf8Agoz4k1r9sv8AYmm1/wCH/wAPviZff8K48eeHPFKabqnhi60bVfElrpuo2t3eCysLxYrxnWIShVlhjMrxFYhJuUkpNXi57c8VL+7Hngm//AW3fbW28ZFTTs1Hflk15ytKy+9L3d+rdpI+l/F37VPh7wX+1Z4L+D91Z6zJ4l8daHqfiCwuYoYzYwwWD2yTLK5kDhybqPaFRgQGyVwM+Q61/wAFbvBOiwXniI+CPibP8I9O8Qf8I1dfE+OxsR4YhuRMts8m1rsag9ql0fIa6SzaDerESGNTJXmD+LvEf7Un/BYb4SeMdE+G3xP0b4Z+H/hz4n0o+K9d8NXuhpNfXM2nkwG2u4o7m3KiIbHuIoxMS/lb1iZq8C/ZA/Yg8AfDf9lGD9n346fDT9qrxH4y0O9utCvNH0TXfG8/gvxdA9209tdW09tdx6FBbSRSxMyXElv5UiSCVVYZYoJy5HPfXTq/3so6f9uKPrzKSukyZuK52tlbXtemm3/4Fda6K3K7No+9/wBo3/goh4Z/Z0/aM8IfCh/CXxB8Y+OfHujXms6DYeG9OguUvVtZIkkhaaWeKOBiJd4knaOAKjbpVYorO+Hv/BSH4c+KPgj8SPG/iQ6z8NoPg5e3On+OdL8UQRJqHhmaGNZtsgtZJ4pxLD
JDJE1tLMsomRVJfKDzj4t/CvxDdf8ABbf4IeJ7Lw7rc3hLRvhh4k0271iKxlfT7K4kurExQSThTGkjqjFVZgzBWIBwa8E/aN/Yt+Jv7QWhf8FCPD/hrw1qMOreN/EXhfVPCn9owNZWfij7Bp+nTyQQTy7InWRrZ7cvu2K7EORg1nCX7rmlpeM3e23LW5E7dfcbbW7srW15qir1eR9JQW+96alLV7e9s+mt79PtH4KftoXXxauI31T4N/GfwDpl7o0mvabqOvaRZ3EOpW8YjYoI9Ou7ueC4KSoyW91FDNJ86ojPG6rh6R/wUX063+PXgrwJ4u+GHxX+HP8Awsua6tvCOu+I7LTl0vXZ4IftH2f/AEa9nubOeSAPLHFewW8hEUilVkUxiLw1+3Tq/wAcfhbqy/Dj4WfFG28fw+FLvVbWw8Z+CtS8NaZY6mtuDb6fPcXyW6Ts87CMmzeZQEdmdFKM3wrKnxg/ag+M/wCx14m1m4/aX8Yap4e8e2Gu+PNM8SfC6Hwx4d8DXE2m3YdLcnTLa7ljjkaSISi5vIERR5sweWBpOiEb4iNO2l4p+kpOPNdb2turRva/xK2Tclh5VXuoydtrOMU0rPa76O8nrZ+6z7+8Gft/aP8AEz4y654Z8L+A/iP4m8P+FvEj+ENa8aadZWcuh6ZqyBPNtmjN0NQkEbyxxyTRWbwRszF5FSOV097r81vi14J1Dwl+3Lo/iX9nTw9+0H4B+Jfib4hQ2/xF8PXfh6/j+HfiLSxMgv8AWbqeeJ9KEz2dqhhmsblbp2mVHj81pVT9Kazp60ITe+z83aLbXeLbfK/Kz96Lbup7teUFstvS8lZ/3tNem0lpJJFFFFABRRRQB8//APBLf/kxLwN/2/8A/pwua+gK+f8A/glv/wAmJeBv+3//ANOFzX0BQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABXlHivxLp2j/ALY3hu2vNQs7S4ufBerSxRSzqkkqJfaaHZVJyQpdASOhdfUV1/xe+N/gv9nzwa/iPx74v8L+CPD0cqQPqmv6rBptmsj8IhmmZUDNg4GcntXxv4x/ap+GH7T/APwUe+H0vw0+I/gL4hxaJ8NvE6ai/hnxBaastg0uqeHzEJjbyP5Zfy5Nu7G7Y2M4NY4iXLTbC9jjfgx4L+HfxM1Xw/aaz4n8Za7r/wAMrvRYPA+p6p8CfE2hDwxpttqlr5ixT3sW19RvAyQ3d7DJGrQKZDbpDFNn7K/Yw1e11z4O6rcWVzb3dufHHi+MSwSCRCyeJdTRxkcZVlZSOxBHauTrxT/gnz/wUH+AnwL+A2veFvG/xv8AhD4O8T6Z8SfHpvNI1zxlp2n39p5njDWZY/MgmmWRN8ciOuVGVdSOCDWeFxbqx9na1rv5u1/vtd93ru3cb5puX9W7H3fRUOnajb6vp8F3aTw3VrdRrNDNC4eOZGGVZWHBBBBBHBBqauoAooooAKKKKACiiigAooooAKKKKACiiigAooooAK4n9oX4C6L+0v8ACy48Ja/Pq1pYzX+n6pFc6Zdm0vLO6sL6C/tZopRkq0dxbQuOCDtwQQTXbUUAfP8A/wAMCf8AVav2gP8Awr//ALTR/wAMCf8AVav2gP8Awr//ALTX0BRQB8//APDAn/Vav2gP/Cv/APtNH/DAn/Vav2gP/Cv/APtNfQFFAHz/AP8ADAn/AFWr9oD/AMK//wC00f8ADAn/AFWr9oD/AMK//wC019AUUAfP/wDwwJ/1Wr9oD/wr/wD7TR/wwJ/1Wr9oD/wr/wD7TX0BRQB8/wD/AAwJ/wBVq/aA/wDCv/8AtNH/AAwJ/wBVq/aA/wDCv/8AtNfQFFAHz/8A8MCf9Vq/aA/8K/8A+00f8MCf9Vq/aA/8K/8A+019AUUAfP8A/wAMCf8AVav2gP8Awr//ALTR/wAMCf8AVav2gP8Awr//ALTX0BRQBw/7PHwB0X9mj4Zp4X0K51i+tf7QvtVuLvVbw3d5e3V7dy3dzNLIQNzPNPI3AAGcYruKKKACiiigAooooAK+f/2Ef+Ri+Pf/AGVXUv8A0h0+voCvE/HX/BOj4MfEjxzq/iTV/BFrNrWvXAu9QuYL67tftc2xU8xlilVdxVFBOMnHOTQB7ZRXz/8A8Ot/gT/0I3/la1D/AOP0f8Ot/gT/ANCN/wCVrUP/AI/QB9AUV8//APDrf4E/9CN/5WtQ/wDj9H/Drf4E/wDQjf8Ala1D/wCP0AfQFFfP/wDw63+BP/Qjf+VrUP8A4/R/w63+BP8A0I3/AJWtQ/8Aj9AH0BRXz/8A8Ot/gT/0I3/la1D/AOP0f8Ot/gT/ANCN/wCVrUP/AI/QB9AUV8//APDrf4E/9CN/5WtQ/wDj9H/Drf4E/wDQjf8Ala1D/wCP0AfQFFfP/wDw63+BP/Qjf+VrUP8A4/R/w63+BP8A0I3/AJWtQ/8Aj9AH0BRXz/8A8Ot/gT/0I3/la1D/AOP0f8Ot/gT/ANCN/wCVrUP/AI/QB9AUV8//APDrf4E/9CN/5WtQ/wDj9H/Drf4E/wDQjf8Ala1D/wCP0AfQFFfP/wDw63+BP/Qjf+VrUP8A4/R/w63+BP8A0I3/AJWtQ/8Aj9AB/wAEt/8AkxLwN/2//wDpwua+gKwfhh8MNA+C/gDS/C3hbS7XRPD+iQC3sbK3BEdumScDJJOSSSSSSSSTk1vUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQBj+Pvh7oPxW8IX3h/xPomk+ItC1OPyrzTtTtI7u1uk/uvHICrD6ivnLxN+wt4r+DW67+C/i3OmR/N/wAIV4yup77SiP7lpf8Az3lj7BvtUKABUgQc19S0VM4RmrSQHxlYeKPjN4+v20DQ/gzqnhrX4Ds1DU/Fmp20Wgaf6PFNavNNfZHzIkUag42yvbscD0PwD/wTr0K+1a1134s6zc/F7xDayLcW9vqdstt4c0uVTlWtdKVmi3KwBSW5a5nQ/dmA4r6KorKnhqdPWKFYQDaMDgDoKWiitxhRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUV8iS/s1eDf2of2//jLH4
###Code
import warnings
warnings.filterwarnings('ignore')

import ee
ee.Authenticate()
ee.Initialize()
###Output
To authorize access needed by Earth Engine, open the following URL in a web browser and follow the instructions. If the web browser does not start automatically, please manually browse the URL below.
The authorization workflow will generate a code, which you should paste in the box below.
Enter verification code:
Successfully saved authorization token.
###Markdown
Let's access the Google Drive storage service.
###Code
# Authenticate to Google Drive
# Mount Google Drive
from google.colab import drive   # import drive from google colab
# default location for the drive
ROOT = "/content/drive"
print(ROOT)          # print the ROOT path (optional)
drive.mount(ROOT)    # mount Google Drive at /content/drive
###Output
/content/drive
Mounted at /content/drive
###Markdown
Change to our working folder. A folder named Colab_Notebooks must be created, and inside it a folder named Taller_GEE_Inc_2021; otherwise, configure your own path.
###Code
%cd "drive/MyDrive/Colab_Notebooks/Taller_GEE_Inc_2021"
###Output
/content/drive/MyDrive/Colab_Notebooks/Taller_GEE_Inc_2021
###Markdown
Choice of basin type using the WWF HydroSHEDS Basins product, level 2 or 5
###Code
basin_id = [6020029280, 6020017370, 6020021870, 6020008320, 6020014330, 6020000010, 6020006540];

geometry = ee.Geometry.Polygon(
        [[[-72.38057697749733, 15.92409528792919],
          [-78.00557697749733, 4.526872170566439],
          [-86.97042072749733, -6.87761574728738],
          [-76.59932697749733, -29.568915186977126],
          [-77.01620465827378, -57.81857384216577],
          [-59.08651715827378, -57.15724624674272],
          [-45.37557965827378, -36.9105889495516],
          [-30.43417340827378, -7.7493629697489705],
          [-37.64120465827378, 2.948256110220539],
          [-54.69198590827378, 13.031053599052584]]]
)

basins_l5 = (
    ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_5").filterBounds(geometry))

basins_l2 = (
    ee.FeatureCollection("WWF/HydroSHEDS/v1/Basins/hybas_2")
    .filter(ee.Filter.inList('HYBAS_ID', basin_id))
)

# Choice of basin detail level: to process level-5 basins assign basins = basins_l5,
# to process level-2 basins assign basins = basins_l2
basins = basins_l2

# Function to filter the FIRMS collection between the required dates
def compute_burned_area(start_date, end_date, basins):
    burned_area_collection = (
        ee.ImageCollection("FIRMS")
        .filterDate(start_date, end_date)
        .select('T21')
    )

    def filter_burned_area(img):
        return img.gt(0)

    burned_area_collection = burned_area_collection.map(filter_burned_area)
    burned_area_img = burned_area_collection.max()

    result = burned_area_img.reduceRegions(**{
        'collection': basins,
        'reducer': ee.Reducer.count(),
        'scale': 1000
    })
    data = result.getInfo()
    return data

# Function to take the computed data and gather it into one dictionary
def format_columns(result_data):
    result_values = {}
    for feature in result_data['features']:
        prop = feature['properties']
        result_values[prop['HYBAS_ID']] = prop['count']
    return result_values

# Import the modules used to split the collection into monthly chunks and to measure processing times
from dateutil import rrule
import datetime
import time

# Define the dates of interest
start_date = datetime.datetime(2001, 2, 1)
end_date = datetime.datetime(2001, 4, 30)

start = '2001-01'
array = []
index = []

# Define a loop to step month by month through the period of interest
for dt in rrule.rrule(rrule.MONTHLY, dtstart=start_date, until=end_date):  # iterate monthly (dt) between the start and end dates
    start_time = time.time()              # store the time at which processing started
    end = dt.strftime('%Y-%m')            # convert the selected month to a string in YYYY-MM format
    data = compute_burned_area(start, end, basins)  # compute the burned area for that time period
    values = format_columns(data)         # put the generated data into dictionary format
    total_time = time.time() - start_time
    print("time=%s" % (total_time))
    index.append(start)
    array.append(values)
    start = end

# Arrange the hot-spot data using the pandas module
import pandas as pd
df = pd.DataFrame(array, index=index)
df.to_csv('partial_burned_area.csv')
df

# Arrange the data to hand over to the "Superset" data compiler
columns = list(df.columns)
array = []
area = 1  # 250 * 250 / 1000000  # km2
for c in columns:
    subset = df[[c]]
    subset.rename(columns={c: 'foco_calor'}, inplace=True)
    subset['basin_id'] = c
    subset['foco_calor'] = subset['foco_calor'] * area
    array.append(subset)

total = pd.concat(array)
total.to_csv('hot_spot.csv')
total
###Output
_____no_output_____
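###Markdown
As a quick sanity check, the two helper functions defined above can be run for a single month before launching the full monthly loop. The sketch below is illustrative and assumes the same '2001-01' to '2001-02' window that the first iteration of the loop above uses.
###Code
# Illustrative sanity check: count FIRMS hot-spot pixels per basin for one month,
# using only the functions and variables already defined in this notebook.
single_month = compute_burned_area('2001-01', '2001-02', basins)
single_values = format_columns(single_month)
for hybas_id, count in single_values.items():
    print(hybas_id, count)   # one line per HYBAS_ID with its hot-spot pixel count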
course_4/assessment_2.ipynb
###Markdown
The class `Pokemon` is provided below and describes a Pokemon and its leveling and evolving characteristics. An instance of the class is one pokemon that you create.

`Grass_Pokemon` is a subclass that inherits from `Pokemon` but changes some aspects; for instance, the boost values are different.

For the subclass `Grass_Pokemon`, add another method called `action` that returns the string `"[name of pokemon] knows a lot of different moves!"`. Create an instance of this class with the `name` as `"Belle"`. Assign this instance to the variable `p1`.
###Code
class Pokemon(object):
    attack = 12
    defense = 10
    health = 15
    p_type = "Normal"

    def __init__(self, name, level = 5):
        self.name = name
        self.level = level

    def train(self):
        self.update()
        self.attack_up()
        self.defense_up()
        self.health_up()
        self.level = self.level + 1
        if self.level%self.evolve == 0:
            return self.level, "Evolved!"
        else:
            return self.level

    def attack_up(self):
        self.attack = self.attack + self.attack_boost
        return self.attack

    def defense_up(self):
        self.defense = self.defense + self.defense_boost
        return self.defense

    def health_up(self):
        self.health = self.health + self.health_boost
        return self.health

    def update(self):
        self.health_boost = 5
        self.attack_boost = 3
        self.defense_boost = 2
        self.evolve = 10

    def __str__(self):
        self.update()
        return "Pokemon name: {}, Type: {}, Level: {}".format(self.name, self.p_type, self.level)

class Grass_Pokemon(Pokemon):
    attack = 15
    defense = 14
    health = 12

    def update(self):
        self.health_boost = 6
        self.attack_boost = 2
        self.defense_boost = 3
        self.evolve = 12

    def moves(self):
        self.p_moves = ["razor leaf", "synthesis", "petal dance"]

    def action(self):
        return "{0} knows a lot of different moves!".format(self.name)

p1 = Grass_Pokemon("Belle")
###Output
_____no_output_____
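###Markdown
A minimal usage sketch of the classes above (added for illustration and assuming the instance `p1` defined in the previous cell; it is not required by the assessment):
###Code
# Illustrative usage of the classes defined above (not part of the assessment itself).
print(p1.action())   # Belle knows a lot of different moves!
print(p1)            # Pokemon name: Belle, Type: Normal, Level: 5
for _ in range(7):   # seven training sessions take Belle from level 5 to level 12
    result = p1.train()
print(result)        # (12, 'Evolved!') because Grass_Pokemon evolves every 12 levels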
_solved/case4_air_quality_processing.ipynb
###Markdown CASE - air quality data of European monitoring stations (AirBase)> *© 2021, Joris Van den Bossche and Stijn Van Hoey (, ). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*--- **AirBase** is the European air quality database maintained by the European Environment Agency (EEA). It contains air quality monitoring data and information submitted by participating countries throughout Europe. The [air quality database](https://www.eea.europa.eu/data-and-maps/data/aqereporting-8/air-quality-zone-geometries) consists of a multi-annual time series of air quality measurement data and statistics for a number of air pollutants. Some of the data files that are available from AirBase were included in the data folder: the **hourly concentrations of nitrogen dioxide (NO2)** for 4 different measurement stations:- FR04037 (PARIS 13eme): urban background site at Square de Choisy- FR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia- BETR802: urban traffic site in Antwerp, Belgium- BETN029: rural background site in Houtem, BelgiumSee http://www.eea.europa.eu/themes/air/interactive/no2 ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Processing a single fileWe will start with processing one of the downloaded files (`BETR8010000800100hour.1-1-1990.31-12-2012`). Looking at the data, you will see it does not look like a nice csv file: ###Code with open("data/BETR8010000800100hour.1-1-1990.31-12-2012") as f: print(f.readline()) ###Output _____no_output_____ ###Markdown So we will need to do some manual processing. Just reading the tab-delimited data: ###Code data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')#, header=None) data.head() ###Output _____no_output_____ ###Markdown The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names. EXERCISE: Clean up this dataframe by using more options of `pd.read_csv` (see its [docstring](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)) specify the correct delimiter specify that the values of -999 and -9999 should be regarded as NaN specify our own column names (for how the column names are made up, see http://stackoverflow.com/questions/6356041/python-intertwining-two-lists) ###Code # Column names: list consisting of 'date' and then intertwined the hour of the day and 'flag' hours = ["{:02d}".format(i) for i in range(24)] column_names = ['date'] + [item for pair in zip(hours, ['flag' + str(i) for i in range(24)]) for item in pair] data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t', header=None, names=column_names, na_values=[-999, -9999]) data.head() ###Output _____no_output_____ ###Markdown For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data). **EXERCISE**:Drop all 'flag' columns ('flag1', 'flag2', ...) ###Code flag_columns = [col for col in data.columns if 'flag' in col] # we can now use this list to drop these columns data = data.drop(flag_columns, axis=1) data.head() ###Output _____no_output_____ ###Markdown Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index. Here we have a wide and long dataframe, and want to make this a long, narrow timeseries. 
REMEMBER: Recap: reshaping your data with [`stack` / `melt` and `unstack` / `pivot`](./pandas_07_reshaping_data.ipynb) EXERCISE:Reshape the dataframe to a timeseries. The end result should look like: BETR801 1990-01-02 09:00:00 48.0 1990-01-02 12:00:00 48.0 1990-01-02 13:00:00 50.0 1990-01-02 14:00:00 55.0 ... ... 2012-12-31 20:00:00 16.5 2012-12-31 21:00:00 14.5 2012-12-31 22:00:00 16.5 2012-12-31 23:00:00 15.0 170794 rows × 1 columns Reshape the dataframe so that each row consists of one observation for one date + hour combination When you have the date and hour values as two columns, combine these columns into a datetime (tip: string columns can be summed to concatenate the strings) and remove the original columns Set the new datetime values as the index, and remove the original columns with date and hour values**NOTE**: This is an advanced exercise. Do not spend too much time on it and don't hesitate to look at the solutions. Reshaping using `melt`: ###Code data_stacked = pd.melt(data, id_vars=['date'], var_name='hour') data_stacked.head() ###Output _____no_output_____ ###Markdown Reshaping using `stack`: ###Code # we use stack to reshape the data to move the hours (the column labels) into a column. # But we don't want to move the 'date' column label, therefore we first set this as the index. # You can check the difference with "data.stack()" data_stacked = data.set_index('date').stack() data_stacked.head() # We reset the index to have the date and hours available as columns data_stacked = data_stacked.reset_index() data_stacked = data_stacked.rename(columns={'level_1': 'hour'}) data_stacked.head() ###Output _____no_output_____ ###Markdown Combine date and hour: ###Code # Now we combine the dates and the hours into a datetime, and set this as the index data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['hour'], format="%Y-%m-%d%H") # Drop the origal date and hour columns data_stacked = data_stacked.drop(['date', 'hour'], axis=1) data_stacked.head() # rename the remaining column to the name of the measurement station # (this is 0 or 'value' depending on which method was used) data_stacked = data_stacked.rename(columns={0: 'BETR801'}) data_stacked.head() ###Output _____no_output_____ ###Markdown Our final data is now a time series. In pandas, this means that the index is a `DatetimeIndex`: ###Code data_stacked.index data_stacked.plot() ###Output _____no_output_____ ###Markdown Processing a collection of files We now have seen the code steps to process one of the files. We have however multiple files for the different stations with the same structure. Therefore, to not have to repeat the actual code, let's make a function from the steps we have seen above. EXERCISE: Write a function read_airbase_file(filename, station), using the above steps the read in and process the data, and that returns a processed timeseries. ###Code def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. """ ... return ... def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. 
""" # construct the column names hours = ["{:02d}".format(i) for i in range(24)] flags = ['flag' + str(i) for i in range(24)] colnames = ['date'] + [item for pair in zip(hours, flags) for item in pair] # read the actual data data = pd.read_csv(filename, sep='\t', header=None, na_values=[-999, -9999], names=colnames) # drop the 'flag' columns data = data.drop([col for col in data.columns if 'flag' in col], axis=1) # reshape data_stacked = pd.melt(data, id_vars=['date'], var_name='hour') # parse to datetime and remove redundant columns data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['hour'], format="%Y-%m-%d%H") data_stacked = data_stacked.drop(['date', 'hour'], axis=1) data_stacked = data_stacked.rename(columns={'value': station}) return data_stacked ###Output _____no_output_____ ###Markdown Test the function on the data file from above: ###Code import os filename = "data/BETR8010000800100hour.1-1-1990.31-12-2012" station = os.path.split(filename)[-1][:7] station test = read_airbase_file(filename, station) test.head() ###Output _____no_output_____ ###Markdown We now want to use this function to read in all the different data files from AirBase, and combine them into one Dataframe. **EXERCISE**:Use the [pathlib module](https://docs.python.org/3/library/pathlib.html) `Path` class in combination with the `glob` method to list all 4 AirBase data files that are included in the 'data' directory, and call the result `data_files`.Hints- The pathlib module provides a object oriented way to handle file paths. First, create a `Path` object of the data folder, `pathlib.Path("./data")`. Next, apply the `glob` function to extract all the files containing `*0008001*` (use wildcard * to say "any characters"). The output is a Python generator, which you can collect as a `list()`. ###Code from pathlib import Path data_folder = Path("./data") data_files = list(data_folder.glob("*0008001*")) data_files ###Output _____no_output_____ ###Markdown **EXERCISE**:* Loop over the data files, read and process the file using our defined function, and append the dataframe to a list.* Combine the the different DataFrames in the list into a single DataFrame where the different columns are the different stations. Call the result `combined_data`.Hints- The `data_files` list contains `Path` objects (from the pathlib module). To get the actual file name as a string, use the `.name` attribute.- The station name is always first 7 characters of the file name. ###Code dfs = [] for filename in data_files: station = filename.name[:7] df = read_airbase_file(filename, station) dfs.append(df) combined_data = pd.concat(dfs, axis=1) combined_data.head() ###Output _____no_output_____ ###Markdown Finally, we don't want to have to repeat this each time we use the data. Therefore, let's save the processed data to a csv file. ###Code # let's first give the index a descriptive name combined_data.index.name = 'datetime' combined_data.to_csv("airbase_data_processed.csv") ###Output _____no_output_____ ###Markdown CASE - air quality data of European monitoring stations (AirBase)> *DS Data manipulation, analysis and visualization in Python* > *May/June, 2021*>> *© 2021, Joris Van den Bossche and Stijn Van Hoey (, ). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*--- **AirBase** is the European air quality database maintained by the European Environment Agency (EEA). It contains air quality monitoring data and information submitted by participating countries throughout Europe. 
The [air quality database](https://www.eea.europa.eu/data-and-maps/data/aqereporting-8/air-quality-zone-geometries) consists of a multi-annual time series of air quality measurement data and statistics for a number of air pollutants. Some of the data files that are available from AirBase were included in the data folder: the **hourly concentrations of nitrogen dioxide (NO2)** for 4 different measurement stations:- FR04037 (PARIS 13eme): urban background site at Square de Choisy- FR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia- BETR802: urban traffic site in Antwerp, Belgium- BETN029: rural background site in Houtem, BelgiumSee http://www.eea.europa.eu/themes/air/interactive/no2 ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Processing a single fileWe will start with processing one of the downloaded files (`BETR8010000800100hour.1-1-1990.31-12-2012`). Looking at the data, you will see it does not look like a nice csv file: ###Code with open("data/BETR8010000800100hour.1-1-1990.31-12-2012") as f: print(f.readline()) ###Output 1990-01-01 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 ###Markdown So we will need to do some manual processing. Just reading the tab-delimited data: ###Code data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')#, header=None) data.head() ###Output _____no_output_____ ###Markdown The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names. EXERCISE: Clean up this dataframe by using more options of `pd.read_csv` (see its [docstring](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)) specify the correct delimiter specify that the values of -999 and -9999 should be regarded as NaN specify our own column names (for how the column names are made up, see http://stackoverflow.com/questions/6356041/python-intertwining-two-lists) ###Code # Column names: list consisting of 'date' and then intertwined the hour of the day and 'flag' hours = ["{:02d}".format(i) for i in range(24)] column_names = ['date'] + [item for pair in zip(hours, ['flag' + str(i) for i in range(24)]) for item in pair] data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t', header=None, names=column_names, na_values=[-999, -9999]) data.head() ###Output _____no_output_____ ###Markdown For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data). EXERCISE:Drop all 'flag' columns ('flag1', 'flag2', ...) ###Code flag_columns = [col for col in data.columns if 'flag' in col] # we can now use this list to drop these columns data = data.drop(flag_columns, axis=1) data.head() ###Output _____no_output_____ ###Markdown Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index. Here we have a wide and long dataframe, and want to make this a long, narrow timeseries. REMEMBER: Recap: reshaping your data with [`stack` / `melt` and `unstack` / `pivot`](./pandas_07_reshaping_data.ipynb) EXERCISE:Reshape the dataframe to a timeseries. 
The end result should look like: BETR801 1990-01-02 09:00:00 48.0 1990-01-02 12:00:00 48.0 1990-01-02 13:00:00 50.0 1990-01-02 14:00:00 55.0 ... ... 2012-12-31 20:00:00 16.5 2012-12-31 21:00:00 14.5 2012-12-31 22:00:00 16.5 2012-12-31 23:00:00 15.0 170794 rows × 1 columns Reshape the dataframe so that each row consists of one observation for one date + hour combination When you have the date and hour values as two columns, combine these columns into a datetime (tip: string columns can be summed to concatenate the strings) and remove the original columns Set the new datetime values as the index, and remove the original columns with date and hour values**NOTE**: This is an advanced exercise. Do not spend too much time on it and don't hesitate to look at the solutions. Reshaping using `melt`: ###Code data_stacked = pd.melt(data, id_vars=['date'], var_name='hour') data_stacked.head() ###Output _____no_output_____ ###Markdown Reshaping using `stack`: ###Code # we use stack to reshape the data to move the hours (the column labels) into a column. # But we don't want to move the 'date' column label, therefore we first set this as the index. # You can check the difference with "data.stack()" data_stacked = data.set_index('date').stack() data_stacked.head() # We reset the index to have the date and hours available as columns data_stacked = data_stacked.reset_index() data_stacked = data_stacked.rename(columns={'level_1': 'hour'}) data_stacked.head() ###Output _____no_output_____ ###Markdown Combine date and hour: ###Code # Now we combine the dates and the hours into a datetime, and set this as the index data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['hour'], format="%Y-%m-%d%H") # Drop the origal date and hour columns data_stacked = data_stacked.drop(['date', 'hour'], axis=1) data_stacked.head() # rename the remaining column to the name of the measurement station # (this is 0 or 'value' depending on which method was used) data_stacked = data_stacked.rename(columns={0: 'BETR801'}) data_stacked.head() ###Output _____no_output_____ ###Markdown Our final data is now a time series. In pandas, this means that the index is a `DatetimeIndex`: ###Code data_stacked.index data_stacked.plot() ###Output _____no_output_____ ###Markdown Processing a collection of files We now have seen the code steps to process one of the files. We have however multiple files for the different stations with the same structure. Therefore, to not have to repeat the actual code, let's make a function from the steps we have seen above. EXERCISE: Write a function read_airbase_file(filename, station), using the above steps the read in and process the data, and that returns a processed timeseries. ###Code def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. """ ... return ... def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. 
""" # construct the column names hours = ["{:02d}".format(i) for i in range(24)] flags = ['flag' + str(i) for i in range(24)] colnames = ['date'] + [item for pair in zip(hours, flags) for item in pair] # read the actual data data = pd.read_csv(filename, sep='\t', header=None, na_values=[-999, -9999], names=colnames) # drop the 'flag' columns data = data.drop([col for col in data.columns if 'flag' in col], axis=1) # reshape data_stacked = pd.melt(data, id_vars=['date'], var_name='hour') # parse to datetime and remove redundant columns data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['hour'], format="%Y-%m-%d%H") data_stacked = data_stacked.drop(['date', 'hour'], axis=1) data_stacked = data_stacked.rename(columns={'value': station}) return data_stacked ###Output _____no_output_____ ###Markdown Test the function on the data file from above: ###Code import os filename = "data/BETR8010000800100hour.1-1-1990.31-12-2012" station = os.path.split(filename)[-1][:7] station test = read_airbase_file(filename, station) test.head() ###Output _____no_output_____ ###Markdown We now want to use this function to read in all the different data files from AirBase, and combine them into one Dataframe. EXERCISE: Use the glob.glob function to list all 4 AirBase data files that are included in the 'data' directory, and call the result data_files. ###Code import glob data_files = glob.glob("data/*0008001*") data_files ###Output _____no_output_____ ###Markdown EXERCISE: Loop over the data files, read and process the file using our defined function, and append the dataframe to a list. Combine the the different DataFrames in the list into a single DataFrame where the different columns are the different stations. Call the result combined_data. ###Code dfs = [] for filename in data_files: station = filename.split("/")[-1][:7] df = read_airbase_file(filename, station) dfs.append(df) combined_data = pd.concat(dfs, axis=1) combined_data.head() ###Output _____no_output_____ ###Markdown Finally, we don't want to have to repeat this each time we use the data. Therefore, let's save the processed data to a csv file. ###Code # let's first give the index a descriptive name combined_data.index.name = 'datetime' combined_data.to_csv("airbase_data_processed.csv") ###Output _____no_output_____ ###Markdown CASE - air quality data of European monitoring stations (AirBase)> *© 2021, Joris Van den Bossche and Stijn Van Hoey (, ). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*--- **AirBase** is the European air quality database maintained by the European Environment Agency (EEA). It contains air quality monitoring data and information submitted by participating countries throughout Europe. The [air quality database](https://www.eea.europa.eu/data-and-maps/data/aqereporting-8/air-quality-zone-geometries) consists of a multi-annual time series of air quality measurement data and statistics for a number of air pollutants. 
Some of the data files that are available from AirBase were included in the data folder: the **hourly concentrations of nitrogen dioxide (NO2)** for 4 different measurement stations:- FR04037 (PARIS 13eme): urban background site at Square de Choisy- FR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia- BETR802: urban traffic site in Antwerp, Belgium- BETN029: rural background site in Houtem, BelgiumSee http://www.eea.europa.eu/themes/air/interactive/no2 ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Processing a single fileWe will start with processing one of the downloaded files (`BETR8010000800100hour.1-1-1990.31-12-2012`). Looking at the data, you will see it does not look like a nice csv file: ###Code with open("data/BETR8010000800100hour.1-1-1990.31-12-2012") as f: print(f.readline()) ###Output _____no_output_____ ###Markdown So we will need to do some manual processing. Just reading the tab-delimited data: ###Code data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')#, header=None) data.head() ###Output _____no_output_____ ###Markdown The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names. EXERCISE: Clean up this dataframe by using more options of `pd.read_csv` (see its [docstring](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)) specify the correct delimiter specify that the values of -999 and -9999 should be regarded as NaN specify our own column names (for how the column names are made up, see http://stackoverflow.com/questions/6356041/python-intertwining-two-lists) ###Code # Column names: list consisting of 'date' and then intertwined the hour of the day and 'flag' hours = ["{:02d}".format(i) for i in range(24)] column_names = ['date'] + [item for pair in zip(hours, ['flag' + str(i) for i in range(24)]) for item in pair] data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t', header=None, names=column_names, na_values=[-999, -9999]) data.head() ###Output _____no_output_____ ###Markdown For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data). EXERCISE:Drop all 'flag' columns ('flag1', 'flag2', ...) ###Code flag_columns = [col for col in data.columns if 'flag' in col] # we can now use this list to drop these columns data = data.drop(flag_columns, axis=1) data.head() ###Output _____no_output_____ ###Markdown Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index. Here we have a wide and long dataframe, and want to make this a long, narrow timeseries. REMEMBER: Recap: reshaping your data with [`stack` / `melt` and `unstack` / `pivot`](./pandas_07_reshaping_data.ipynb) EXERCISE:Reshape the dataframe to a timeseries. The end result should look like: BETR801 1990-01-02 09:00:00 48.0 1990-01-02 12:00:00 48.0 1990-01-02 13:00:00 50.0 1990-01-02 14:00:00 55.0 ... ... 
2012-12-31 20:00:00 16.5 2012-12-31 21:00:00 14.5 2012-12-31 22:00:00 16.5 2012-12-31 23:00:00 15.0 170794 rows × 1 columns Reshape the dataframe so that each row consists of one observation for one date + hour combination When you have the date and hour values as two columns, combine these columns into a datetime (tip: string columns can be summed to concatenate the strings) and remove the original columns Set the new datetime values as the index, and remove the original columns with date and hour values**NOTE**: This is an advanced exercise. Do not spend too much time on it and don't hesitate to look at the solutions. Reshaping using `melt`: ###Code data_stacked = pd.melt(data, id_vars=['date'], var_name='hour') data_stacked.head() ###Output _____no_output_____ ###Markdown Reshaping using `stack`: ###Code # we use stack to reshape the data to move the hours (the column labels) into a column. # But we don't want to move the 'date' column label, therefore we first set this as the index. # You can check the difference with "data.stack()" data_stacked = data.set_index('date').stack() data_stacked.head() # We reset the index to have the date and hours available as columns data_stacked = data_stacked.reset_index() data_stacked = data_stacked.rename(columns={'level_1': 'hour'}) data_stacked.head() ###Output _____no_output_____ ###Markdown Combine date and hour: ###Code # Now we combine the dates and the hours into a datetime, and set this as the index data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['hour'], format="%Y-%m-%d%H") # Drop the origal date and hour columns data_stacked = data_stacked.drop(['date', 'hour'], axis=1) data_stacked.head() # rename the remaining column to the name of the measurement station # (this is 0 or 'value' depending on which method was used) data_stacked = data_stacked.rename(columns={0: 'BETR801'}) data_stacked.head() ###Output _____no_output_____ ###Markdown Our final data is now a time series. In pandas, this means that the index is a `DatetimeIndex`: ###Code data_stacked.index data_stacked.plot() ###Output _____no_output_____ ###Markdown Processing a collection of files We now have seen the code steps to process one of the files. We have however multiple files for the different stations with the same structure. Therefore, to not have to repeat the actual code, let's make a function from the steps we have seen above. EXERCISE: Write a function read_airbase_file(filename, station), using the above steps the read in and process the data, and that returns a processed timeseries. ###Code def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. """ ... return ... def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. 
""" # construct the column names hours = ["{:02d}".format(i) for i in range(24)] flags = ['flag' + str(i) for i in range(24)] colnames = ['date'] + [item for pair in zip(hours, flags) for item in pair] # read the actual data data = pd.read_csv(filename, sep='\t', header=None, na_values=[-999, -9999], names=colnames) # drop the 'flag' columns data = data.drop([col for col in data.columns if 'flag' in col], axis=1) # reshape data_stacked = pd.melt(data, id_vars=['date'], var_name='hour') # parse to datetime and remove redundant columns data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['hour'], format="%Y-%m-%d%H") data_stacked = data_stacked.drop(['date', 'hour'], axis=1) data_stacked = data_stacked.rename(columns={'value': station}) return data_stacked ###Output _____no_output_____ ###Markdown Test the function on the data file from above: ###Code import os filename = "data/BETR8010000800100hour.1-1-1990.31-12-2012" station = os.path.split(filename)[-1][:7] station test = read_airbase_file(filename, station) test.head() ###Output _____no_output_____ ###Markdown We now want to use this function to read in all the different data files from AirBase, and combine them into one Dataframe. EXERCISE: Use the glob.glob function to list all 4 AirBase data files that are included in the 'data' directory, and call the result data_files. ###Code import glob data_files = glob.glob("data/*0008001*") data_files ###Output _____no_output_____ ###Markdown EXERCISE: Loop over the data files, read and process the file using our defined function, and append the dataframe to a list. Combine the the different DataFrames in the list into a single DataFrame where the different columns are the different stations. Call the result combined_data. ###Code dfs = [] for filename in data_files: station = filename.split("/")[-1][:7] df = read_airbase_file(filename, station) dfs.append(df) combined_data = pd.concat(dfs, axis=1) combined_data.head() ###Output _____no_output_____ ###Markdown Finally, we don't want to have to repeat this each time we use the data. Therefore, let's save the processed data to a csv file. ###Code # let's first give the index a descriptive name combined_data.index.name = 'datetime' combined_data.to_csv("airbase_data_processed.csv") ###Output _____no_output_____ ###Markdown CASE - air quality data of European monitoring stations (AirBase)> *© 2021, Joris Van den Bossche and Stijn Van Hoey (, ). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*--- **AirBase** is the European air quality database maintained by the European Environment Agency (EEA). It contains air quality monitoring data and information submitted by participating countries throughout Europe. The [air quality database](https://www.eea.europa.eu/data-and-maps/data/aqereporting-8/air-quality-zone-geometries) consists of a multi-annual time series of air quality measurement data and statistics for a number of air pollutants. 
Some of the data files that are available from AirBase were included in the data folder: the **hourly concentrations of nitrogen dioxide (NO2)** for 4 different measurement stations:- FR04037 (PARIS 13eme): urban background site at Square de Choisy- FR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia- BETR802: urban traffic site in Antwerp, Belgium- BETN029: rural background site in Houtem, BelgiumSee http://www.eea.europa.eu/themes/air/interactive/no2 ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt ###Output _____no_output_____ ###Markdown Processing a single fileWe will start with processing one of the downloaded files (`BETR8010000800100hour.1-1-1990.31-12-2012`). Looking at the data, you will see it does not look like a nice csv file: ###Code with open("data/BETR8010000800100hour.1-1-1990.31-12-2012") as f: print(f.readline()) ###Output 1990-01-01 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 ###Markdown So we will need to do some manual processing. Just reading the tab-delimited data: ###Code data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')#, header=None) data.head() ###Output _____no_output_____ ###Markdown The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names. EXERCISE: Clean up this dataframe by using more options of `pd.read_csv` (see its [docstring](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)) specify the correct delimiter specify that the values of -999 and -9999 should be regarded as NaN specify our own column names (for how the column names are made up, see http://stackoverflow.com/questions/6356041/python-intertwining-two-lists) ###Code # Column names: list consisting of 'date' and then intertwined the hour of the day and 'flag' hours = ["{:02d}".format(i) for i in range(24)] column_names = ['date'] + [item for pair in zip(hours, ['flag' + str(i) for i in range(24)]) for item in pair] data = pd.read_csv("data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t', header=None, names=column_names, na_values=[-999, -9999]) data.head() ###Output _____no_output_____ ###Markdown For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data). **EXERCISE**:Drop all 'flag' columns ('flag1', 'flag2', ...) ###Code flag_columns = [col for col in data.columns if 'flag' in col] # we can now use this list to drop these columns data = data.drop(flag_columns, axis=1) data.head() ###Output _____no_output_____ ###Markdown Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index. Here we have a wide and long dataframe, and want to make this a long, narrow timeseries. REMEMBER: Recap: reshaping your data with [`stack` / `melt` and `unstack` / `pivot`](./pandas_07_reshaping_data.ipynb) EXERCISE:Reshape the dataframe to a timeseries. The end result should look like: BETR801 1990-01-02 09:00:00 48.0 1990-01-02 12:00:00 48.0 1990-01-02 13:00:00 50.0 1990-01-02 14:00:00 55.0 ... ... 
2012-12-31 20:00:00 16.5 2012-12-31 21:00:00 14.5 2012-12-31 22:00:00 16.5 2012-12-31 23:00:00 15.0 170794 rows × 1 columns Reshape the dataframe so that each row consists of one observation for one date + hour combination When you have the date and hour values as two columns, combine these columns into a datetime (tip: string columns can be summed to concatenate the strings) and remove the original columns Set the new datetime values as the index, and remove the original columns with date and hour values**NOTE**: This is an advanced exercise. Do not spend too much time on it and don't hesitate to look at the solutions. Reshaping using `melt`: ###Code data_stacked = pd.melt(data, id_vars=['date'], var_name='hour') data_stacked.head() ###Output _____no_output_____ ###Markdown Reshaping using `stack`: ###Code # we use stack to reshape the data to move the hours (the column labels) into a column. # But we don't want to move the 'date' column label, therefore we first set this as the index. # You can check the difference with "data.stack()" data_stacked = data.set_index('date').stack() data_stacked.head() # We reset the index to have the date and hours available as columns data_stacked = data_stacked.reset_index() data_stacked = data_stacked.rename(columns={'level_1': 'hour'}) data_stacked.head() ###Output _____no_output_____ ###Markdown Combine date and hour: ###Code # Now we combine the dates and the hours into a datetime, and set this as the index data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['hour'], format="%Y-%m-%d%H") # Drop the origal date and hour columns data_stacked = data_stacked.drop(['date', 'hour'], axis=1) data_stacked.head() # rename the remaining column to the name of the measurement station # (this is 0 or 'value' depending on which method was used) data_stacked = data_stacked.rename(columns={0: 'BETR801'}) data_stacked.head() ###Output _____no_output_____ ###Markdown Our final data is now a time series. In pandas, this means that the index is a `DatetimeIndex`: ###Code data_stacked.index data_stacked.plot() ###Output _____no_output_____ ###Markdown Processing a collection of files We now have seen the code steps to process one of the files. We have however multiple files for the different stations with the same structure. Therefore, to not have to repeat the actual code, let's make a function from the steps we have seen above. EXERCISE: Write a function read_airbase_file(filename, station), using the above steps the read in and process the data, and that returns a processed timeseries. ###Code def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. """ ... return ... def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. 
""" # construct the column names hours = ["{:02d}".format(i) for i in range(24)] flags = ['flag' + str(i) for i in range(24)] colnames = ['date'] + [item for pair in zip(hours, flags) for item in pair] # read the actual data data = pd.read_csv(filename, sep='\t', header=None, na_values=[-999, -9999], names=colnames) # drop the 'flag' columns data = data.drop([col for col in data.columns if 'flag' in col], axis=1) # reshape data_stacked = pd.melt(data, id_vars=['date'], var_name='hour') # parse to datetime and remove redundant columns data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['hour'], format="%Y-%m-%d%H") data_stacked = data_stacked.drop(['date', 'hour'], axis=1) data_stacked = data_stacked.rename(columns={'value': station}) return data_stacked ###Output _____no_output_____ ###Markdown Test the function on the data file from above: ###Code import os filename = "data/BETR8010000800100hour.1-1-1990.31-12-2012" station = os.path.split(filename)[-1][:7] station test = read_airbase_file(filename, station) test.head() ###Output _____no_output_____ ###Markdown We now want to use this function to read in all the different data files from AirBase, and combine them into one Dataframe. **EXERCISE**:Use the [pathlib module](https://docs.python.org/3/library/pathlib.html) `Path` class in combination with the `glob` method to list all 4 AirBase data files that are included in the 'data' directory, and call the result `data_files`.Hints- The pathlib module provides a object oriented way to handle file paths. First, create a `Path` object of the data folder, `pathlib.Path("./data")`. Next, apply the `glob` function to extract all the files containing `*0008001*` (use wildcard * to say "any characters"). The output is a Python generator, which you can collect as a `list()`. ###Code from pathlib import Path data_folder = Path("./data") data_files = list(data_folder.glob("*0008001*")) data_files ###Output _____no_output_____ ###Markdown EXERCISE: Loop over the data files, read and process the file using our defined function, and append the dataframe to a list. Combine the the different DataFrames in the list into a single DataFrame where the different columns are the different stations. Call the result combined_data. ###Code dfs = [] for filename in data_files: station = filename.split("/")[-1][:7] df = read_airbase_file(filename, station) dfs.append(df) combined_data = pd.concat(dfs, axis=1) combined_data.head() ###Output _____no_output_____ ###Markdown Finally, we don't want to have to repeat this each time we use the data. Therefore, let's save the processed data to a csv file. ###Code # let's first give the index a descriptive name combined_data.index.name = 'datetime' combined_data.to_csv("airbase_data_processed.csv") ###Output _____no_output_____ ###Markdown Case study: air quality data of European monitoring stations (AirBase)**AirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from Europe. **> *DS Data manipulation, analysis and visualisation in Python* > *December, 2017*> *© 2016, Joris Van den Bossche and Stijn Van Hoey (, ). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*--- AirBase is the European air quality database maintained by the European Environment Agency (EEA). It contains air quality monitoring data and information submitted by participating countries throughout Europe. 
The air quality database consists of a multi-annual time series of air quality measurement data and statistics for a number of air pollutants. ###Code from IPython.display import HTML HTML('<iframe src=http://www.eea.europa.eu/data-and-maps/data/airbase-the-european-air-quality-database-8#tab-data-by-country width=900 height=350></iframe>') ###Output _____no_output_____ ###Markdown Some of the data files that are available from AirBase were included in the data folder: the hourly **concentrations of nitrogen dioxide (NO2)** for 4 different measurement stations:- FR04037 (PARIS 13eme): urban background site at Square de Choisy- FR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia- BETR802: urban traffic site in Antwerp, Belgium- BETN029: rural background site in Houtem, BelgiumSee http://www.eea.europa.eu/themes/air/interactive/no2 ###Code %matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt pd.options.display.max_rows = 8 plt.style.use("seaborn-whitegrid") ###Output _____no_output_____ ###Markdown Processing a single fileWe will start with processing one of the downloaded files (`BETR8010000800100hour.1-1-1990.31-12-2012`). Looking at the data, you will see it does not look like a nice csv file: ###Code with open("../data/BETR8010000800100hour.1-1-1990.31-12-2012") as f: print(f.readline()) ###Output 1990-01-01 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 ###Markdown So we will need to do some manual processing. Just reading the tab-delimited data: ###Code data = pd.read_csv("../data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')#, header=None) data.head() ###Output _____no_output_____ ###Markdown The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names. EXERCISE: Clean up this dataframe by using more options of `read_csv` (see its [docstring](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) specify the correct delimiter specify that the values of -999 and -9999 should be regarded as NaN specify are own column names (for how the column names are made up, see See http://stackoverflow.com/questions/6356041/python-intertwining-two-lists) ###Code # Column names: list consisting of 'date' and then intertwined the hour of the day and 'flag' hours = ["{:02d}".format(i) for i in range(24)] column_names = ['date'] + [item for pair in zip(hours, ['flag']*24) for item in pair] data = pd.read_csv("../data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t', header=None, names=column_names, na_values=[-999, -9999]) data.head() ###Output _____no_output_____ ###Markdown For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data). EXERCISE:Drop all 'flag' columns ('flag1', 'flag2', ...) ###Code flag_columns = [col for col in data.columns if 'flag' in col] # we can now use this list to drop these columns data = data.drop(flag_columns, axis=1) data.head() ###Output _____no_output_____ ###Markdown Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index. 
Here we have a wide and long dataframe, and want to make this a long, narrow timeseries. REMEMBER: Recap: reshaping your data with [`stack` and `unstack`](./pandas_07_reshaping_data.ipynb) EXERCISE:Reshape the dataframe to a timeseries. The end result should look like: BETR801 1990-01-02 09:00:00 48.0 1990-01-02 12:00:00 48.0 1990-01-02 13:00:00 50.0 1990-01-02 14:00:00 55.0 ... ... 2012-12-31 20:00:00 16.5 2012-12-31 21:00:00 14.5 2012-12-31 22:00:00 16.5 2012-12-31 23:00:00 15.0 170794 rows × 1 columns Reshape the dataframe so that each row consists of one observation for one date + hour combination When you have the date and hour values as two columns, combine these columns into a datetime (tip: string columns can be summed to concatenate the strings) and remove the original columns Set the new datetime values as the index, and remove the original columns with date and hour values**NOTE**: This is an advanced exercise. Do not spend too much time on it and don't hesitate to look at the solutions. ###Code # we use stack to reshape the data to move the hours (the column labels) into a column. # But we don't want to move the 'date' column label, therefore we first set this as the index. # You can check the difference with "data.stack()" data2 = data.set_index('date') data_stacked = data2.stack() data_stacked.head() # We reset the index to have the date and hours available as columns data_stacked = data_stacked.reset_index() data_stacked.head() # Now we combine the dates and the hours into a datetime, and set this as the index data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['level_1'], format="%Y-%m-%d%H") # Drop the origal date and hour columns data_stacked = data_stacked.drop(['date', 'level_1'], axis=1) data_stacked.head() # rename the remaining column to the name of the measurement station data_stacked = data_stacked.rename(columns={0: 'BETR801'}) data_stacked.head() ###Output _____no_output_____ ###Markdown Our final data is now a time series. In pandas, this means that the index is a `DatetimeIndex`: ###Code data_stacked.index data_stacked.plot() ###Output _____no_output_____ ###Markdown Processing a collection of files We now have seen the code steps to process one of the files. We have however multiple files for the different stations with the same structure. Therefore, to not have to repeat the actual code, let's make a function from the steps we have seen above. EXERCISE: Write a function `read_airbase_file(filename, station)`, using the above steps the read in and process the data, and that returns a processed timeseries. ###Code def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. """ ... return ... def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. 
""" # construct the column names hours = ["{:02d}".format(i) for i in range(24)] colnames = ['date'] + [item for pair in zip(hours, ['flag']*24) for item in pair] # read the actual data data = pd.read_csv(filename, sep='\t', header=None, na_values=[-999, -9999], names=colnames) # drop the 'flag' columns data = data.drop([col for col in data.columns if 'flag' in col], axis=1) # reshape data = data.set_index('date') data_stacked = data.stack() data_stacked = data_stacked.reset_index() # parse to datetime and remove redundant columns data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['level_1'], format="%Y-%m-%d%H") data_stacked = data_stacked.drop(['date', 'level_1'], axis=1) data_stacked = data_stacked.rename(columns={0: station}) return data_stacked ###Output _____no_output_____ ###Markdown Test the function on the data file from above: ###Code filename = "../data/BETR8010000800100hour.1-1-1990.31-12-2012" station = filename.split("/")[-1][:7] station test = read_airbase_file(filename, station) test.head() ###Output _____no_output_____ ###Markdown We now want to use this function to read in all the different data files from AirBase, and combine them into one Dataframe. EXERCISE: Use the `glob.glob` function to list all 4 AirBase data files that are included in the 'data' directory, and call the result `data_files`. ###Code import glob data_files = glob.glob("../data/*0008001*") data_files ###Output _____no_output_____ ###Markdown EXERCISE: Loop over the data files, read and process the file using our defined function, and append the dataframe to a list. Combine the the different DataFrames in the list into a single DataFrame where the different columns are the different stations. Call the result `combined_data`. ###Code dfs = [] for filename in data_files: station = filename.split("/")[-1][:7] df = read_airbase_file(filename, station) dfs.append(df) combined_data = pd.concat(dfs, axis=1) combined_data.head() ###Output _____no_output_____ ###Markdown Finally, we don't want to have to repeat this each time we use the data. Therefore, let's save the processed data to a csv file. ###Code combined_data.to_csv("airbase_data.csv") ###Output _____no_output_____ ###Markdown Case study: air quality data of European monitoring stations (AirBase)**AirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from Europe. **> *DS Data manipulation, analysis and visualisation in Python* > *December, 2017*> *© 2016, Joris Van den Bossche and Stijn Van Hoey (, ). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*--- AirBase is the European air quality database maintained by the European Environment Agency (EEA). It contains air quality monitoring data and information submitted by participating countries throughout Europe. The air quality database consists of a multi-annual time series of air quality measurement data and statistics for a number of air pollutants. 
###Code from IPython.display import HTML HTML('<iframe src=http://www.eea.europa.eu/data-and-maps/data/airbase-the-european-air-quality-database-8#tab-data-by-country width=900 height=350></iframe>') ###Output _____no_output_____ ###Markdown Some of the data files that are available from AirBase were included in the data folder: the hourly **concentrations of nitrogen dioxide (NO2)** for 4 different measurement stations:- FR04037 (PARIS 13eme): urban background site at Square de Choisy- FR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia- BETR802: urban traffic site in Antwerp, Belgium- BETN029: rural background site in Houtem, BelgiumSee http://www.eea.europa.eu/themes/air/interactive/no2 ###Code %matplotlib inline import pandas as pd import numpy as np import matplotlib.pyplot as plt pd.options.display.max_rows = 8 plt.style.use("seaborn-whitegrid") ###Output _____no_output_____ ###Markdown Processing a single fileWe will start with processing one of the downloaded files (`BETR8010000800100hour.1-1-1990.31-12-2012`). Looking at the data, you will see it does not look like a nice csv file: ###Code with open("../data/BETR8010000800100hour.1-1-1990.31-12-2012") as f: print(f.readline()) ###Output 1990-01-01 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 -999.000 0 ###Markdown So we will need to do some manual processing. Just reading the tab-delimited data: ###Code data = pd.read_csv("../data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t')#, header=None) data.head() ###Output _____no_output_____ ###Markdown The above data is clearly not ready to be used! Each row contains the 24 measurements for each hour of the day, and also contains a flag (0/1) indicating the quality of the data. Furthermore, there is no header row with column names. EXERCISE: Clean up this dataframe by using more options of `read_csv` (see its [docstring](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html)) specify the correct delimiter specify that the values of -999 and -9999 should be regarded as NaN specify are own column names (for how the column names are made up, see See http://stackoverflow.com/questions/6356041/python-intertwining-two-lists) ###Code # Column names: list consisting of 'date' and then intertwined the hour of the day and 'flag' hours = ["{:02d}".format(i) for i in range(24)] column_names = ['date'] + [item for pair in zip(hours, ['flag' + str(i) for i in range(24)]) for item in pair] data = pd.read_csv("../data/BETR8010000800100hour.1-1-1990.31-12-2012", sep='\t', header=None, names=column_names, na_values=[-999, -9999]) data.head() ###Output _____no_output_____ ###Markdown For the sake of this tutorial, we will disregard the 'flag' columns (indicating the quality of the data). EXERCISE:Drop all 'flag' columns ('flag1', 'flag2', ...) ###Code flag_columns = [col for col in data.columns if 'flag' in col] # we can now use this list to drop these columns data = data.drop(flag_columns, axis=1) data.head() ###Output _____no_output_____ ###Markdown Now, we want to reshape it: our goal is to have the different hours as row indices, merged with the date into a datetime-index. Here we have a wide and long dataframe, and want to make this a long, narrow timeseries. 
REMEMBER: Recap: reshaping your data with [`stack` / `melt` and `unstack` / `pivot`](./pandas_07_reshaping_data.ipynb) EXERCISE:Reshape the dataframe to a timeseries. The end result should look like: BETR801 1990-01-02 09:00:00 48.0 1990-01-02 12:00:00 48.0 1990-01-02 13:00:00 50.0 1990-01-02 14:00:00 55.0 ... ... 2012-12-31 20:00:00 16.5 2012-12-31 21:00:00 14.5 2012-12-31 22:00:00 16.5 2012-12-31 23:00:00 15.0 170794 rows × 1 columns Reshape the dataframe so that each row consists of one observation for one date + hour combination When you have the date and hour values as two columns, combine these columns into a datetime (tip: string columns can be summed to concatenate the strings) and remove the original columns Set the new datetime values as the index, and remove the original columns with date and hour values**NOTE**: This is an advanced exercise. Do not spend too much time on it and don't hesitate to look at the solutions. Reshaping using `melt`: ###Code data_stacked = pd.melt(data, id_vars=['date'], var_name='hour') data_stacked.head() ###Output _____no_output_____ ###Markdown Reshaping using `stack`: ###Code # we use stack to reshape the data to move the hours (the column labels) into a column. # But we don't want to move the 'date' column label, therefore we first set this as the index. # You can check the difference with "data.stack()" data_stacked = data.set_index('date').stack() data_stacked.head() # We reset the index to have the date and hours available as columns data_stacked = data_stacked.reset_index() data_stacked = data_stacked.rename(columns={'level_1': 'hour'}) data_stacked.head() ###Output _____no_output_____ ###Markdown Combine date and hour: ###Code # Now we combine the dates and the hours into a datetime, and set this as the index data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['hour'], format="%Y-%m-%d%H") # Drop the origal date and hour columns data_stacked = data_stacked.drop(['date', 'hour'], axis=1) data_stacked.head() # rename the remaining column to the name of the measurement station # (this is 0 or 'value' depending on which method was used) data_stacked = data_stacked.rename(columns={0: 'BETR801'}) data_stacked.head() ###Output _____no_output_____ ###Markdown Our final data is now a time series. In pandas, this means that the index is a `DatetimeIndex`: ###Code data_stacked.index data_stacked.plot() ###Output _____no_output_____ ###Markdown Processing a collection of files We now have seen the code steps to process one of the files. We have however multiple files for the different stations with the same structure. Therefore, to not have to repeat the actual code, let's make a function from the steps we have seen above. EXERCISE: Write a function `read_airbase_file(filename, station)`, using the above steps the read in and process the data, and that returns a processed timeseries. ###Code def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. """ ... return ... def read_airbase_file(filename, station): """ Read hourly AirBase data files. Parameters ---------- filename : string Path to the data file. station : string Name of the station. Returns ------- DataFrame Processed dataframe. 
""" # construct the column names hours = ["{:02d}".format(i) for i in range(24)] flags = ['flag' + str(i) for i in range(24)] colnames = ['date'] + [item for pair in zip(hours, flags) for item in pair] # read the actual data data = pd.read_csv(filename, sep='\t', header=None, na_values=[-999, -9999], names=colnames) # drop the 'flag' columns data = data.drop([col for col in data.columns if 'flag' in col], axis=1) # reshape data = data.set_index('date') data_stacked = data.stack() data_stacked = data_stacked.reset_index() # parse to datetime and remove redundant columns data_stacked.index = pd.to_datetime(data_stacked['date'] + data_stacked['level_1'], format="%Y-%m-%d%H") data_stacked = data_stacked.drop(['date', 'level_1'], axis=1) data_stacked = data_stacked.rename(columns={0: station}) return data_stacked ###Output _____no_output_____ ###Markdown Test the function on the data file from above: ###Code filename = "../data/BETR8010000800100hour.1-1-1990.31-12-2012" station = filename.split("/")[-1][:7] station test = read_airbase_file(filename, station) test.head() ###Output _____no_output_____ ###Markdown We now want to use this function to read in all the different data files from AirBase, and combine them into one Dataframe. EXERCISE: Use the `glob.glob` function to list all 4 AirBase data files that are included in the 'data' directory, and call the result `data_files`. ###Code import glob data_files = glob.glob("../data/*0008001*") data_files ###Output _____no_output_____ ###Markdown EXERCISE: Loop over the data files, read and process the file using our defined function, and append the dataframe to a list. Combine the the different DataFrames in the list into a single DataFrame where the different columns are the different stations. Call the result `combined_data`. ###Code dfs = [] for filename in data_files: station = filename.split("/")[-1][:7] df = read_airbase_file(filename, station) dfs.append(df) combined_data = pd.concat(dfs, axis=1) combined_data.head() ###Output _____no_output_____ ###Markdown Finally, we don't want to have to repeat this each time we use the data. Therefore, let's save the processed data to a csv file. ###Code # let's first give the index a descriptive name combined_data.index.name = 'datetime' combined_data.to_csv("../data/airbase_data_processed.csv") ###Output _____no_output_____
term1/prjt1-Lanelines/.ipynb_checkpoints/P1-checkpoint.ipynb
###Markdown Self-Driving Car Engineer Nanodegree Project: **Finding Lane Lines on the Road** ***In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/322/view) for this project.---Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**--- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**--- Your output should look something like this (above) after detecting line segments using the helper functions below Your goal is to connect/average/extrapolate line segments to get output like this **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. 
Also, consult the forums for more troubleshooting tips.** Import Packages ###Code #importing some useful packages import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 %matplotlib inline ###Output _____no_output_____ ###Markdown Read in an Image ###Code #reading in an image image = mpimg.imread('test_images/solidWhiteRight.jpg') #printing out some stats and plotting print('This image is:', type(image), 'with dimensions:', image.shape) plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray') ###Output This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3) ###Markdown Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**`cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images`cv2.cvtColor()` to grayscale or change color`cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson! ###Code import math def grayscale(img): """Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale (assuming your grayscaled image is called 'gray') you should call plt.imshow(gray, cmap='gray')""" return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): """Applies the Canny transform""" return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): """Applies a Gaussian Noise kernel""" return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def draw_lines(img, lines, color=[255, 0, 0], thickness=2): """ NOTE: this is the function you might want to use as a starting point once you want to average/extrapolate the line segments you detect to map out the full extent of the lane (going from the result shown in raw-lines-example.mp4 to that shown in P1_example.mp4). Think about things like separating line segments by their slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left line vs. the right line. Then, you can average the position of each of the lines and extrapolate to the top and bottom of the lane. This function draws `lines` with `color` and `thickness`. Lines are drawn on the image inplace (mutates the image). 
If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below """ imgwidth, imglength, _ = img.shape agg_lines = [] left_lines = [] right_lines = [] for line in lines: for x1,y1,x2,y2 in line: if (y2-y1)/(x2-x1) < 0: left_lines.append(line) elif (y2-y1)/(x2-x1) > 0: right_lines.append(line) # Average lines and extrapolate to solid lines if left_lines: x1_mean, y1_mean, x2_mean, y2_mean = np.mean(left_lines, axis=0, dtype=int)[0] grad_left = (y2_mean - y1_mean) / (x2_mean - x1_mean) intercept_left = y1_mean - (grad_left * x1_mean) y1_left = imglength x1_left = (y1_left - intercept_left) / grad_left x2_left = 0.45 * imglength y2_left = (grad_left * x2_left) + intercept_left agg_lines.append(np.array([[x1_left, y1_left, x2_left, y2_left]], dtype=int)) if right_lines: x1_mean, y1_mean, x2_mean, y2_mean = np.mean(right_lines, axis=0, dtype=int)[0] grad_right = (y2_mean - y1_mean) / (x2_mean - x1_mean) intercept_right = y1_mean - (grad_right * x1_mean) y1_right = imglength x1_right = (y1_right - intercept_right) / grad_right x2_right = 0.55 * imglength y2_right = (grad_right * x2_right) + intercept_right agg_lines.append(np.array([[x1_right, y1_right, x2_right, y2_right]], dtype=int)) for line in agg_lines: for x1,y1,x2,y2 in line: cv2.line(img, (x1, y1), (x2, y2), color, thickness) def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap): """ `img` should be the output of a Canny transform. Returns an image with hough lines drawn. """ lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) draw_lines(line_img, lines) return line_img # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., λ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. The result image is computed as follows: initial_img * α + img * β + λ NOTE: initial_img and img must be the same shape! """ return cv2.addWeighted(initial_img, α, img, β, λ) ###Output _____no_output_____ ###Markdown Test ImagesBuild your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.** ###Code import os os.listdir("test_images/") ###Output _____no_output_____ ###Markdown Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters. ###Code # TODO: Build your pipeline that will draw lane lines on the test_images # then save them to the test_images_output directory. 
## define some parameters kernel_size = 9 low_threshold = 100 high_threshold = 200 ignore_mask_color = 255 # Hough transform parameters rho = 1 # distance resolution in pixels of the Hough grid theta = np.pi/180 # angular resolution in radians of the Hough grid threshold = 50 # minimum number of votes (intersections in Hough grid cell) min_line_length = 20 #minimum number of pixels making up a line max_line_gap = 100 # maximum gap in pixels between connectable line segments for img in os.listdir("test_images/"): image = mpimg.imread("test_images/" + img) imshape = image.shape w = imshape[0] l = imshape[1] # 1. convert to grayscale gray = grayscale(image) # 2. blur image to reduce noise gray = gaussian_blur(gray, kernel_size) # 3. Apply canny edge detector edges = canny(gray, low_threshold, high_threshold) # 4. Select region of interest # Region is a triangle with the apex at the middle of the image # and the base at 1/10 of the length of the image mask = np.zeros_like(edges) vertices = np.array([[(l/10, w),(l/2, w/2), (9*l/10, w)]], dtype=np.int32) cv2.fillPoly(mask, vertices, ignore_mask_color) masked_edges = cv2.bitwise_and(edges, mask) # 5. Use the probabilistic Hough tranform to detect lines lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]), min_line_length, max_line_gap) # 6. Draw lines on the image and save image draw_lines(image, lines, color=[0, 0, 255], thickness=2) output_img_name = "test_images_output/" + img[:-4] + '_output.jpg' cv2.imwrite(output_img_name, image) ###Output _____no_output_____ ###Markdown Test on VideosYou know what's cooler than drawing lanes over images? Drawing lanes over video!We can test our solution on two provided videos:`solidWhiteRight.mp4``solidYellowLeft.mp4`**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.****If you get an error that looks like this:**```NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download()```**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.** ###Code # Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): ###----Parameters--------------------- kernel_size = 9 low_threshold = 100 high_threshold = 200 ignore_mask_color = 255 # Hough transform parameters rho = 1 # distance resolution in pixels of the Hough grid theta = np.pi/180 # angular resolution in radians of the Hough grid threshold = 50 # minimum number of votes (intersections in Hough grid cell) min_line_length = 20 #minimum number of pixels making up a line max_line_gap = 100 # maximum gap in pixels between connectable line segments imshape = image.shape w = imshape[0] l = imshape[1] #-------------------------------------- # 1. convert to grayscale gray = grayscale(image) # 2. blur image to reduce noise gray = gaussian_blur(gray, kernel_size) # 3. Apply canny edge detector edges = canny(gray, low_threshold, high_threshold) # 4. 
Select region of interest # Region is a triangle with the apex at the middle of the image # and the base at 1/10 of the length of the image mask = np.zeros_like(edges) vertices = np.array([[(l/10, w),(l/2, w/2), (9*l/10, w)]], dtype=np.int32) cv2.fillPoly(mask, vertices, ignore_mask_color) masked_edges = cv2.bitwise_and(edges, mask) # 5. Use the probabilistic Hough tranform to detect lines lines = cv2.HoughLinesP(masked_edges, rho, theta, threshold, np.array([]), min_line_length, max_line_gap) # 6. Draw lines on the image and save image # draw_lines(image, lines, thickness=2) # before extrapolation image_zeros = np.zeros_like(image) draw_lines(image_zeros, lines, thickness=8) result = weighted_img(image_zeros, image) return result ###Output _____no_output_____ ###Markdown Let's try the one with the solid white lane on the right first ... ###Code white_output = 'test_videos_output/solidWhiteRight.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5) clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! %time white_clip.write_videofile(white_output, audio=False) ###Output [MoviePy] >>>> Building video test_videos_output/solidWhiteRight.mp4 [MoviePy] Writing video test_videos_output/solidWhiteRight.mp4 ###Markdown Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice. ###Code HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(white_output)) ###Output _____no_output_____ ###Markdown Improve the draw_lines() function**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".****Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky! 
###Code yellow_output = 'test_videos_output/solidYellowLeft.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5) clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellow_output)) ###Output _____no_output_____ ###Markdown Writeup and SubmissionIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. Optional ChallengeTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! ###Code challenge_output = 'test_videos_output/challenge.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5) clip3 = VideoFileClip('test_videos/challenge.mp4') challenge_clip = clip3.fl_image(process_image) %time challenge_clip.write_videofile(challenge_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(challenge_output)) ###Output _____no_output_____
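###Markdown One way to make the pipeline more robust on the challenge video (an addition assumed here, not something the project requires) is to smooth the two extrapolated lane lines across consecutive frames, for example with an exponential moving average of their endpoints, so that a single noisy frame cannot make the overlay jump. A rough sketch; the `prev_lines` cache, the `alpha` blending factor and the assumption that `new_lines` arrives as `[left_line, right_line]` NumPy arrays are all hypothetical: ###Code
# Hypothetical frame-to-frame smoothing of the extrapolated lane lines.
# new_lines is assumed to be [left_line, right_line], each a NumPy array of
# endpoints [[x1, y1, x2, y2]]; alpha controls how strongly the newest
# detection is blended in (alpha=1 would mean no smoothing at all).
prev_lines = {}

def smooth_lines(new_lines, alpha=0.3):
    smoothed = []
    for side, line in zip(('left', 'right'), new_lines):
        if side in prev_lines:
            line = alpha * line + (1 - alpha) * prev_lines[side]
        prev_lines[side] = line
        smoothed.append(line)
    return smoothed
###Output _____no_output_____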
examples/data_cleaning/data_combining.ipynb
###Markdown Data CleaningThis notebook is for gathering and combining our data. The output should be a combination of all our data in a csv format.The code involves importing the weather data and the solar generation data, joining the two datasets, then outputting the dataset for future usage.The code was run on Google Colaboratory, which is why there are odd import syntax. ###Code # Link that has all the property description. https://nsrdb.nrel.gov/about/u-s-data.html # Cloud type: https://www.ncdc.noaa.gov/cdr/atmospheric/avhrr-cloud-properties-patmos-x # Real-Time Data: https://midcdmz.nrel.gov/ import glob import pandas as pd import numpy as np import matplotlib.pyplot as plt # Weather dataset dataset = glob.glob('./NREL_Weather_Data/*.csv') dataset df = pd.DataFrame() # We only want data from 2013 onwards since the solar data before 2013 is poorly documented for csv in dataset[6:]: temp_df = pd.read_csv(csv,skiprows=2) df = df.append(temp_df, ignore_index=True) df df.columns # Importing the cleaned_dataset from the google drive file_location = './Fuel_Generation_data/cleaned_solar_generation_data.csv' cleaned_dataset=pd.read_csv(file_location, sep='\t') # Solar generation dataset cleaned_dataset str(df['Month'][0]).zfill(2) # Formatting the Date to match the solar data df['Date'] = df.apply(lambda row: str(row['Month']).zfill(2)+'/'+ str(row['Day']).zfill(2)+'/'+ str(row['Year']), axis=1) # FOR SOME REASON NEEDS TO BE RAN TWICE. HELP! df['Date'] = df.apply(lambda row: str(row['Month']).zfill(2)+'/'+ str(row['Day']).zfill(2)+'/'+ str(row['Year']), axis=1) # Formatting the hhmm to match the solar data df['hhmm'] = df.apply(lambda row: str(row['Hour']).zfill(2)+':'+ str(row['Minute']).zfill(2), axis=1) df # Inner joining the weather and solar data on date. joined_df = pd.merge(df, cleaned_dataset, on=['Date', 'hhmm']) joined_df # Plotting the solar generation plt.plot(joined_df['Solar']) ###Output _____no_output_____ ###Markdown Because the above graph shows a trend of solar generation increasing, we realized that there could be increase in capacity over the years due to clean energy efforts. We needed to find a capacity data to normalize these values to the potential capacity. 
###Code # Creating a dataframe to interpolate our solar generation data installed_capacity = pd.DataFrame() for years in range(2013,2019): for months in range(1,13): row = [{"Year": years, "Month": months}] installed_capacity = installed_capacity.append(row, ignore_index=True) installed_capacity # "installed" is the installed capacity value installed = [121,193,288,566,1069,1858] installed_capacity_no_inter = pd.DataFrame({"Year":range(2013,2019), "Installed": installed}) installed_capacity_no_inter # In order to interpolate the data, we needed the previous year (2012) data installed = [82,121,193,288,566,1069,1858] # We used numpy linear method to interpolate installed_row_lin = [] size = len(installed) - 1 for i in range(size): low = installed[i] high = installed[i+1] installed_row_lin.append(np.linspace(low, high, 12)) installed_row_lin = np.concatenate(installed_row_lin) # We also used numpy geometric method to interpolate installed_row_geom = [] size = len(installed) - 1 for i in range(size): low = installed[i] high = installed[i+1] installed_row_geom.append(np.geomspace(low, high, 12)) installed_row_geom = np.concatenate(installed_row_geom) # Comibining the linear and geometric interpolation values installed_capacity["installed_lin"] = installed_row_lin installed_capacity["installed_geom"] = installed_row_geom installed_capacity # Merging the installed capacity to the big df capacity_joined_df = joined_df.merge(installed_capacity, on=['Year','Month']) capacity_joined_df = capacity_joined_df.merge(installed_capacity_no_inter, on='Year') capacity_joined_df # Creating a new column with the normalized value of solar generation capacity_joined_df['Normalized_Solar'] = capacity_joined_df['Solar']/capacity_joined_df['Installed'] capacity_joined_df['Normalized_Solar_lin'] = capacity_joined_df['Solar']/capacity_joined_df['installed_lin'] capacity_joined_df['Normalized_Solar_geom'] = capacity_joined_df['Solar']/capacity_joined_df['installed_geom'] capacity_joined_df # Plotting the solar generation data # Here we show comparison among static, linear, and geometric interpolations fig, ax = plt.subplots(1, 3, figsize=(30,6)) x = range(capacity_joined_df.shape[0]) ax[0].plot(x,capacity_joined_df['Normalized_Solar']) ax[1].plot(x,capacity_joined_df['Normalized_Solar_lin']) ax[2].plot(x,capacity_joined_df['Normalized_Solar_geom']) # The dataframe index where the year changes. # Might be better to comb through the data through regex. year_index = [17519, 35039, 52559, 70079, 80299] for ax_i in range(3): for i in range(5): ax[ax_i].axvline(year_index[i], 0, 1, c="r") ax[0].set_title("No Interpolation") ax[1].set_title("linear") ax[2].set_title("geometric") ###Output _____no_output_____ ###Markdown This is better. It shows a normalized solar generation depending on the capacity change throughout the years. Here we have comparison among "No Interpolation", "Linear Interpolation", and "Geometric Interpolation" for our installed capacity data. Linear interpolation was accomplished with np.linspace whereas the geomtric interpolation was done with np.geomspace. The red lines indicate when the "Year" changes. One can see that in the "No Interpolation", the drops are correlated with the change in years. However, this behavior is less apparent in the linear and geometric interpolation graphs. 
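###Markdown To make the difference between the two schemes concrete, here is a small illustrative comparison on a single year-to-year step, using the 2016 and 2017 installed-capacity values (566 and 1069) from the list above: the linear version adds a constant amount each month, while the geometric version grows by a constant factor. ###Code
# Illustrative only: one capacity step interpolated both ways
lin = np.linspace(566, 1069, 12)
geom = np.geomspace(566, 1069, 12)

plt.plot(lin, label='linear')
plt.plot(geom, label='geometric')
plt.legend()
###Output _____no_output_____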
###Code capacity_joined_df.columns # Exporting the data to csv format path='/content/drive/Shared drives/EnergyForecaster/Dataset/' capacity_joined_df.to_csv(path+'with0_with_interpolation.csv', na_rep='NA', columns=['Date','hhmm', 'DHI', 'DNI', 'GHI', 'Clearsky DHI', 'Clearsky DNI', 'Clearsky GHI', 'Cloud Type', 'Dew Point', 'Solar Zenith Angle', 'Fill Flag', 'Surface Albedo', 'Wind Speed', 'Precipitable Water', 'Wind Direction', 'Relative Humidity', 'Temperature', 'Pressure', 'Solar', 'Normalized_Solar_lin', 'Normalized_Solar_geom', 'Normalized_Solar']) ###Output _____no_output_____ ###Markdown Data points where the solar generation is 0 is removed to prevent our machine learning models from receiving unnecessary info. The Random Forest algorithm showed much lower deviation after removing the 0s. ###Code no0Solar_df = capacity_joined_df[capacity_joined_df.Normalized_Solar != 0] no0Solar_df.dropna(inplace=True) no0Solar_df # Exporting the data to csv format path='/content/drive/Shared drives/EnergyForecaster/Dataset/' no0Solar_df.to_csv(path+'no0_with_interpolation.csv', na_rep='NA', columns=['Date','hhmm', 'DHI', 'DNI', 'GHI', 'Clearsky DHI', 'Clearsky DNI', 'Clearsky GHI', 'Cloud Type', 'Dew Point', 'Solar Zenith Angle', 'Fill Flag', 'Surface Albedo', 'Wind Speed', 'Precipitable Water', 'Wind Direction', 'Relative Humidity', 'Temperature', 'Pressure', 'Solar', 'Normalized_Solar_lin', 'Normalized_Solar_geom', 'Normalized_Solar']) ###Output _____no_output_____
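###Markdown As a quick sanity check (not part of the original notebook), one could confirm how many rows the zero-and-NaN filtering removed and that no missing values remain in the exported data: ###Code
# Number of rows dropped by the zero/NaN filtering
print(len(capacity_joined_df) - len(no0Solar_df))
# Remaining missing values (should be 0 after dropna)
print(no0Solar_df.isnull().sum().sum())
###Output _____no_output_____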
week-6/week-6-1-election-prediction.ipynb
###Markdown Week 6-1: Election Prediction through simulationThis is the first of two classes on election prediction. We'll be using simulation throughout to build our model.All data is downloaded from [Huffington Post Pollster](http://elections.huffingtonpost.com/pollsterhistorical-charts)Further references for your enjoyment:- [The Real Story of 2016](https://fivethirtyeight.com/features/the-real-story-of-2016/) - Fivethirtyeight- Buzzfeed's post-election [forecast grades](https://www.buzzfeednews.com/article/jsvine/2016-election-forecast-grades)- [Putting the Polling Miss of the 2016 Election in Perspective](https://www.nytimes.com/interactive/2016/11/13/upshot/putting-the-polling-miss-of-2016-in-perspective.html) - The Upshot- [After 2016, Can We Ever Trust the Polls Again?](https://newrepublic.com/article/139158/2016-can-ever-trust-polls-again) - The New Republic And finally, the single biggest reason that the simple election prediction model in this file misses so badly (predicting Clinton's chances in the high 90s): it does not take into account the [correlations between polling errors](https://www.quantamagazine.org/why-nate-silver-sam-wang-and-everyone-else-were-wrong-part-2-20161111/) in different states. If we fix this one factor, even our simple model will give Trump substantially higher chances.To see what goes into a much more realistic election model, check out this [notebook recreation of 538's 2012 model](http://nbviewer.jupyter.org/github/jseabold/538model/blob/master/silver_model.ipynb) by by Skipper Seabold. Part 1: Simulating one pollHere we'll produce simulated election outcomes from a single poll. We are uncritically taking the poll results as an unbiased inidicator of results. This assumes that the people who are polled ("likely voters") are a good representation of the people who actually vote. It is possible to adjust these sorts of factors later, but let's begin with the basics. ###Code import math import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Set a random seed so this whole notebook becomes deterministic (for teaching purposes) np.random.seed(999) # Load national polling data. It's a TSV file, so we have to tell read_csv it's separated by tabs uspolls = pd.read_csv('data/US.tsv', sep='\t') uspolls.head() # keep only polls of "likely voters" (as opposed to registered voters, or republicans/democracts) uspolls=uspolls[uspolls.sample_subpopulation == 'Likely Voters'] # We need a margin of error to do our simulation, so drop any rows that don't have it uspolls=uspolls[~pd.isnull(uspolls.margin_of_error)].reset_index(drop=True) len(uspolls) ###Output _____no_output_____ ###Markdown There are lot of polls here! Let's pick just one, near the end of the polling period (just before the election) and look at the outcomes it implies. ###Code uspolls.tail() ###Output _____no_output_____ ###Markdown We will pick a poll which showed a close race, because it better demonstrates how the margin of error works. ###Code poll = uspolls.iloc[384] poll ###Output _____no_output_____ ###Markdown The poll results and the margin of error define a probability distribution of "true" survey results -- that is, the result that the pollster would get if they could ask every single "likely voter" in the country. This distribution is a "normal" distribution. 
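###Markdown Written out, the model used below treats the true Clinton-Trump margin $D$ as normally distributed around the margin reported by the poll: $$D \sim \mathcal{N}\left(\mathrm{Clinton} - \mathrm{Trump},\ \sigma^2\right)$$ where, as the next cell argues, the standard deviation $\sigma$ of this *difference* is taken to be approximately the poll's reported margin of error.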
###Code # We reduce the problem to the difference between the two poll results, because that's what actually matters mean = poll.Clinton - poll.Trump # Some subtlety in calculating the stddev from the margin of error (MOE) # - MOE is reported as 95% width, so we'd normally divide by 1.96 for standard deviation # - But we want the stddev for a difference between two poll questions which are not independent. # One more vote for Clinton is (almost always) one less vote for Trump. We need to multiply by nearly 2. # - These almost perfectly cancel out, and stddev of the difference is near exactly MOE # See http://abcnews.go.com/images/PollingUnit/MOEFranklin.pdf stddev = poll.margin_of_error # For more general discussion of the MOE see # http://www.pewresearch.org/fact-tank/2016/09/08/understanding-the-margin-of-error-in-election-polls/ ###Output _____no_output_____ ###Markdown Now we can take samples from a normal distribution with this mean and standard deviation, to simulate what the underlying "true" voting pattern would be. For example, ###Code np.random.normal(mean, stddev) ###Output _____no_output_____ ###Markdown To interpret this number, recall that we're simulating the Clinton-Trump difference. So positive means it goes for Clinton, who our poll says is ahead 46-44. Given this, we would expect more of the simulation results to go for Clinton. Let's make 1000 and see what happens. ###Code results = np.random.normal(mean, stddev, 1000) plt.hist(results, bins=20); ###Output _____no_output_____ ###Markdown Sure enough, the center of this distribution is at 2, the lead given by the polls. But many values are negative as well, meaning that Clinton doesn't always win (again, assuming the actual voters splt 46-44, as this poll suggests.)Let's see how often Trump wins, according to this model ###Code (results<=0).mean() ###Output _____no_output_____ ###Markdown So about 24% according to this model. This makes sense becuase the margin of error (2.8%) is pretty wide relative to the difference between the polls (2%) If we run the simulation again, we'll get slightly different results. ###Code results = np.random.normal(mean, stddev, 1000) plt.hist(results, bins=20); (results<=0).mean() ###Output _____no_output_____ ###Markdown The more samples we take, the less variation we'll see in this number. To demonstrate this, let's plot a histogram of the results for various numbers of samples. ###Code plt.hist(np.random.normal(mean, stddev, 100), bins=20, density=True) plt.title('100 samples') plt.show() plt.hist(np.random.normal(mean, stddev, 1000), bins=20, density=True) plt.title('1,000 samples') plt.show() plt.hist(np.random.normal(mean, stddev, 10000), bins=20, density=True) plt.title('10,000 samples') plt.show() plt.hist(np.random.normal(mean, stddev, 100000), bins=20, density=True) plt.title('100,000 samples') plt.show() ###Output _____no_output_____ ###Markdown We can get a reliable win percentage by counting the wins in a large sample: ###Code results = np.random.normal(mean, stddev, 100000) (results<=0).mean() ###Output _____no_output_____ ###Markdown Part 2: The Electoral collegeThis shows how to interpret the uncertainty in a single poll -- at least the uncertainty in the margin of error. There are two major directions to go from here:1) In a real election prediction model, we would combine all the polls according to a weighted average of poll reliability. What "reliability" really means is how well the poll matched (predicted) previous election results. 
We can figure out the optimal combination of poll weights using, for example, linear regression.2) The US uses an electoral college system, where each state contributes a fixed number of votes (out of a total of 538). We definitely need to simulate that to get anything like a reasonable election prediction.So for the next step, let's see how to combine polls in the electoral college.Our first task will be to pick out one poll in each state. We'll use the last dated "Likely Voter" poll. ###Code # Load a CSV of electoral college votes for each state. # Ref: https://www.archives.gov/federal-register/electoral-college/allocation.html states = pd.read_csv('data/states.csv') states.head() # We'll use a little Pandas trick to make merging in the poll data easier: # set the index to the abbreviation states = states.set_index(states.abbr) # And add the columns we'll need: Trump, Clinton, margin_of_error, all initially blank states['Trump'] = np.nan states['Clinton'] = np.nan states['margin_of_error'] = np.nan states.head() # Not all polls have reported margins of error, but we can figure it out if we know the number of people surveyed. # This function salculate the 95% margin of error, using the classic formula. # Ref: https://onlinecourses.science.psu.edu/stat100/node/56/ def calc_moe(sample_size, proportion): return 100 * 1.96 * math.sqrt((proportion*(1-proportion)/sample_size)) # Now we'll load polls for each state and pick one poll for abbr in states.abbr: polls = pd.read_csv('data/' + abbr + '.tsv', sep='\t') polls = polls[polls.sample_subpopulation == 'Likely Voters'] poll = polls.tail(1).squeeze() states.loc[abbr,'Trump'] = poll.Trump states.loc[abbr,'Clinton'] = poll.Clinton # There may be no MOE reported for this poll. If not, calculate it moe = poll.margin_of_error if pd.isnull(moe): proportion = poll.Trump / 100 # or Clinton, will give nearly same result moe = calc_moe(poll.observations, proportion) states.loc[abbr,'margin_of_error'] = moe states.head() ###Output _____no_output_____ ###Markdown Now we simulate an election by drawing a sample from each state election indpendently, then tallying the electoral college votes. Instead of looking at the distribution of Clinton-Trump vote, we'll just look at the distribution of EC votes for Clinton. ###Code def simulate_election(n_times): # Start with 3 votes for DC (for which we have no polls, but went solidly Clinton) clinton_ec_votes = np.zeros(n_times) + 3 # run n_times simulated 'elections' for each state for abbr in states.abbr: mean = states['Clinton'][abbr] - states['Trump'][abbr] stddev = states['margin_of_error'][abbr] results = np.random.normal(mean, stddev, n_times) # Add ec votes for every election where Clinton won this state clinton_ec_votes[results>0] += states['electoral_votes'][abbr] return clinton_ec_votes # Run 10 simulated elections and look at the results simulate_election(10) # Run many, many simulated elections and plot histogram of results results = simulate_election(100000) plt.hist(results, bins=range(220, 420, 10), density=True) plt.axvline(270, color='black', linestyle='dashed'); ###Output _____no_output_____ ###Markdown To get a Clinton win probability out of this, we can calculate the percentage where she receives 270 or more. 
###Code (results>=270).mean() ###Output _____no_output_____ ###Markdown Part 3: Correlated errors ###Code # Let's start with some random numbers mean=0 stddev=1 n=10 np.random.normal(mean, stddev, n) np.random.normal(mean, stddev, n).sum() def plot_distribution_of_sums(make_a_sum_function, n_times): sums = pd.DataFrame(np.zeros(n_times)) sums = sums.applymap(make_a_sum_function) sums.plot(kind='hist', bins=20) print("standard deviation") print(float(sums.std())) # If take the sum of these random numbers 1000 times, what do we get? def uncorrelated_sum(dummy): return np.random.normal(mean, stddev, n).sum() plot_distribution_of_sums(uncorrelated_sum, 10000) ###Output standard deviation 3.097786172059002 ###Markdown But suppose that half of these random numbers were actually the *same* random number... ###Code def correlated_randoms(): numbers = np.random.normal(mean, stddev, n) numbers[6:10] = numbers[5] return numbers correlated_randoms() def correlated_sum(dummmy): return correlated_randoms().sum() plot_distribution_of_sums(correlated_sum, 10000) ###Output standard deviation 5.549257744697144 ###Markdown Part 4: Elections with correlated errorsA poll is meant to tell us how people will vote in an election. But there will be some difference between the last latest polls before an election and the actual election results. According to [this research](http://www.stat.columbia.edu/~gelman/research/unpublished/polling-errors.pdf), US state level presidentail polls in the last three weeks before an election are off by an average of 2%.So we could simply add 2% to our margins of error. But this isn't quite right: when a poll is off in one state, it's often off in other states for similar reasons. The polling error is *correlated.* Not taking into account correlated polling errors were the [biggest reason](https://fivethirtyeight.com/features/election-update-why-our-model-is-more-bullish-than-others-on-trump/) that many 2016 election predictions were so badly off. First, let's see what simply doubling the margin of error on every state does. This increases error, but not *correlated* error. ###Code # Helper function to interpret results def plot_results(results): plt.hist(results, bins=range(150, 500, 10), density=True) plt.axvline(270, color='black', linestyle='dashed'); print("Clinton win probability: " + str((results>=270).mean())) states.margin_of_error *= 2 results = simulate_election(100000) states.margin_of_error /= 2 plot_results(results) ###Output Clinton win probability: 0.94563 ###Markdown Instead, we need to add polling error that is similar between states. The simplest way to do this is just to add the same polling error to every state (perfectly correlated across all states!) 
###Code def simulate_election_national_error(n_times, polling_error_stddev): # Start with 3 votes for DC (for which we have no polls, but went solidly Clinton) clinton_ec_votes = np.zeros(n_times) + 3 # For each "election", add in the same random polling error for every state polling_errors = np.random.normal(0, polling_error_stddev, n_times) # run n_times simulated 'elections' for each state for abbr in states.abbr: mean = states['Clinton'][abbr] - states['Trump'][abbr] stddev = states['margin_of_error'][abbr] results = np.random.normal(mean, stddev, n_times) results += polling_errors # Add ec votes for every election where Clinton won this state clinton_ec_votes[results>0] += states['electoral_votes'][abbr] return clinton_ec_votes # What does the distribution of electoral college outcomes look like with 2% correlated national polling error? national_polling_error_stddev = 2 results = simulate_election_national_error(100000, national_polling_error_stddev) plot_results(results) ###Output Clinton win probability: 0.85607
dahu/mpi/saturation.ipynb
###Markdown Network saturation of a Grid'5000 clusterThe goal here is to find the `bb_bw` term of the Simgrid platform file. ###Code import io import zipfile import os import pandas from plotnine import * import plotnine plotnine.options.figure_size = (12, 8) import yaml import warnings import re warnings.simplefilter(action='ignore') def get_yaml(archive_name, yaml_name): archive = zipfile.ZipFile(archive_name) return yaml.load(io.BytesIO(archive.read(yaml_name))) def read_csv(archive_name, file_name, columns): archive = zipfile.ZipFile(archive_name) res = pandas.read_csv(io.BytesIO(archive.read(file_name)), names=columns) res['archive_name'] = archive_name res['file_name'] = file_name return res def read_result(archive_name, file_name): res = read_csv(archive_name, file_name, columns=['rank', 'operation', 'size', 'start', 'duration']) res['start'] *= 1e-3 res['duration'] *= 1e-3 res['stop'] = res['start'] + res['duration'] return res def read_all_result(archive_name, dir_name): return pandas.concat([ read_result(archive_name, os.path.join(dir_name, 'load_alltoall.csv')), read_result(archive_name, os.path.join(dir_name, 'load_send.csv')), read_result(archive_name, os.path.join(dir_name, 'load_sendrecv_diff.csv')), read_result(archive_name, os.path.join(dir_name, 'load_sendrecv_same.csv')), ]) df = read_all_result('saturation/1/grenoble_2020-07-03_1938926.zip', 'exp_monocore') print(len(df)) df.head() ###Output 5950 ###Markdown Overview of the experiment (Gantt chart) ###Code df['ymin'] = df['rank'] - 0.5 df['ymax'] = df['rank'] + 0.5 (ggplot(df) + aes(xmin='start', xmax='stop', ymin='ymin', ymax='ymax', fill='file_name', color='file_name') + geom_rect() + theme_bw() + xlab('Time (seconds)') + ylab('MPI rank') ) df['exp_id'] = -1 for fname in df['file_name'].unique(): total_exp = len(df[(df['file_name'] == fname) & (df['rank'] == 0)]) print(total_exp) for rank in df['rank'].unique(): thisrank_exp = len(df[(df['file_name'] == fname) & (df['rank'] == rank)]) df.loc[(df['file_name'] == fname) & (df['rank'] == rank), 'exp_id'] = range(total_exp-thisrank_exp, total_exp) nb_nodes = df.groupby(['file_name', 'exp_id'])[['operation']].count() nb_nodes df = df.set_index(['file_name', 'exp_id']).join(nb_nodes, rsuffix='_count').reset_index() df.head() ###Output _____no_output_____ ###Markdown Evolution of the node bandwidth for each part of the experiment ###Code df['bw'] = df['size'] / df['duration'] * 8 * 1e-9 grouped = df.groupby(['file_name', 'exp_id'])[['bw']].agg(['count', 'sum']) grouped.columns = grouped.columns.droplevel() grouped = grouped.reset_index() grouped['bw'] = grouped['sum'] grouped.loc[grouped['file_name'].str.contains('alltoall'), 'bw'] *= grouped['count'] # this is an alltoall, so each node sends its buffer N times, so we have to multiply the bandwidth by N grouped['theoretical_bw'] = 100 * grouped['count'] grouped.head() def do_plot(df, filename): return (ggplot(df[df['file_name'] == filename]) + aes(x='count', y='bw') + geom_point() + geom_line(aes(y='theoretical_bw'), linetype='dashed') + theme_bw() + ylab('Bandwidth (Gbps)') + xlab('Number of nodes') + scale_x_continuous(breaks=df['count'].unique()) + expand_limits(y=0) + ggtitle(filename) ) do_plot(grouped, 'exp_monocore/load_send.csv') do_plot(grouped, 'exp_monocore/load_sendrecv_diff.csv') do_plot(grouped, 'exp_monocore/load_sendrecv_same.csv') do_plot(grouped, 'exp_monocore/load_alltoall.csv') ###Output _____no_output_____
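###Markdown
One possible way to turn these curves into a `bb_bw` estimate — a sketch only, under the assumption that the plateau of the aggregate bandwidth reflects the backbone limit and that the alltoall experiment is the one that saturates it — is to take the maximum aggregate bandwidth observed per experiment.
###Code
# Maximum aggregate bandwidth reached in each experiment (Gbps).
saturation = grouped.groupby('file_name')['bw'].max()
print(saturation)

# Assumption: use the alltoall plateau as a rough estimate of the backbone bandwidth.
bb_bw_estimate = saturation['exp_monocore/load_alltoall.csv']
print("bb_bw estimate: {:.1f} Gbps".format(bb_bw_estimate))
###Output
_____no_output_____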
90_workshops/202104_egu_short_course/functions.ipynb
###Markdown LTPy functions This notebook lists all `functions` that are defined and used throughout the `LTPy course`.The following functions are listed:**[Data loading and re-shaping functions](load_reshape)*** [generate_xr_from_1D_vec](generate_xr_from_1D_vec)* [load_l2_data_xr](load_l2_data_xr)* [generate_geographical_subset](generate_geographical_subset)* [generate_masked_array](generate_masked_array)* [load_masked_l2_da](load_masked_l2_da)* [select_channels_for_rgb](rgb_channels)* [normalize](normalize)* [slstr_frp_gridding](slstr_frp_gridding)* [df_subset](df_subset)**[Data visualization functions](visualization)*** [visualize_scatter](visualize_scatter)* [visualize_pcolormesh](visualize_pcolormesh)* [visualize_s3_pcolormesh](visualize_s3_pcolormesh)* [visualize_s3_frp](visualize_s3_frp)* [viusalize_s3_aod](visualize_s3_aod) Load required libraries ###Code import os from matplotlib import pyplot as plt import xarray as xr from netCDF4 import Dataset import numpy as np import glob from matplotlib import pyplot as plt import matplotlib.colors from matplotlib.colors import LogNorm import cartopy.crs as ccrs from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER import cartopy.feature as cfeature import matplotlib.cm as cm import warnings warnings.simplefilter(action = "ignore", category = RuntimeWarning) warnings.simplefilter(action = "ignore", category = FutureWarning) ###Output _____no_output_____ ###Markdown Data loading and re-shaping functions `generate_xr_from_1D_vec` ###Code def generate_xr_from_1D_vec(file, lat_path, lon_path, variable, parameter_name, longname, no_of_dims, unit): """ Takes a netCDF4.Dataset or xarray.DataArray object and returns a xarray.DataArray object with latitude / longitude information as coordinate information Parameters: file(netCDF4 data file or xarray.Dataset): AC SAF or IASI Level 2 data file, loaded a netCDF4.Dataset or xarray.DataArray lat_path(str): internal path of the data file to the latitude information, e.g. 'GEOLOCATION/LatitudeCentre' lon_path(str): internal path of the data file to the longitude information, e.g. 'GEOLOCATION/LongitudeCentre' variable(array): extracted variable of interested parameter_name(str): parameter name, preferably extracted from the data file longname(str): Long name of the parameter, preferably extracted from the data file no_of_dims(int): Define the number of dimensions of your input array unit(str): Unit of the parameter, preferably extracted from the data file Returns: 1 or 2-dimensional (depending on the given number of dimensions) xarray.DataArray with latitude / longitude information as coordinate information """ latitude = file[lat_path] longitude = file[lon_path] param = variable if (no_of_dims==1): param_da = xr.DataArray( param[:], dims=('ground_pixel'), coords={ 'latitude': ('ground_pixel', latitude[:]), 'longitude': ('ground_pixel', longitude[:]) }, attrs={'long_name': longname, 'units': unit}, name=parameter_name ) else: param_da = xr.DataArray( param[:], dims=["x","y"], coords={ 'latitude':(['x','y'],latitude[:]), 'longitude':(['x','y'],longitude[:]) }, attrs={'long_name': longname, 'units': unit}, name=parameter_name ) return param_da ###Output _____no_output_____ ###Markdown `load_l2_data_xr` ###Code def load_l2_data_xr(directory, internal_filepath, parameter, lat_path, lon_path, no_of_dims, paramname, unit, longname): """ Loads a Metop-A/B Level 2 dataset in HDF format and returns a xarray.DataArray with all the ground pixels of all directory files. 
Uses function 'generate_xr_from_1D_vec' to generate the xarray.DataArray. Parameters: directory(str): directory where the HDF files are stored internal_filepath(str): internal path of the data file that is of interest, e.g. TOTAL_COLUMNS parameter(str): paramter that is of interest, e.g. NO2 lat_path(str): name of latitude variable lon_path(str): name of longitude variable no_of_dims(int): number of dimensions of input array paramname(str): name of parameter unit(str): unit of the parameter, preferably taken from the data file longname(str): longname of the parameter, preferably taken from the data file Returns: 1 or 2-dimensional xarray.DataArray with latitude / longitude information as coordinate information """ fileList = glob.glob(directory+'/*') datasets = [] for i in fileList: tmp=Dataset(i) param=tmp[internal_filepath+'/'+parameter] da_tmp= generate_xr_from_1D_vec(tmp,lat_path, lon_path, param, paramname, longname, no_of_dims, unit) if(no_of_dims==1): datasets.append(da_tmp) else: da_tmp_st = da_tmp.stack(ground_pixel=('x','y')) datasets.append(da_tmp_st) return xr.concat(datasets, dim='ground_pixel') ###Output _____no_output_____ ###Markdown `generate_geographical_subset` ###Code def generate_geographical_subset(xarray, latmin, latmax, lonmin, lonmax, reassign=False): """ Generates a geographical subset of a xarray.DataArray and if kwarg reassign=True, shifts the longitude grid from a 0-360 to a -180 to 180 deg grid. Parameters: xarray(xarray.DataArray): a xarray DataArray with latitude and longitude coordinates latmin, latmax, lonmin, lonmax(int): lat/lon boundaries of the geographical subset reassign(boolean): default is False Returns: Geographical subset of a xarray.DataArray. """ if(reassign): xarray = xarray.assign_coords(longitude=(((xarray.longitude + 180) % 360) - 180)) return xarray.where((xarray.latitude < latmax) & (xarray.latitude > latmin) & (xarray.longitude < lonmax) & (xarray.longitude > lonmin),drop=True) ###Output _____no_output_____ ###Markdown `generate_masked_array` ###Code def generate_masked_array(xarray, mask, threshold, operator, drop=True): """ Applies a mask (e.g. a cloud mask) onto a given xarray.DataArray, based on a given threshold and operator. Parameters: xarray(xarray DataArray): a three-dimensional xarray.DataArray object mask(xarray DataArray): 1-dimensional xarray.DataArray, e.g. cloud fraction values threshold(float): any number specifying the threshold operator(str): operator how to mask the array, e.g. '<', '>' or '!=' drop(boolean): default is True Returns: Masked xarray.DataArray with NaN values dropped, if kwarg drop equals True """ if(operator=='<'): cloud_mask = xr.where(mask < threshold, 1, 0) #Generate cloud mask with value 1 for the pixels we want to keep elif(operator=='!='): cloud_mask = xr.where(mask != threshold, 1, 0) elif(operator=='>'): cloud_mask = xr.where(mask > threshold, 1, 0) else: cloud_mask = xr.where(mask == threshold, 1, 0) xarray_masked = xr.where(cloud_mask ==1, xarray, np.nan) #Apply mask onto the DataArray xarray_masked.attrs = xarray.attrs #Set DataArray attributes if(drop): return xarray_masked[~np.isnan(xarray_masked)] #Return masked DataArray else: return xarray_masked ###Output _____no_output_____ ###Markdown `load_masked_l2_da` ###Code def load_masked_l2_da(directory, internal_filepath, parameter, lat_path, lon_path, no_of_dims, paramname, longname, unit, threshold, operator): """ Loads a Metop-A/B Gome-2 Level 2 data and cloud fraction information and returns a masked xarray.DataArray. 
It combines the functions `load_l2_data_xr` and `generate_masked_array`. Parameters: directory(str): Path to directory with Level 2 data files. internal_filepath(str): Internal file path under which the parameters are strored, e.g. TOTAL_COLUMNS parameter(str): atmospheric parameter, e.g. NO2 lat_path(str): name of the latitude variable within the file lon_path(str): path to the longitude variable within the file no_of_dims(int): specify the number of dimensions, 1 or 2 paramname(str): parameter name longname(str): long name of the parameter that shall be used unit(str): unit of the parameter threshold(float): any number specifying the threshold operator(str): operator how to mask the xarray.DataArray, e.g. '<', '>' or '!=' Returns: Masked xarray.DataArray keeping NaN values (drop=False) """ da = load_l2_data_xr(directory, internal_filepath, parameter, lat_path, lon_path, no_of_dims, paramname, unit, longname) cloud_fraction = load_l2_data_xr(directory, 'CLOUD_PROPERTIES', 'CloudFraction', lat_path, lon_path, no_of_dims, 'CloudFraction', unit='-', longname='Cloud Fraction') return generate_masked_array(da, cloud_fraction, threshold, operator, drop=False) ###Output _____no_output_____ ###Markdown `select_channels_for_rgb` ###Code def select_channels_for_rgb(xarray, red_channel, green_channel, blue_channel): """ Selects the channels / bands of a multi-dimensional xarray for red, green and blue composite based on Sentinel-3 OLCI Level 1B data. Parameters: xarray(xarray.Dataset): xarray.Dataset object that stores the different channels / bands. red_channel(str): Name of red channel to be selected green_channel(str): Name of green channel to be selected blue_channel(str): Name of blue channel to be selected Returns: Three xarray DataArray objects with selected channels / bands """ return xarray[red_channel], xarray[green_channel], xarray[blue_channel] ###Output _____no_output_____ ###Markdown `normalize` ###Code def normalize(array): """ Normalizes a numpy array / xarray.DataArray object to values between 0 and 1. Parameters: xarray(numpy array or xarray.DataArray): xarray.DataArray or numpy array object whose values should be normalized. Returns: xarray.DataArray with normalized values """ array_min, array_max = array.min(), array.max() return ((array - array_min)/(array_max - array_min)) ###Output _____no_output_____ ###Markdown `slstr_frp_gridding` ###Code def slstr_frp_gridding(parameter_array, parameter, lat_min, lat_max, lon_min, lon_max, sampling_lat_FRP_grid, sampling_lon_FRP_grid, n_fire, lat_frp, lon_frp, **kwargs): """ Produces gridded data of Sentinel-3 SLSTR NRT Fire Radiative Power Data Parameters: parameter_array(xarray.DataArray): xarray.DataArray with extracted data variable of fire occurences parameter(str): NRT S3 FRP channel - either `mwir`, `swir` or `swir_nosaa` lat_min, lat_max, lon_min, lon_max(float): Floats of geographical bounding box sampling_lat_FRP_grid, sampling_long_FRP_grid(float): Float of grid cell size n_fire(int): Number of fire occurences lat_frp(xarray.DataArray): Latitude values of occurred fire events lon_frp(xarray.DataArray): Longitude values of occurred fire events **kwargs: additional keyword arguments to be added. Required for parameter `swir_nosaa`, where the function requires the xarray.DataArray with the SAA FLAG information. 
Returns: the gridded xarray.Data Array and latitude and longitude grid information """ n_lat = int( (np.float32(lat_max) - np.float32(lat_min)) / sampling_lat_FRP_grid ) + 1 # Number of rows per latitude sampling n_lon = int( (np.float32(lon_max) - np.float32(lon_min)) / sampling_lon_FRP_grid ) + 1 # Number of lines per longitude sampling slstr_frp_gridded = np.zeros( [n_lat, n_lon], dtype='float32' ) - 9999. lat_grid = np.zeros( [n_lat, n_lon], dtype='float32' ) - 9999. lon_grid = np.zeros( [n_lat, n_lon], dtype='float32' ) - 9999. if (n_fire >= 0): # Loop on i_lat: begins for i_lat in range(n_lat): # Loop on i_lon: begins for i_lon in range(n_lon): lat_grid[i_lat, i_lon] = lat_min + np.float32(i_lat) * sampling_lat_FRP_grid + sampling_lat_FRP_grid / 2. lon_grid[i_lat, i_lon] = lon_min + np.float32(i_lon) * sampling_lon_FRP_grid + sampling_lon_FRP_grid / 2. # Gridded SLSTR FRP MWIR Night - All days if(parameter=='swir_nosaa'): FLAG_FRP_SWIR_SAA_nc = kwargs.get('flag', None) mask_grid = np.where( (lat_frp[:] >= lat_min + np.float32(i_lat) * sampling_lat_FRP_grid) & (lat_frp[:] < lat_min + np.float32(i_lat+1) * sampling_lat_FRP_grid) & (lon_frp[:] >= lon_min + np.float32(i_lon) * sampling_lon_FRP_grid) & (lon_frp[:] < lon_min + np.float32(i_lon+1) * sampling_lon_FRP_grid) & (parameter_array[:] != -1.) & (FLAG_FRP_SWIR_SAA_nc[:] == 0), False, True) else: mask_grid = np.where( (lat_frp[:] >= lat_min + np.float32(i_lat) * sampling_lat_FRP_grid) & (lat_frp[:] < lat_min + np.float32(i_lat+1) * sampling_lat_FRP_grid) & (lon_frp[:] >= lon_min + np.float32(i_lon) * sampling_lon_FRP_grid) & (lon_frp[:] < lon_min + np.float32(i_lon+1) * sampling_lon_FRP_grid) & (parameter_array[:] != -1.), False, True) masked_slstr_frp_grid = np.ma.array(parameter_array[:], mask=mask_grid) if len(masked_slstr_frp_grid.compressed()) != 0: slstr_frp_gridded[i_lat, i_lon] = np.sum(masked_slstr_frp_grid.compressed()) return slstr_frp_gridded, lat_grid, lon_grid ###Output _____no_output_____ ###Markdown `df_subset` ###Code def df_subset(df,low_bound1, high_bound1, low_bound2, high_bound2): """ Creates a subset of a pandas.DataFrame object with time-series information Parameters: df(pandas.DataFrame): pandas.DataFrame with time-series information low_bound1(str): dateTime string, e.g. '2018-11-30' high_bound1(str): dateTime string, e.g. '2018-12-01' low_bound2(str): dateTime string, e.g. '2019-12-30' high_bound2(str): dateTime string, e.g. '2020-01-15' Returns: the subsetted time-series as pandas.DataFrame object """ return df[(df.index>low_bound1) & (df.index<high_bound1)], df[(df.index>low_bound2) & (df.index<high_bound2)] ###Output _____no_output_____ ###Markdown Data visualization functions `visualize_scatter` ###Code def visualize_scatter(xr_dataarray, conversion_factor, projection, vmin, vmax, point_size, color_scale, unit, title): """ Visualizes a xarray.DataArray in a given projection using matplotlib's scatter function. Parameters: xr_dataarray(xarray.DataArray): a one-dimensional xarray DataArray object with latitude and longitude information as coordinates conversion_factor(int): any number to convert the DataArray values projection(str): choose one of cartopy's projection, e.g. ccrs.PlateCarree() vmin(int): minimum number on visualisation legend vmax(int): maximum number on visualisation legend point_size(int): size of marker, e.g. 
5 color_scale(str): string taken from matplotlib's color ramp reference unit(str): define the unit to be added to the color bar title(str): define title of the plot """ fig, ax = plt.subplots(figsize=(40, 10)) ax = plt.axes(projection=projection) ax.coastlines() if (projection==ccrs.PlateCarree()): gl = ax.gridlines(draw_labels=True, linestyle='--') gl.top_labels=False gl.right_labels=False gl.xformatter=LONGITUDE_FORMATTER gl.yformatter=LATITUDE_FORMATTER gl.xlabel_style={'size':14} gl.ylabel_style={'size':14} # plot pixel positions img = ax.scatter( xr_dataarray.longitude.data, xr_dataarray.latitude.data, c=xr_dataarray.data*conversion_factor, cmap=plt.cm.get_cmap(color_scale), marker='o', s=point_size, transform=ccrs.PlateCarree(), vmin=vmin, vmax=vmax ) plt.xticks(fontsize=16) plt.yticks(fontsize=16) plt.xlabel("Longitude", fontsize=16) plt.ylabel("Latitude", fontsize=16) cbar = fig.colorbar(img, ax=ax, orientation='horizontal', fraction=0.04, pad=0.1) cbar.set_label(unit, fontsize=16) cbar.ax.tick_params(labelsize=14) ax.set_title(title, fontsize=20, pad=20.0) plt.show() ###Output _____no_output_____ ###Markdown `visualize_pcolormesh` ###Code def visualize_pcolormesh(data_array, longitude, latitude, projection, color_scale, unit, long_name, vmin, vmax, set_global=True, lonmin=-180, lonmax=180, latmin=-90, latmax=90): """ Visualizes a xarray.DataArray with matplotlib's pcolormesh function. Parameters: data_array(xarray.DataArray): xarray.DataArray holding the data values longitude(xarray.DataArray): xarray.DataArray holding the longitude values latitude(xarray.DataArray): xarray.DataArray holding the latitude values projection(str): a projection provided by the cartopy library, e.g. ccrs.PlateCarree() color_scale(str): string taken from matplotlib's color ramp reference unit(str): the unit of the parameter, taken from the NetCDF file if possible long_name(str): long name of the parameter, taken from the NetCDF file if possible vmin(int): minimum number on visualisation legend vmax(int): maximum number on visualisation legend set_global(boolean): optional kwarg, default is True lonmin,lonmax,latmin,latmax(float): optional kwarg, set geographic extent is set_global kwarg is set to False """ fig=plt.figure(figsize=(20, 10)) ax = plt.axes(projection=projection) # fig, ax = plt.subplots(nrows=1, ncols=1,figsize=(20,10),subplot_kw=dict(projection=projection)) img = ax.pcolormesh(longitude, latitude, data_array, cmap=plt.get_cmap(color_scale), transform=ccrs.PlateCarree(), vmin=vmin, vmax=vmax, shading='auto') ax.add_feature(cfeature.BORDERS, edgecolor='black', linewidth=1) ax.add_feature(cfeature.COASTLINE, edgecolor='black', linewidth=1) if (projection==ccrs.PlateCarree()): ax.set_extent([lonmin, lonmax, latmin, latmax], projection) gl = ax.gridlines(draw_labels=True, linestyle='--') gl.top_labels=False gl.right_labels=False gl.xformatter=LONGITUDE_FORMATTER gl.yformatter=LATITUDE_FORMATTER gl.xlabel_style={'size':14} gl.ylabel_style={'size':14} if(set_global): ax.set_global() ax.gridlines() cbar = fig.colorbar(img, ax=ax, orientation='horizontal', fraction=0.04, pad=0.1) cbar.set_label(unit, fontsize=16) cbar.ax.tick_params(labelsize=14) ax.set_title(long_name, fontsize=20, pad=20.0) # plt.show() return fig, ax ###Output _____no_output_____ ###Markdown `visualize_s3_pcolormesh` ###Code def visualize_s3_pcolormesh(color_array, array, latitude, longitude, title): """ Visualizes a xarray.DataArray or numpy.MaskedArray (Sentinel-3 OLCI Level 1 data) with matplotlib's pcolormesh function as RGB 
image. Parameters: color_array (numpy.MaskedArray): any numpy.MaskedArray, e.g. loaded with the NetCDF library and the Dataset function array(numpy.Array): numpy.Array to get dimensions of the resulting plot longitude (numpy.Array): array with longitude values latitude (numpy.Array) : array with latitude values title (str): title of the resulting plot """ fig=plt.figure(figsize=(20, 12)) ax=plt.axes(projection=ccrs.Mercator()) ax.coastlines() gl = ax.gridlines(draw_labels=True, linestyle='--') gl.top_labels=False gl.right_labels=False gl.xformatter=LONGITUDE_FORMATTER gl.yformatter=LATITUDE_FORMATTER gl.xlabel_style={'size':14} gl.ylabel_style={'size':14} img1 = plt.pcolormesh(longitude, latitude, array*np.nan, color=color_array, clip_on = True, edgecolors=None, zorder=0, transform=ccrs.PlateCarree()) ax.set_title(title, fontsize=20, pad=20.0) plt.show() ###Output _____no_output_____ ###Markdown `visualize_s3_frp` ###Code def visualize_s3_frp(data, lat, lon, unit, longname, textstr_1, textstr_2, vmax): """ Visualizes a numpy.Array (Sentinel-3 SLSTR NRT FRP data) with matplotlib's pcolormesh function and adds two text boxes to the plot. Parameters: data(numpy.MaskedArray): any numpy MaskedArray, e.g. loaded with the NetCDF library and the Dataset function lat(numpy.Array): array with longitude values lon(numpy.Array) : array with latitude values unit(str): unit of the resulting plot longname(str): Longname to be used as title textstr_1(str): String to fill box 1 textstr_2(str): String to fill box 2 vmax(float): Maximum value of color scale """ fig=plt.figure(figsize=(20, 15)) ax = plt.axes(projection=ccrs.PlateCarree()) img = plt.pcolormesh(lon, lat, data, cmap=cm.autumn_r, transform=ccrs.PlateCarree(), vmin=0, vmax=vmax) ax.add_feature(cfeature.BORDERS, edgecolor='black', linewidth=1) ax.add_feature(cfeature.COASTLINE, edgecolor='black', linewidth=1) gl = ax.gridlines(draw_labels=True, linestyle='--') gl.bottom_labels=False gl.right_labels=False gl.xformatter=LONGITUDE_FORMATTER gl.yformatter=LATITUDE_FORMATTER gl.xlabel_style={'size':14} gl.ylabel_style={'size':14} cbar = fig.colorbar(img, ax=ax, orientation='horizontal', fraction=0.029, pad=0.025) cbar.set_label(unit, fontsize=16) cbar.ax.tick_params(labelsize=14) ax.set_title(longname, fontsize=20, pad=40.0) props = dict(boxstyle='square', facecolor='white', alpha=0.5) # place a text box on the right side of the plot ax.text(1.1, 0.9, textstr_1, transform=ax.transAxes, fontsize=16, verticalalignment='top', bbox=props) props = dict(boxstyle='square', facecolor='white', alpha=0.5) # place a text box in upper left in axes coords ax.text(1.1, 0.85, textstr_2, transform=ax.transAxes, fontsize=16, verticalalignment='top', bbox=props) plt.show() ###Output _____no_output_____ ###Markdown `visualize_s3_aod` ###Code def visualize_s3_aod(aod_ocean, aod_land, latitude, longitude, title, unit, vmin, vmax, color_scale, projection): """ Visualizes two xarray.DataArrays from the Sentinel-3 SLSTR NRT AOD dataset onto the same plot with matplotlib's pcolormesh function. 
Parameters: aod_ocean(xarray.DataArray): xarray.DataArray with the Aerosol Optical Depth for ocean values aod_land(xarray.DataArray): xarray.DataArray with Aerosol Optical Depth for land values longitude(xarray.DataArray): xarray.DataArray holding the longitude values latitude(xarray.DataArray): xarray.DataArray holding the latitude values title(str): title of the resulting plot unit(str): unit of the resulting plot vmin(int): minimum number on visualisation legend vmax(int): maximum number on visualisation legend color_scale(str): string taken from matplotlib's color ramp reference projection(str): a projection provided by the cartopy library, e.g. ccrs.PlateCarree() """ fig=plt.figure(figsize=(12, 12)) ax=plt.axes(projection=projection) ax.coastlines(linewidth=1.5, linestyle='solid', color='k', zorder=10) gl = ax.gridlines(draw_labels=True, linestyle='--') gl.top_labels=False gl.right_labels=False gl.xformatter=LONGITUDE_FORMATTER gl.yformatter=LATITUDE_FORMATTER gl.xlabel_style={'size':12} gl.ylabel_style={'size':12} img1 = plt.pcolormesh(longitude, latitude, aod_ocean, transform=ccrs.PlateCarree(), vmin=vmin, vmax=vmax, cmap=color_scale) img2 = plt.pcolormesh(longitude, latitude, aod_land, transform=ccrs.PlateCarree(), vmin=vmin, vmax=vmax, cmap=color_scale) ax.set_title(title, fontsize=20, pad=20.0) cbar = fig.colorbar(img1, ax=ax, orientation='vertical', fraction=0.04, pad=0.05) cbar.set_label(unit, fontsize=16) cbar.ax.tick_params(labelsize=14) plt.show() ###Output _____no_output_____
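###Markdown
The cells above only define the helper functions. As a quick check that the data-shaping helpers behave as expected, here is a minimal, self-contained usage sketch on synthetic data (no real Metop or Sentinel file is assumed): it builds a small one-dimensional `xarray.DataArray` with latitude / longitude coordinates, subsets it geographically with `generate_geographical_subset` and rescales it with `normalize`.
###Code
# Synthetic 1-D "ground pixel" data, only for demonstrating the helpers above.
n = 1000
dummy = xr.DataArray(
    np.random.rand(n) * 50,
    dims=('ground_pixel'),
    coords={
        'latitude': ('ground_pixel', np.random.uniform(-90, 90, n)),
        'longitude': ('ground_pixel', np.random.uniform(0, 360, n))
    },
    name='dummy_parameter')

# Subset over Europe and shift the longitude grid to a -180 / 180 deg grid.
subset = generate_geographical_subset(dummy, latmin=30, latmax=70,
                                      lonmin=-20, lonmax=40, reassign=True)

# Normalize the remaining values to the range 0 to 1.
normalized = normalize(subset)
print(subset.sizes, float(normalized.min()), float(normalized.max()))
###Output
_____no_output_____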
KNearestNeighburs.ipynb
###Markdown BoW ###Code count_vect = CountVectorizer() X_tra = count_vect.fit_transform(X_train) X_tes = count_vect.transform(X_test) #print(X_train.shape) #print(X_test.shape) kn = list(range(0,40)) neighbors = list(filter(lambda x: x % 2 != 0, kn)) cv_scores1 = [] for k in neighbors: knn1 = KNeighborsClassifier(n_neighbors=k, n_jobs=1, algorithm='brute') knn1.fit(X_tra, y_train) scores1 = cross_val_score(knn1, X_tra, y_train, cv=10, scoring='f1_micro') cv_scores1.append(scores1.mean()) #mse = [1-x for x in cv_scores1] optimal_k1 = neighbors[cv_scores1.index(max(cv_scores1))] print('\nThe optimal number of neighbors is %d.' % optimal_k1) plt.figure(figsize=(10,6)) plt.plot(list(filter(lambda x: x % 2 != 0, kn)),cv_scores1,color='blue', linestyle='dashed', marker='o', markerfacecolor='red', markersize=10) plt.title('f1 score vs. K Value') plt.xlabel('K') plt.ylabel('f1 score') plt.show() #print("the misclassification error for each k value is : ", np.round(mse,3)) knn = KNeighborsClassifier(n_neighbors=optimal_k1) knn.fit(X_tra,y_train) y_pred = knn.predict(X_tes) print("k=", optimal_k1,"\n") print("Accuracy on test set: %0.3f%%"%(accuracy_score(y_test, y_pred)*100)) print("Precision on test set: %0.3f"%(precision_score(y_test, y_pred, pos_label='positive'))) print("Recall on test set: %0.3f"%(recall_score(y_test, y_pred, pos_label='positive'))) print("F1-Score on test set: %0.3f"%(f1_score(y_test, y_pred, pos_label='positive'))) print("Confusion Matrix(test set):\n") skplt.plot_confusion_matrix(y_test, y_pred) #cnf = pd.DataFrame(confusion_matrix(y_test, y_pred), range(2),range(2)) #sns.set(font_scale=1.4)#for label size #sns.heatmap(cnf, annot=True,annot_kws={"size": 16}, fmt='g') ###Output k= 7 Accuracy on test set: 84.693% Precision on test set: 0.851 Recall on test set: 0.992 F1-Score on test set: 0.916 Confusion Matrix(test set): ###Markdown KD Tree ###Code kn = list(range(0,40)) neighbors = list(filter(lambda x: x % 2 != 0, kn)) svd = TruncatedSVD(n_components=100) X_tra_svd = svd.fit_transform(X_tra) cv_scores1 = [] for k in neighbors: knn1 = KNeighborsClassifier(n_neighbors=k, n_jobs=1, algorithm='kd_tree') knn1.fit(X_tra, y_train) scores1 = cross_val_score(knn1, X_tra_svd, y_train, cv=10, scoring='f1_micro') cv_scores1.append(scores1.mean()) #mse = [1-x for x in cv_scores1] optimal_k1 = neighbors[cv_scores1.index(max(cv_scores1))] print('\nThe optimal number of neighbors is %d.' % optimal_k1) plt.figure(figsize=(10,6)) plt.plot(list(filter(lambda x: x % 2 != 0, kn)),cv_scores1,color='blue', linestyle='dashed', marker='o', markerfacecolor='red', markersize=10) plt.title('f1 score vs. 
K Value') plt.xlabel('K') plt.ylabel('f1 score') plt.show() knn = KNeighborsClassifier(n_neighbors=optimal_k1, algorithm='kd_tree') X_te = svd.fit_transform(X_tes) knn.fit(X_tra_svd,y_train) y_pred = knn.predict(X_te) print("Accuracy on test set: %0.3f%%"%(accuracy_score(y_test, y_pred)*100)) print("Precision on test set: %0.3f"%(precision_score(y_test, y_pred, pos_label='positive'))) print("Recall on test set: %0.3f"%(recall_score(y_test, y_pred, pos_label='positive'))) print("F1-Score on test set: %0.3f"%(f1_score(y_test, y_pred, pos_label='positive'))) print("Confusion Matrix(test set):\n") skplt.plot_confusion_matrix(y_test, y_pred) ###Output Accuracy on test set: 83.668% Precision on test set: 0.845 Recall on test set: 0.987 F1-Score on test set: 0.911 Confusion Matrix(test set): ###Markdown tf-idf ###Code tfidf_vect = TfidfVectorizer() X_train_tfidf = tfidf_vect.fit_transform(X_train) X_test_tfidf = tfidf_vect.transform(X_test) ###Output _____no_output_____ ###Markdown Brute force ###Code kn = list(range(0,40)) neighbors = list(filter(lambda x: x % 2 != 0, kn)) cv_scores1 = [] for k in neighbors: knn1 = KNeighborsClassifier(n_neighbors=k, n_jobs=-1, algorithm='brute') knn1.fit(X_train_tfidf, y_train) scores1 = cross_val_score(knn1, X_train_tfidf, y_train, cv=10, scoring='f1_micro') cv_scores1.append(scores1.mean()) optimal_k1 = neighbors[cv_scores1.index(max(cv_scores1))] print('\nThe optimal number of neighbors is %d.' % optimal_k1) plt.figure(figsize=(10,6)) plt.plot(list(filter(lambda x: x % 2 != 0, kn)),cv_scores1,color='blue', linestyle='dashed', marker='o', markerfacecolor='red', markersize=10) plt.title('f1 score vs. K Value') plt.xlabel('K') plt.ylabel('f1 score') plt.show() knn = KNeighborsClassifier(n_neighbors= optimal_k1, algorithm = 'brute') knn.fit(X_train_tfidf,y_train) y_pred = knn.predict(X_test_tfidf) print("Accuracy on test set: %0.3f%%"%(accuracy_score(y_test, y_pred)*100)) print("Precision on test set: %0.3f"%(precision_score(y_test, y_pred, pos_label='positive'))) print("Recall on test set: %0.3f"%(recall_score(y_test, y_pred, pos_label='positive'))) print("F1-Score on test set: %0.3f"%(f1_score(y_test, y_pred, pos_label='positive'))) print("Confusion Matrix(test set):\n") skplt.plot_confusion_matrix(y_test, y_pred) ###Output Accuracy on test set: 85.485% Precision on test set: 0.860 Recall on test set: 0.988 F1-Score on test set: 0.920 Confusion Matrix(test set): ###Markdown KD tree ###Code kn = list(range(0,40)) neighbors = list(filter(lambda x: x % 2 != 0, kn)) X_train_tfidf_svd = svd.fit_transform(X_train_tfidf) cv_scores1 = [] for k in neighbors: knn1 = KNeighborsClassifier(n_neighbors=k, n_jobs=-1, algorithm='kd_tree') knn1.fit(X_train_tfidf_svd, y_train) scores1 = cross_val_score(knn1, X_train_tfidf, y_train, cv=10, scoring='f1_micro') cv_scores1.append(scores1.mean()) optimal_k1 = neighbors[cv_scores1.index(max(cv_scores1))] print('\nThe optimal number of neighbors is %d.' % optimal_k1) plt.figure(figsize=(10,6)) plt.plot(list(filter(lambda x: x % 2 != 0, kn)),cv_scores1,color='blue', linestyle='dashed', marker='o', markerfacecolor='red', markersize=10) plt.title('f1 score vs. 
K Value') plt.xlabel('K') plt.ylabel('f1 score') plt.show() knn = KNeighborsClassifier(n_neighbors=optimal_k1, algorithm='kd_tree') X_test_svd = svd.fit_transform(X_test_tfidf) knn.fit(X_train_tfidf_svd,y_train) y_pred = knn.predict(X_test_svd) print("Accuracy on test set: %0.3f%%"%(accuracy_score(y_test, y_pred)*100)) print("Precision on test set: %0.3f"%(precision_score(y_test, y_pred, pos_label='positive'))) print("Recall on test set: %0.3f"%(recall_score(y_test, y_pred, pos_label='positive'))) print("F1-Score on test set: %0.3f"%(f1_score(y_test, y_pred, pos_label='positive'))) print("Confusion Matrix(test set):\n") skplt.plot_confusion_matrix(y_test, y_pred) ###Output Accuracy on test set: 79.618% Precision on test set: 0.847 Recall on test set: 0.926 F1-Score on test set: 0.885 Confusion Matrix(test set): ###Markdown W2V ###Code # Train your own Word2Vec model using your own train text corpus import gensim list_of_sent=[] for sent in X_train: filtered_sentence=[] for word in sent.split(): for cleaned_words in cleanpunc(word).split(): if(cleaned_words.isalpha()): filtered_sentence.append(cleaned_words.lower()) else: continue list_of_sent.append(filtered_sentence) w2v_model=gensim.models.Word2Vec(list_of_sent,min_count=5,size=200, workers=4) w2v_words = list(w2v_model.wv.vocab) list_of_sent_test = [] for sent in X_test: filtered_sentence=[] for word in sent.split(): for cleaned_words in cleanpunc(word).split(): if(cleaned_words.isalpha()): filtered_sentence.append(cleaned_words.lower()) else: continue list_of_sent_test.append(filtered_sentence) w2v_model_test=gensim.models.Word2Vec(list_of_sent,min_count=5,size=200, workers=4) w2v_words_test = list(w2v_model.wv.vocab) ###Output _____no_output_____ ###Markdown Avg W2V ###Code sent_vectors_TRAIN = []; for sent in list_of_sent: sent_vec = np.zeros(200) cnt_words =0; for word in sent: if word in w2v_words: vec = w2v_model.wv[word] sent_vec += vec cnt_words += 1 if cnt_words != 0: sent_vec /= cnt_words sent_vectors_TRAIN.append(sent_vec) print(len(sent_vectors_TRAIN)) print(len(sent_vectors_TRAIN[0])) sent_vectors_TEST = []; for sent in list_of_sent_test: sent_vec = np.zeros(200) cnt_words =0; for word in sent: if word in w2v_words_test: vec = w2v_model_test.wv[word] sent_vec += vec cnt_words += 1 if cnt_words != 0: sent_vec /= cnt_words sent_vectors_TEST.append(sent_vec) print(len(sent_vectors_TEST)) print(len(sent_vectors_TEST[0])) ###Output 28000 200 12001 200 ###Markdown Brute force ###Code kn = list(range(0,40)) neighbors = list(filter(lambda x: x % 2 != 0, kn)) cv_scores1 = [] for k in neighbors: knn1 = KNeighborsClassifier(n_neighbors=k, n_jobs=-1, algorithm='brute') knn1.fit(sent_vectors_TRAIN, y_train) scores1 = cross_val_score(knn1, sent_vectors_TRAIN, y_train, cv=10, scoring='f1_micro') cv_scores1.append(scores1.mean()) optimal_k1 = neighbors[cv_scores1.index(max(cv_scores1))] print('\nThe optimal number of neighbors is %d.' % optimal_k1) plt.figure(figsize=(10,6)) plt.plot(list(filter(lambda x: x % 2 != 0, kn)),cv_scores1,color='blue', linestyle='dashed', marker='o', markerfacecolor='red', markersize=10) plt.title('f1 score vs. 
K Value') plt.xlabel('K') plt.ylabel('f1 score') plt.show() knn = KNeighborsClassifier(n_neighbors=optimal_k1) knn.fit(sent_vectors_TRAIN,y_train) y_pred = knn.predict(sent_vectors_TEST) print("Accuracy on test set: %0.3f%%"%(accuracy_score(y_test, y_pred)*100)) print("Precision on test set: %0.3f"%(precision_score(y_test, y_pred, pos_label='positive'))) print("Recall on test set: %0.3f"%(recall_score(y_test, y_pred, pos_label='positive'))) print("F1-Score on test set: %0.3f"%(f1_score(y_test, y_pred, pos_label='positive'))) print("Confusion Matrix(test set):\n") skplt.plot_confusion_matrix(y_test, y_pred) ###Output Accuracy on test set: 86.309% Precision on test set: 0.873 Recall on test set: 0.980 F1-Score on test set: 0.923 Confusion Matrix(test set): ###Markdown KD tree ###Code kn = list(range(0,40)) neighbors = list(filter(lambda x: x % 2 != 0, kn)) svd = TruncatedSVD(n_components=100) sent_vectors_TRAIN_svd = svd.fit_transform(sent_vectors_TRAIN) cv_scores1 = [] for k in neighbors: knn1 = KNeighborsClassifier(n_neighbors=k, n_jobs=1, algorithm='kd_tree') knn1.fit(sent_vectors_TRAIN_svd, y_train) scores1 = cross_val_score(knn1, sent_vectors_TRAIN_svd, y_train, cv=10, scoring='f1_micro') cv_scores1.append(scores1.mean()) sent_vectors_TRAIN_svd #mse = [1-x for x in cv_scores1] optimal_k1 = neighbors[cv_scores1.index(max(cv_scores1))] print('\nThe optimal number of neighbors is %d.' % optimal_k1) plt.figure(figsize=(10,6)) plt.plot(list(filter(lambda x: x % 2 != 0, kn)),cv_scores1,color='blue', linestyle='dashed', marker='o', markerfacecolor='red', markersize=10) plt.title('f1 score vs. K Value') plt.xlabel('K') plt.ylabel('f1 score') plt.show() knn = KNeighborsClassifier(n_neighbors=optimal_k1, algorithm='kd_tree') X_test_svd = svd.fit_transform(sent_vectors_TEST) knn.fit(sent_vectors_TRAIN_svd, y_train) y_pred = knn.predict(X_test_svd) print("Accuracy on test set: %0.3f%%"%(accuracy_score(y_test, y_pred)*100)) print("Precision on test set: %0.3f"%(precision_score(y_test, y_pred, pos_label='positive'))) print("Recall on test set: %0.3f"%(recall_score(y_test, y_pred, pos_label='positive'))) print("F1-Score on test set: %0.3f"%(f1_score(y_test, y_pred, pos_label='positive'))) print("Confusion Matrix(test set):\n") skplt.plot_confusion_matrix(y_test, y_pred) ###Output Accuracy on test set: 84.376% Precision on test set: 0.853 Recall on test set: 0.983 F1-Score on test set: 0.914 Confusion Matrix(test set): ###Markdown Tf-Idf weighted W2V ###Code tfidf_vect = TfidfVectorizer() final_tfidf = tfidf_vect.fit_transform(final_['CleanedText'].values) tfidf_feat = tfidf_vect.get_feature_names() tfidf_sent_vectors_tr = [] row=0 for sent in list_of_sent: sent_vec = np.zeros(200) weight_sum =0 for word in sent: if word in w2v_words: vec = w2v_model.wv[word] tf_idf = final_tfidf[row, tfidf_feat.index(word)] sent_vec += (vec * tf_idf) weight_sum += tf_idf if weight_sum != 0: sent_vec /= weight_sum tfidf_sent_vectors_tr.append(sent_vec) row += 1 tfidf_sent_vectors_TEST = []; row=0 for sent in list_of_sent_test: sent_vec = np.zeros(200) weight_sum =0 for word in sent: if word in w2v_words: vec = w2v_model.wv[word] tf_idf = final_tfidf[row, tfidf_feat.index(word)] sent_vec += (vec * tf_idf) weight_sum += tf_idf if weight_sum != 0: sent_vec /= weight_sum tfidf_sent_vectors_TEST.append(sent_vec) row += 1 ###Output _____no_output_____ ###Markdown Brute force ###Code kn = list(range(0,40)) neighbors = list(filter(lambda x: x % 2 != 0, kn)) cv_scores1 = [] for k in neighbors: knn1 = 
KNeighborsClassifier(n_neighbors=k, n_jobs=-1, algorithm = 'brute') knn1.fit(tfidf_sent_vectors_tr, y_train) scores1 = cross_val_score(knn1, tfidf_sent_vectors_tr, y_train, cv=10, scoring='f1_micro') cv_scores1.append(scores1.mean()) optimal_k1 = neighbors[cv_scores1.index(max(cv_scores1))] print('\nThe optimal number of neighbors is %d.' % optimal_k1) plt.figure(figsize=(10,6)) plt.plot(list(filter(lambda x: x % 2 != 0, kn)),cv_scores1,color='blue', linestyle='dashed', marker='o', markerfacecolor='red', markersize=10) plt.title('f1 score vs. K Value') plt.xlabel('K') plt.ylabel('f1 score') plt.show() knn = KNeighborsClassifier(n_neighbors=optimal_k1, algorithm='brute') knn.fit(tfidf_sent_vectors_tr,y_train) y_pred = knn.predict(tfidf_sent_vectors_TEST) print("Accuracy on test set: %0.3f%%"%(accuracy_score(y_test, y_pred)*100)) print("Precision on test set: %0.3f"%(precision_score(y_test, y_pred, pos_label='positive'))) print("Recall on test set: %0.3f"%(recall_score(y_test, y_pred, pos_label='positive'))) print("F1-Score on test set: %0.3f"%(f1_score(y_test, y_pred, pos_label='positive'))) print("Confusion Matrix(test set):\n") skplt.plot_confusion_matrix(y_test, y_pred) ###Output Accuracy on test set: 83.985% Precision on test set: 0.840 Recall on test set: 1.000 F1-Score on test set: 0.913 Confusion Matrix(test set): ###Markdown KD tree ###Code kn = list(range(0,40)) neighbors = list(filter(lambda x: x % 2 != 0, kn)) svd = TruncatedSVD(n_components=100) tfidf_sent_vectors_tr_svd = svd.fit_transform(tfidf_sent_vectors_tr) cv_scores1 = [] for k in neighbors: knn1 = KNeighborsClassifier(n_neighbors=k, n_jobs=1, algorithm='kd_tree') knn1.fit(tfidf_sent_vectors_tr_svd, y_train) scores1 = cross_val_score(knn1, tfidf_sent_vectors_tr_svd, y_train, cv=10, scoring='f1_micro') cv_scores1.append(scores1.mean()) #mse = [1-x for x in cv_scores1] optimal_k1 = neighbors[cv_scores1.index(max(cv_scores1))] print('\nThe optimal number of neighbors is %d.' % optimal_k1) plt.figure(figsize=(10,6)) plt.plot(list(filter(lambda x: x % 2 != 0, kn)),cv_scores1,color='blue', linestyle='dashed', marker='o', markerfacecolor='red', markersize=10) plt.title('f1 score vs. 
K Value') plt.xlabel('K') plt.ylabel('f1 score') plt.show() knn = KNeighborsClassifier(n_neighbors=optimal_k1, algorithm='kd_tree') X_test_svd = svd.fit_transform(tfidf_sent_vectors_TEST) knn.fit(tfidf_sent_vectors_tr_svd, y_train) y_pred = knn.predict(X_test_svd) print("Accuracy on test set: %0.3f%%"%(accuracy_score(y_test, y_pred)*100)) print("Precision on test set: %0.3f"%(precision_score(y_test, y_pred, pos_label='positive'))) print("Recall on test set: %0.3f"%(recall_score(y_test, y_pred, pos_label='positive'))) print("F1-Score on test set: %0.3f"%(f1_score(y_test, y_pred, pos_label='positive'))) print("Confusion Matrix(test set):\n") skplt.plot_confusion_matrix(y_test, y_pred) from prettytable import PrettyTable x = PrettyTable() y = PrettyTable() x.border = True y.border = True print("\n\n") print("Algorithm : Brute Force") x.field_names = ["Vectorization type", "Value of Hyperparameter", "Test Accuracy", "Precison", "Recall","F1 Score"] x.add_row(["BoW","7","84.693","0.851","0.992","0.916"]) x.add_row(["TF-IDF","9","85.485","0.860","0.988","0.920"]) x.add_row(["Average W2V","21","86.309","0.873","0.980","0.923"]) x.add_row(["TF-IDF weighted W2V","33","83.985","0.840","1.000","0.913"]) print(x.get_string(),"\n\n\n\n") print("Algorithm : KDTree") y.field_names = ["Vectorization type", "Value of Hyperparameter", "Test Accuracy", "Precison", "Recall","F1 Score"] y.add_row(["BoW","23","83.668","0.845","0.987","0.911"]) y.add_row(["TF-IDF","9","79.618","0.847","0.926","0.855"]) y.add_row(["Average W2V","21","84.376","0.853","0.983","0.914"]) y.add_row(["TF-IDF weighted W2V","33","84.001","0.840","1.000","0.913"]) print(y) ###Output Algorithm : Brute Force +---------------------+-------------------------+---------------+----------+--------+----------+ | Vectorization type | Value of Hyperparameter | Test Accuracy | Precison | Recall | F1 Score | +---------------------+-------------------------+---------------+----------+--------+----------+ | BoW | 7 | 84.693 | 0.851 | 0.992 | 0.916 | | TF-IDF | 9 | 85.485 | 0.860 | 0.988 | 0.920 | | Average W2V | 21 | 86.309 | 0.873 | 0.980 | 0.923 | | TF-IDF weighted W2V | 33 | 83.985 | 0.840 | 1.000 | 0.913 | +---------------------+-------------------------+---------------+----------+--------+----------+ Algorithm : KDTree +---------------------+-------------------------+---------------+----------+--------+----------+ | Vectorization type | Value of Hyperparameter | Test Accuracy | Precison | Recall | F1 Score | +---------------------+-------------------------+---------------+----------+--------+----------+ | BoW | 23 | 83.668 | 0.845 | 0.987 | 0.911 | | TF-IDF | 9 | 79.618 | 0.847 | 0.926 | 0.855 | | Average W2V | 21 | 84.376 | 0.853 | 0.983 | 0.914 | | TF-IDF weighted W2V | 33 | 84.001 | 0.840 | 1.000 | 0.913 | +---------------------+-------------------------+---------------+----------+--------+----------+
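###Markdown
The same hyperparameter search is written out by hand above for every vectorizer / algorithm combination. As a side note — a sketch, not the approach used in this notebook — the loop over odd values of k and the 10-fold cross-validation can be condensed into a single `GridSearchCV` call:
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

def tune_knn(X, y, algorithm='brute'):
    """Return the best odd k (1..39) and its mean 10-fold f1_micro score."""
    param_grid = {'n_neighbors': list(range(1, 40, 2))}
    grid = GridSearchCV(KNeighborsClassifier(algorithm=algorithm),
                        param_grid, cv=10, scoring='f1_micro', n_jobs=-1)
    grid.fit(X, y)
    return grid.best_params_['n_neighbors'], grid.best_score_

# Example on the BoW features computed earlier:
best_k, best_score = tune_knn(X_tra, y_train)
print(best_k, best_score)
###Output
_____no_output_____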
TF_RL_DQN_Cartpole.ipynb
###Markdown Copyright 2021 The TF-Agents Authors. ###Code #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ###Output _____no_output_____ ###Markdown Introduction This example shows how to train a [DQN (Deep Q Networks)](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf) agent on the Cartpole environment using the TF-Agents library.![Cartpole environment](https://raw.githubusercontent.com/tensorflow/agents/master/docs/tutorials/images/cartpole.png)It will walk you through all the components in a Reinforcement Learning (RL) pipeline for training, evaluation and data collection.To run this code live, click the 'Run in Google Colab' link above. Setup If you haven't installed the following dependencies, run: ###Code !sudo apt-get install -y xvfb ffmpeg !pip install 'imageio==2.4.0' !pip install pyvirtualdisplay !pip install tf-agents from __future__ import absolute_import, division, print_function import base64 import imageio import IPython import matplotlib import matplotlib.pyplot as plt import numpy as np import PIL.Image import pyvirtualdisplay import tensorflow as tf from tf_agents.agents.dqn import dqn_agent from tf_agents.drivers import dynamic_step_driver from tf_agents.environments import suite_gym from tf_agents.environments import tf_py_environment from tf_agents.eval import metric_utils from tf_agents.metrics import tf_metrics from tf_agents.networks import q_network from tf_agents.policies import random_tf_policy from tf_agents.replay_buffers import tf_uniform_replay_buffer from tf_agents.trajectories import trajectory from tf_agents.utils import common tf.compat.v1.enable_v2_behavior() # Set up a virtual display for rendering OpenAI gym environments. display = pyvirtualdisplay.Display(visible=0, size=(1400, 900)).start() tf.version.VERSION ###Output _____no_output_____ ###Markdown Hyperparameters ###Code num_iterations = 20000 # @param {type:"integer"} initial_collect_steps = 100 # @param {type:"integer"} collect_steps_per_iteration = 1 # @param {type:"integer"} replay_buffer_max_length = 100000 # @param {type:"integer"} batch_size = 64 # @param {type:"integer"} learning_rate = 1e-3 # @param {type:"number"} log_interval = 200 # @param {type:"integer"} num_eval_episodes = 10 # @param {type:"integer"} eval_interval = 1000 # @param {type:"integer"} ###Output _____no_output_____ ###Markdown EnvironmentIn Reinforcement Learning (RL), an environment represents the task or problem to be solved. Standard environments can be created in TF-Agents using `tf_agents.environments` suites. TF-Agents has suites for loading environments from sources such as the OpenAI Gym, Atari, and DM Control.Load the CartPole environment from the OpenAI Gym suite. ###Code env_name = 'CartPole-v0' env = suite_gym.load(env_name) ###Output _____no_output_____ ###Markdown You can render this environment to see how it looks. A free-swinging pole is attached to a cart. The goal is to move the cart right or left in order to keep the pole pointing up. 
###Code #@test {"skip": true} env.reset() PIL.Image.fromarray(env.render()) ###Output _____no_output_____ ###Markdown The `environment.step` method takes an `action` in the environment and returns a `TimeStep` tuple containing the next observation of the environment and the reward for the action.The `time_step_spec()` method returns the specification for the `TimeStep` tuple. Its `observation` attribute shows the shape of observations, the data types, and the ranges of allowed values. The `reward` attribute shows the same details for the reward. ###Code print('Observation Spec:') print(env.time_step_spec().observation) print('Reward Spec:') print(env.time_step_spec().reward) ###Output _____no_output_____ ###Markdown The `action_spec()` method returns the shape, data types, and allowed values of valid actions. ###Code print('Action Spec:') print(env.action_spec()) ###Output _____no_output_____ ###Markdown In the Cartpole environment:- `observation` is an array of 4 floats: - the position and velocity of the cart - the angular position and velocity of the pole - `reward` is a scalar float value- `action` is a scalar integer with only two possible values: - `0` — "move left" - `1` — "move right" ###Code time_step = env.reset() print('Time step:') print(time_step) action = np.array(1, dtype=np.int32) next_time_step = env.step(action) print('Next time step:') print(next_time_step) ###Output _____no_output_____ ###Markdown Usually two environments are instantiated: one for training and one for evaluation. ###Code train_py_env = suite_gym.load(env_name) eval_py_env = suite_gym.load(env_name) ###Output _____no_output_____ ###Markdown The Cartpole environment, like most environments, is written in pure Python. This is converted to TensorFlow using the `TFPyEnvironment` wrapper.The original environment's API uses Numpy arrays. The `TFPyEnvironment` converts these to `Tensors` to make it compatible with Tensorflow agents and policies. ###Code train_env = tf_py_environment.TFPyEnvironment(train_py_env) eval_env = tf_py_environment.TFPyEnvironment(eval_py_env) ###Output _____no_output_____ ###Markdown AgentThe algorithm used to solve an RL problem is represented by an `Agent`. TF-Agents provides standard implementations of a variety of `Agents`, including:- [DQN](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf) (used in this tutorial)- [REINFORCE](https://www-anw.cs.umass.edu/~barto/courses/cs687/williams92simple.pdf)- [DDPG](https://arxiv.org/pdf/1509.02971.pdf)- [TD3](https://arxiv.org/pdf/1802.09477.pdf)- [PPO](https://arxiv.org/abs/1707.06347)- [SAC](https://arxiv.org/abs/1801.01290).The DQN agent can be used in any environment which has a discrete action space.At the heart of a DQN Agent is a `QNetwork`, a neural network model that can learn to predict `QValues` (expected returns) for all actions, given an observation from the environment.Use `tf_agents.networks.q_network` to create a `QNetwork`, passing in the `observation_spec`, `action_spec`, and a tuple describing the number and size of the model's hidden layers. ###Code fc_layer_params = (100,) q_net = q_network.QNetwork( train_env.observation_spec(), train_env.action_spec(), fc_layer_params=fc_layer_params) ###Output _____no_output_____ ###Markdown Now use `tf_agents.agents.dqn.dqn_agent` to instantiate a `DqnAgent`. In addition to the `time_step_spec`, `action_spec` and the QNetwork, the agent constructor also requires an optimizer (in this case, `AdamOptimizer`), a loss function, and an integer step counter. 
###Code optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate) train_step_counter = tf.Variable(0) agent = dqn_agent.DqnAgent( train_env.time_step_spec(), train_env.action_spec(), q_network=q_net, optimizer=optimizer, td_errors_loss_fn=common.element_wise_squared_loss, train_step_counter=train_step_counter) agent.initialize() ###Output _____no_output_____ ###Markdown PoliciesA policy defines the way an agent acts in an environment. Typically, the goal of reinforcement learning is to train the underlying model until the policy produces the desired outcome.In this tutorial:- The desired outcome is keeping the pole balanced upright over the cart.- The policy returns an action (left or right) for each `time_step` observation.Agents contain two policies: - `agent.policy` — The main policy that is used for evaluation and deployment.- `agent.collect_policy` — A second policy that is used for data collection. ###Code eval_policy = agent.policy collect_policy = agent.collect_policy ###Output _____no_output_____ ###Markdown Policies can be created independently of agents. For example, use `tf_agents.policies.random_tf_policy` to create a policy which will randomly select an action for each `time_step`. ###Code random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(), train_env.action_spec()) ###Output _____no_output_____ ###Markdown To get an action from a policy, call the `policy.action(time_step)` method. The `time_step` contains the observation from the environment. This method returns a `PolicyStep`, which is a named tuple with three components:- `action` — the action to be taken (in this case, `0` or `1`)- `state` — used for stateful (that is, RNN-based) policies- `info` — auxiliary data, such as log probabilities of actions ###Code example_environment = tf_py_environment.TFPyEnvironment( suite_gym.load('CartPole-v0')) time_step = example_environment.reset() random_policy.action(time_step) ###Output _____no_output_____ ###Markdown Metrics and EvaluationThe most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode. Several episodes are run, creating an average return.The following function computes the average return of a policy, given the policy, environment, and a number of episodes. ###Code #@test {"skip": true} def compute_avg_return(environment, policy, num_episodes=10): total_return = 0.0 for _ in range(num_episodes): time_step = environment.reset() episode_return = 0.0 while not time_step.is_last(): action_step = policy.action(time_step) time_step = environment.step(action_step.action) episode_return += time_step.reward total_return += episode_return avg_return = total_return / num_episodes return avg_return.numpy()[0] # See also the metrics module for standard implementations of different metrics. # https://github.com/tensorflow/agents/tree/master/tf_agents/metrics ###Output _____no_output_____ ###Markdown Running this computation on the `random_policy` shows a baseline performance in the environment. ###Code compute_avg_return(eval_env, random_policy, num_eval_episodes) ###Output _____no_output_____ ###Markdown Replay BufferThe replay buffer keeps track of data collected from the environment. This tutorial uses `tf_agents.replay_buffers.tf_uniform_replay_buffer.TFUniformReplayBuffer`, as it is the most common. The constructor requires the specs for the data it will be collecting. 
This is available from the agent using the `collect_data_spec` method. The batch size and maximum buffer length are also required.
###Code
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
    data_spec=agent.collect_data_spec,
    batch_size=train_env.batch_size,
    max_length=replay_buffer_max_length)
###Output
_____no_output_____
###Markdown
For most agents, `collect_data_spec` is a named tuple called `Trajectory`, containing the specs for observations, actions, rewards, and other items.
###Code
agent.collect_data_spec
agent.collect_data_spec._fields
###Output
_____no_output_____
###Markdown
Data Collection

Now execute the random policy in the environment for a few steps, recording the data in the replay buffer.
###Code
#@test {"skip": true}
def collect_step(environment, policy, buffer):
  time_step = environment.current_time_step()
  action_step = policy.action(time_step)
  next_time_step = environment.step(action_step.action)
  traj = trajectory.from_transition(time_step, action_step, next_time_step)

  # Add trajectory to the replay buffer
  buffer.add_batch(traj)

def collect_data(env, policy, buffer, steps):
  for _ in range(steps):
    collect_step(env, policy, buffer)

collect_data(train_env, random_policy, replay_buffer, initial_collect_steps)

# This loop is so common in RL that we provide standard implementations.
# For more details see the drivers module.
# https://www.tensorflow.org/agents/api_docs/python/tf_agents/drivers
###Output
_____no_output_____
###Markdown
The replay buffer is now a collection of Trajectories.
###Code
# For the curious:
# Uncomment to peel one of these off and inspect it.
# iter(replay_buffer.as_dataset()).next()
###Output
_____no_output_____
###Markdown
The agent needs access to the replay buffer. This is provided by creating an iterable `tf.data.Dataset` pipeline which will feed data to the agent.

Each row of the replay buffer only stores a single observation step. But since the DQN Agent needs both the current and next observation to compute the loss, the dataset pipeline will sample two adjacent rows for each item in the batch (`num_steps=2`).

This dataset is also optimized by running parallel calls and prefetching data.
###Code
# Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
    num_parallel_calls=3,
    sample_batch_size=batch_size,
    num_steps=2).prefetch(3)

dataset
iterator = iter(dataset)
print(iterator)
# For the curious:
# Uncomment to see what the dataset iterator is feeding to the agent.
# Compare this representation of replay data
# to the collection of individual trajectories shown earlier.
# iterator.next()
###Output
_____no_output_____
###Markdown
Training the agent

Two things must happen during the training loop:
- collect data from the environment
- use that data to train the agent's neural network(s)

This example also periodically evaluates the policy and prints the current score.

The following will take ~5 minutes to run.
###Code
#@test {"skip": true}
try:
  %%time
except:
  pass

# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)

# Reset the train step
agent.train_step_counter.assign(0)

# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = [avg_return]

for _ in range(num_iterations):

  # Collect a few steps using collect_policy and save to the replay buffer.
collect_data(train_env, agent.collect_policy, replay_buffer, collect_steps_per_iteration) # Sample a batch of data from the buffer and update the agent's network. experience, unused_info = next(iterator) train_loss = agent.train(experience).loss step = agent.train_step_counter.numpy() if step % log_interval == 0: print('step = {0}: loss = {1}'.format(step, train_loss)) if step % eval_interval == 0: avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes) print('step = {0}: Average Return = {1}'.format(step, avg_return)) returns.append(avg_return) ###Output _____no_output_____ ###Markdown Visualization PlotsUse `matplotlib.pyplot` to chart how the policy improved during training.One iteration of `Cartpole-v0` consists of 200 time steps. The environment gives a reward of `+1` for each step the pole stays up, so the maximum return for one episode is 200. The charts shows the return increasing towards that maximum each time it is evaluated during training. (It may be a little unstable and not increase monotonically each time.) ###Code #@test {"skip": true} iterations = range(0, num_iterations + 1, eval_interval) plt.plot(iterations, returns) plt.ylabel('Average Return') plt.xlabel('Iterations') plt.ylim(top=250) ###Output _____no_output_____ ###Markdown Videos Charts are nice. But more exciting is seeing an agent actually performing a task in an environment. First, create a function to embed videos in the notebook. ###Code def embed_mp4(filename): """Embeds an mp4 file in the notebook.""" video = open(filename,'rb').read() b64 = base64.b64encode(video) tag = ''' <video width="640" height="480" controls> <source src="data:video/mp4;base64,{0}" type="video/mp4"> Your browser does not support the video tag. </video>'''.format(b64.decode()) return IPython.display.HTML(tag) ###Output _____no_output_____ ###Markdown Now iterate through a few episodes of the Cartpole game with the agent. The underlying Python environment (the one "inside" the TensorFlow environment wrapper) provides a `render()` method, which outputs an image of the environment state. These can be collected into a video. ###Code def create_policy_eval_video(policy, filename, num_episodes=5, fps=30): filename = filename + ".mp4" with imageio.get_writer(filename, fps=fps) as video: for _ in range(num_episodes): time_step = eval_env.reset() video.append_data(eval_py_env.render()) while not time_step.is_last(): action_step = policy.action(time_step) time_step = eval_env.step(action_step.action) video.append_data(eval_py_env.render()) return embed_mp4(filename) create_policy_eval_video(agent.policy, "trained-agent") ###Output _____no_output_____ ###Markdown For fun, compare the trained agent (above) to an agent moving randomly. (It does not do as well.) ###Code create_policy_eval_video(random_policy, "random-agent") ###Output _____no_output_____
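###Markdown One optional numerical check: the `compute_avg_return` helper defined earlier can be reused to put figures on the gap between the trained policy and the random baseline, assuming `agent`, `random_policy`, `eval_env` and `num_eval_episodes` from the cells above are still in scope.
###Code
#@test {"skip": true}
# Quantify the difference between the trained policy and the random baseline.
trained_avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
random_avg_return = compute_avg_return(eval_env, random_policy, num_eval_episodes)

print('Average return (trained policy): {0}'.format(trained_avg_return))
print('Average return (random policy):  {0}'.format(random_avg_return))
###Output _____no_output_____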
notebooks/3.0-tg-model-selection.ipynb
###Markdown Anticipez les besoins en consommation électrique de bâtiments=============================================================![logo-seattle](https://www.seattle.gov/Documents/Departments/Arts/Downloads/Logo/Seattle_logo_landscape_blue-black.png)Explication des variables:[City of seattle](https://data.seattle.gov/dataset/2015-Building-Energy-Benchmarking/h7rm-fz6m) On cherche ici à déterminer quel modèle est le plus adapté.Les modèles de régression possible sont: * **Linéaires** : * LinearRegression (Overfitting) * Ridge * Lasso * Elastic-Net * *LARS* (context : number of features >> number of samples [1]) * **Support Vector Machine (SVM)** * Support Vector Regression (SVR) * **Stochastic Gradient Descent** * SGDRegressor * **Nearest Neighbors** * Nearest Neighbors Regression (poor results on sparse data [2]) * **Gaussian Processes** * *Gaussian Process Regression (GPR)* * **Decision Trees** * DecisionTreeRegressor * **Ensemble methods** * RandomForestRegressor * *ExtraTreesRegressor* * GradientBoostingRegressor * *VotingRegressor* * **Multiclass and multilabel algorithms** * *Regressor Chain* (Intéressant si on cherche à prévoir des sorties multiples corrélées) * **Neural Network** * Multi Layer Perceptron - MLPRegressor[1] [Scikit-learn documentation](https://scikit-learn.org/stable/modules/linear_model.htmlleast-angle-regression).[2] Müller, A. C., & Guido, S. (2017). Introduction to machine learning with Python: A guide for data scientists. ###Code from decimal import Decimal from importlib import import_module import os from pathlib import Path import pickle from shutil import rmtree from tempfile import mkdtemp from time import time import matplotlib import matplotlib.pyplot as plt import numpy as np import pandas as pd import seaborn as sns from sklearn.compose import ColumnTransformer, TransformedTargetRegressor from sklearn.metrics import mean_squared_error from sklearn.model_selection import cross_val_score, GridSearchCV, train_test_split from sklearn.neighbors import KNeighborsRegressor from sklearn.pipeline import Pipeline from sklearn.preprocessing import FunctionTransformer import statsmodels.api as sm import missingno cache_dir = mkdtemp() sns.set() sns.set_context("notebook", font_scale=1.0) matplotlib.rcParams['figure.figsize'] = (10, 6) ###Output _____no_output_____ ###Markdown On recharge les données ###Code data = pd.read_pickle('../data/processed/model_data_percentV2.pickle') origin_data = pd.read_pickle('../data/interim/full_data.pickle') data.describe() data = pd.concat([data, origin_data['OSEBuildingID']], axis=1) data ###Output _____no_output_____ ###Markdown Variable à prédire ###Code target = ['SiteEnergyUseWN_kBtu'] log_transform = FunctionTransformer(np.log1p, inverse_func=np.expm1) inverse_transform = FunctionTransformer(lambda x: 1 / x * 1e4, inverse_func=lambda x: 1 / x * 1e4) data_train = data.loc[2016][data.loc[2016, 'OSEBuildingID'].isin(data.loc[2015, 'OSEBuildingID'])] data_test = data.loc[2016][~data.loc[2016, 'OSEBuildingID'].isin(data.loc[2015, 'OSEBuildingID'])] data_train.dropna(inplace=True) data_test.dropna(inplace=True) data_test.shape data_train.shape data_train.drop('OSEBuildingID', axis=1, inplace=True) data_test.drop('OSEBuildingID', axis=1, inplace=True) models = { 'LinearRegression': 'linear_model.LinearRegression', 'Ridge': 'linear_model.Ridge', 'Lasso': 'linear_model.Lasso', 'Elastic-Net': 'linear_model.ElasticNet', 'SGDRegressor': 'linear_model.SGDRegressor', 'KNNRegressor': 'neighbors.KNeighborsRegressor', 
'DecisionTreeRegressor': 'tree.DecisionTreeRegressor', 'GradientBoostingRegressor': 'ensemble.GradientBoostingRegressor', 'RandomForestRegressor': 'ensemble.RandomForestRegressor', 'SVR': 'svm.SVR', 'MLP': 'neural_network.MLPRegressor', } models scores = dict() scores_train = dict() times = dict() for model, cls in models.items(): mod = import_module(f"sklearn.{cls.split('.')[0]}") cls = getattr(mod, cls.split('.')[1]) preprocessor = ColumnTransformer( transformers=[ ('log_transform', log_transform, [-1]), ('others', FunctionTransformer(), slice(0, -1)) ] ) clf = Pipeline([ ('preprocessor', preprocessor), ('regressor', TransformedTargetRegressor(cls(), transformer=log_transform)) ]) t1 = time() clf.fit(data_train.drop(target, axis=1), data_train[target]) t2 = time() score = clf.score(data_test.drop(target, axis=1), data_test[target]) print("score -- %25s : %5f (%3f s)" % (model, score, t2 - t1)) scores[model] = score times[model] = t2 - t1 scores_train[model] = clf.score(data_train.drop(target, axis=1), data_train[target]) dataframe = pd.DataFrame(pd.Series(scores, name='score')) dataframe['time'] = pd.Series(times) dataframe['score train'] = pd.Series(scores_train) dataframe.sort_values('score', ascending=False) dataframe['overfit'] = dataframe['score train'] > dataframe['score'] default_score = dataframe dataframe[["score", "score train"]].to_latex('../reports/latex-report/includes/scores_1.tex') dataframe NUM = 20 model_params = { 'Ridge': {'regressor__regressor__alpha': np.logspace(-3, 0, num=NUM), 'regressor__regressor__tol': [0.001]}, 'Lasso': {'regressor__regressor__alpha': np.logspace(-5, -3, num=NUM), 'regressor__regressor__tol': [0.001]}, 'Elastic-Net': {'regressor__regressor__alpha': np.logspace(-5, -3, num=NUM), 'regressor__regressor__tol': [0.001]}, 'SVR': {"regressor__regressor__C": [5, 10, 15], "regressor__regressor__gamma": np.arange(0.1, 1.0, NUM), "regressor__regressor__kernel": ['rbf', 'linear']}, 'SGDRegressor': {'regressor__regressor__alpha': np.logspace(-6, -3, num=NUM)}, 'KNNRegressor': {'regressor__regressor__n_neighbors': np.arange(1, 10)}, 'DecisionTreeRegressor': None, 'RandomForestRegressor': None, 'GradientBoostingRegressor': None, 'MLP': {'regressor__regressor__hidden_layer_sizes': [(50, 50, 50), (50, 100, 50), (100,)], # 'regressor__regressor__activation': ['tanh', 'logistic', 'relu'], # 'regressor__regressor__solver': ['sgd', 'adam'], 'regressor__regressor__alpha': [0.00005, 0.0001, 0.0005], 'regressor__regressor__learning_rate': ['constant', 'adaptive'] } } %%time scores = dict() scores_train = dict() times = dict() models_ = dict() for model, cls in models.items(): mod = import_module(f"sklearn.{cls.split('.')[0]}") cls = getattr(mod, cls.split('.')[1]) preprocessor = ColumnTransformer( transformers=[ ('log_transform', log_transform, [-1]), ('others', FunctionTransformer(), slice(0, -1)) ] ) clf = Pipeline(memory=cache_dir, steps=[ ('preprocessor', preprocessor), ('regressor', TransformedTargetRegressor(cls(), transformer=log_transform))]) params = model_params.get(model) if params: clf = GridSearchCV(clf, params, n_jobs=-1) else: pass t1 = time() clf.fit(data_train.drop(target, axis=1), data_train[target]) t2 = time() score = clf.score(data_test.drop(target, axis=1), data_test[target]) print("score -- %25s : %5f (%3f s)" % (model, score, t2 - t1)) scores[model] = score times[model] = t2 - t1 models_[model] = clf scores_train[model] = clf.score(data_train.drop(target, axis=1), data_train[target]) dataframe = pd.DataFrame(pd.Series(scores, name='score')) 
dataframe['score train'] = pd.Series(scores_train) dataframe['time'] = pd.Series(times) dataframe.sort_values('score', ascending=False, inplace=True) dataframe.reset_index(inplace=True) dataframe.rename(columns={'index': 'model'}, inplace=True) dataframe.set_index(dataframe['model'], inplace=True) dataframe.drop('model', axis=1, inplace=True) dataframe['old score'] = default_score['score'] dataframe['gain'] = dataframe['score'] - default_score['score'] dataframe['overfit'] = dataframe['score train'] > dataframe['score'] dataframe dataframe.reset_index(inplace=True) sns.barplot(y='model', x='score', data=dataframe, facecolor=(0.6, 0.6, 0.6, 1)) best_params = {} for model, clf in models_.items(): if model_params.get(model): best_params[model] = clf.best_params_ best_params for model, params in best_params.items(): params_ = dict() for param_name, param_val in params.items(): params_['__'.join(param_name.split('__')[1:])] = param_val best_params[model] = params_ %%time scores = dict() score_train = dict() times = dict() models_ = dict() for model, cls in models.items(): mod = import_module(f"sklearn.{cls.split('.')[0]}") cls = getattr(mod, cls.split('.')[1]) preprocessor = ColumnTransformer( transformers=[ ('log_transform', log_transform, [-1]), ('others', FunctionTransformer(), slice(0, -1)) ] ) clf = Pipeline(memory=cache_dir, steps=[ ('preprocessor', preprocessor), ('regressor', TransformedTargetRegressor(cls(), transformer=log_transform))]) params = best_params.get(model) if params: clf.named_steps['regressor'].set_params(**params) t1 = time() clf.fit(data_train.drop(target, axis=1), data_train[target]) t2 = time() score = clf.score(data_test.drop(target, axis=1), data_test[target]) print("score -- %25s : %5f (%3f s)" % (model, score, t2 - t1)) scores[model] = score times[model] = t2 - t1 models_[model] = clf scores_train[model] = clf.score(data_train.drop(target, axis=1), data_train[target]) dataframe = pd.DataFrame(pd.Series(scores, name='score')) dataframe['score train'] = pd.Series(scores_train) dataframe['time'] = pd.Series(times) dataframe.sort_values('score', ascending=False, inplace=True) dataframe.reset_index(inplace=True) dataframe.rename(columns={'index': 'model'}, inplace=True) dataframe.set_index(dataframe['model'], inplace=True) dataframe.drop('model', axis=1, inplace=True) dataframe['old score'] = default_score['score'] dataframe['gain'] = dataframe['score'] - default_score['score'] dataframe['overfit'] = dataframe['score train'] > dataframe['score'] dataframe.reset_index(inplace=True) sns.barplot(y='model', x='score', data=dataframe, facecolor=(0.6, 0.6, 0.6, 1)) plt.tight_layout() plt.savefig('../reports/figures/results_scores.png') dataframe[['model', 'score', 'score train']]\ .set_index('model', drop=True)\ .to_latex("../reports/latex-report/includes/scores_2.tex") dataframe df_train = dataframe[['model', 'score train']].copy() df_train.set_index('model', inplace=True, drop=True) df_train['set'] = 'train' df_train.rename(columns={'score train': 'score'}, inplace=True) df_test = dataframe[['model', 'score']].copy() df_test.set_index('model', inplace=True, drop=True) df_test['set'] = 'test' df = pd.concat([df_test, df_train], axis=0) df.reset_index(inplace=True) plt.subplots(1, figsize=(7, 7)) sns.barplot(x=df['model'], y=df['score'], hue=df['set'], data=data) plt.xticks(rotation=80) plt.tight_layout() plt.savefig('../reports/figures/models_scores.png') best_model = models_[dataframe.loc[dataframe['score'].idxmax(), 'model']] y_pred = 
best_model.predict(data_test.drop(target, axis=1)) y_true = data_test[target].values fig, ax = plt.subplots(1, figsize=(12, 8)) sns.scatterplot(x=data_train[target].values.ravel(), y=best_model.predict(data_train.drop(target, axis=1)).ravel(), marker='+', alpha=0.4) sns.scatterplot(x='y_true', y='y_pred', data=pd.DataFrame({'y_true': y_true.ravel(), 'y_pred': y_pred.ravel()}), ax=ax, alpha=0.5) plt.show() models_idx = list(dataframe.set_index('model')['score'].to_dict().keys()) n_col = 3 n_row = 4 fig, axes = plt.subplots(n_row, n_col, figsize=(15, 20)) log = True for ax, m in zip(axes.ravel(), models_idx): model_ = models_[m] y_true = data_test[target].values.ravel() y_pred = model_.predict(data_test.drop(target, axis=1)).ravel() if log: sns.scatterplot(np.log(y_true), np.log(y_pred), marker='+', alpha=0.3, ax=ax, color='0.2') else: sns.scatterplot(y_true, y_pred, marker='+', alpha=0.3, ax=ax, color='0.2') ax.set_title(m + '\n score : %4f' % scores[m] + '\n RMSE %.4E' % Decimal(np.sqrt(mean_squared_error(y_true, y_pred)))) axes[-1, -1].axis('off') plt.subplots_adjust(hspace=.3, ) plt.savefig('../reports/figures/all_models_results_test.png') models_idx = list(dataframe.set_index('model')['score'].to_dict().keys()) scores_ = dataframe.set_index('model')['score train'].to_dict() n_col = 3 n_row = 4 fig, axes = plt.subplots(n_row, n_col, figsize=(15, 20)) log = True for ax, m in zip(axes.ravel(), models_idx): model_ = models_[m] y_true = data_train[target].values.ravel() y_pred = model_.predict(data_train.drop(target, axis=1)).ravel() if log: sns.scatterplot(np.log(y_true), np.log(y_pred), marker='+', color='0.2', alpha=0.3, ax=ax) else: sns.scatterplot(y_true, y_pred, marker='+', color='0.2', alpha=0.3, ax=ax) ax.set_title(m + '\n score : %4f' % scores_[m] + '\n RMSE %.4E' % Decimal(np.sqrt(mean_squared_error(y_true, y_pred)))) axes[-1, -1].axis('off') plt.subplots_adjust(hspace=.3, ) plt.savefig('../reports/figures/all_models_results_train.png') base_path = os.path.abspath('..') model_name = dataframe.loc[dataframe['score'].idxmax(), 'model'] s = scores[model_name] path = os.path.join(base_path, 'models', model_name + '%3f_V1.pickle' % s) with open(path, 'wb') as f: pickle.Pickler(f).dump(best_model) ###Output _____no_output_____
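###Markdown As an optional sanity check, the pickled pipeline can be reloaded and scored again on the held-out set. The sketch below assumes `path`, `data_test` and `target` from the cells above; the variable name `reloaded_model` is only illustrative.
###Code
# Reload the persisted pipeline and verify that it still scores the held-out set.
with open(path, 'rb') as f:
    reloaded_model = pickle.load(f)

print('R2 on test set (reloaded model):',
      reloaded_model.score(data_test.drop(target, axis=1), data_test[target]))
###Output _____no_output_____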
.ipynb_checkpoints/Caderno-01-checkpoint.ipynb
###Markdown Introdução ao Python - Ana Beatriz Macedo Link para download: https://github.com/AnabeatrizMacedo241/Python-101 Github: https://github.com/AnabeatrizMacedo241 Linkedin: https://www.linkedin.com/in/ana-beatriz-oliveira-de-macedo-85b05b215/ Pricipais plataformas para estudar Python:1. Google Colab https://colab.research.google.com/notebooks/2. Jupyter Notebook https://jupyter.org/ Nessa primeira parte veremos:1. Operações matemáticas2. Conversões e funções type3. Variáveis4. Strings5. Operadores lógicos e relacionais Operações matemáticas 1. Adição: +2. Subtração: -3. Multiplicação: *4. Divisão: /5. Potência: **6. Divisão inteira: //7. Resto da divisão: % ###Code 4+5 5-4 5*4 5/5 5**2 5//5 5%5 5-5*6 #Tente fazer mais exemplos, mas pense em seu resultado antes de rodá-los ###Output _____no_output_____ ###Markdown A PrecedênciaParênteses devem ser utilizados para definir a precedência de operações matemáticas assim como na vida real. ###Code 1000 + (1000 * 5 / 100) ###Output _____no_output_____ ###Markdown Conversões e funções type 1. float: 5.0, 6.0, 7.02. int: 5, 6, 73. String(str): 'cinco', 'seis', 'sete'. Strings sempre devem estar entre aspas! ###Code 4/2 #Gera um número float 4//2 #Gera um número int #Convertendo números float(5) int(5.0) type('python') #Para arredondar round(6.9) round(6.4) # Converter para valores absolutos abs(-5) #Outra forma de fazer potência(número da base, número elevado) pow(5, 2) ###Output _____no_output_____ ###Markdown Variáveis Sempre devemos definir uma variável antes de utilizá-la. Usamos o símbolo de '=' para atribuir um valor a uma variável ###Code variável = 1 ###Output _____no_output_____ ###Markdown Pata obtermos o valor podemos escrever novamente o nome da variável ou usar a função print() com o nome da variável entre os parênteses. Atenção: sempre reescreva a variável do mesmo jeito que ela foi criada. Qualquer caracter diferente retornará um erro! ###Code variável #Vamos fazer uma com erro. Trocaremso o 'v'minúsculo para um maiúsculo. Variável print(variável) #Podemos descobrir seu type também type(variável) ###Output _____no_output_____ ###Markdown Podemos realizar operações com variáveis ###Code x = 2 y = 7 x+y #Estamos concatenando o código ao realizar operações com strings a = 'Olá,' b = ' estou aprendendo Python!' a+b ###Output _____no_output_____ ###Markdown Strings ###Code print('Trabalhando com strings') # \n resulta em uma quebra de linha print('Lista de compras\nOvos\nLeite\nQueijo') ###Output Lista de compras Ovos Leite Queijo ###Markdown Assim como podemos coverter números, também podemos converter strings com suas funções built-in. Aqui vão algumas delas... ###Code frase = 'Python para iniciantes' frase.upper() frase.lower() frase.split() frase.count('a') #Conta quantos caracteres 'a' existem na frase frase.isnumeric() #A variável 'frase'é um número? frase.isalpha() frase.isascii() #Verifica se há carecteres na frase ###Output _____no_output_____ ###Markdown Você pode testar muitas outras funções que não mencionei, basta você digitar sua variável, depois '.' e dar um tab em seu teclado para descobrir mais opções. 
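###Markdown A few more commonly used string methods are sketched below as an illustrative example only; the `frase` variable is re-created so the cell runs on its own.
###Code
frase = 'Python para iniciantes'
print(frase.replace('iniciantes', 'todos'))  # substitute part of the string
print(frase.title())                         # capitalise each word
print(frase.startswith('Python'))            # check the prefix
print(len(frase))                            # number of characters
###Output _____no_output_____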
Operadores lógicos e relacionais Operador relacional| Operação | Símbolo matemático--------:|-----------|:-----------------```==``` | Igualdade | $=$```>``` | Maior que | $>$```<``` | Menor que | $<$```!=``` | Diferente | $\neq$```>=``` | Maior ou igual | $\geq$```<=``` | Menor ou igual | $\leq$ Operador lógico | Significado---------------:|---------```not``` | não```and``` | e ```or``` | ou Mas o que são esses valores True e False retornados acima? Eles são os booleans! Retornam se os valores são verdadeiros ou falsos e são sempre escritos em maiúsculo: True e False ###Code # Exemplos, usamos o símbolo de '=='para checar se um valor é igual ao outro print('Python' == 'python') print('Codar' == 'Codar') Ano = 2021 século = 21 Ano > século século > Ano século and Ano > 18 not True ###Output _____no_output_____
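###Markdown The relational and logical operators above can also be combined in a single expression; the short sketch below reuses the `Ano` and `século` variables from the previous cell.
###Code
Ano = 2021
século = 21
print(Ano >= 2000 and século == 21)   # True: both comparisons hold
print(Ano < 2000 or século != 21)     # False: neither comparison holds
print(not (Ano == século))            # True: the two values are different
###Output _____no_output_____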
blender/Codes/label_generator_zxy.ipynb
###Markdown Origin-Line Distance (3D)Position: $\ \vec{r} = (\alpha, \beta, \gamma)$\Direction vector: $\ \vec{n} = (n_x, n_y, n_z)$\Line: $\ \vec{p}(t) = \vec{r} + t\vec{n}$\$$\vec{p}(h)\cdot\vec{n} = 0 \\h = - \frac{\vec{r}\cdot\vec{n}}{\|\vec{n}\|^2} \\d = \|\vec{r}-h\vec{n}\|_2$$ ###Code def o2line(r, n): s = 0 for i in range(3): s += (r[i] - np.dot(r, n) / ((np.linalg.norm(n, ord=2)) ** 2) * n[i]) ** 2 return np.sqrt(s) ###Output _____no_output_____ ###Markdown Make a dataset directory ###Code ds_num = 1 # Initialize ds_num while True: ds_dir = "Database/ds_{:03d}".format(ds_num) if not os.path.exists(ds_dir): os.makedirs(ds_dir) os.makedirs(ds_dir + "/train") os.makedirs(ds_dir + "/val") break else: ds_num += 1 continue sample_num_dict = {"train": conf.train_sample_num, "val": conf.val_sample_num} ###Output _____no_output_____ ###Markdown Parameters$$\theta = \{x, y, z, x_{2d}, y_{2d}, n_x, n_y, n_z, \phi, \gamma, \alpha, \beta\}$$ ###Code phase_dict = {"train": "training", "val": "validation"} def make_csv(): for phase in ["train", "val"]: data = [] i = 0 while i < sample_num_dict[phase]: # Position z = uniform(conf.z_min, conf.z_max) x_range = conf.x_min * (z / conf.z_min) y_range = conf.y_min * (z / conf.z_min) x = uniform(-x_range, x_range) y = uniform(-y_range, y_range) r = np.array([x, y, z]) # 2D position x_2d, y_2d = rep.repon2plane(r, a_ratio=(16, 9), fov=70) alpha = uniform( conf.alpha_min, conf.alpha_max ) # Rotation around local x-axis beta = uniform(conf.beta_min, conf.beta_max) # Rotation around local y-axis gamma = uniform( conf.gamma_min, conf.gamma_max ) # Rotation around local z-axis rot_matrix = Rotation.from_euler("zxy", [gamma, alpha, beta], degrees=True) n0 = np.array([0.0, 0.0, -1.0]) # Initial vector n = rot_matrix.apply(n0) # Direction vector phi = uniform(0, conf.phi_max) if n[2] >= -0.5 or o2line(r, n) < 5: continue else: row = [x, y, -z, n[0], n[1], n[2], phi, gamma, alpha, beta, x_2d, y_2d] data.append(row) i += 1 df = pd.DataFrame( data=data, columns=[ "x", "y", "z", "nx", "ny", "nz", "phi", "gamma", "alpha", "beta", "x_2d", "y_2d", ], ) alpha_list = [row[8] for row in data] beta_list = [row[9] for row in data] gamma_list = [row[7] for row in data] nx_list = [row[3] for row in data] ny_list = [row[4] for row in data] nz_list = [row[5] for row in data] fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10, 4)) ax1.hist( [alpha_list, beta_list, gamma_list], bins=36, range=(-180, 180), label=[r"$\alpha$", r"$\beta$", r"$\gamma$"], ) ax1.legend() ax1.set_title(phase_dict[phase].capitalize()) ax1.set_xlabel("Degrees") ax1.set_ylabel("Frequency") ax2.hist( [nx_list, ny_list, nz_list], bins=40, range=(-1, 1), label=[r"$n_x$", r"$n_y$", r"$n_z$"], ) ax2.legend() ax2.set_title(phase_dict[phase].capitalize()) ax2.set_xlabel(r"$[a.u.]$") ax2.set_ylabel("Frequency") fig.savefig(ds_dir + "/{}_data_distribution.png".format(phase), dpi=300) # save as csv df.to_csv(ds_dir + "/{}_{:03d}.csv".format(phase, ds_num), index=False) yml = { "pose": {"euler": {"axis": "zxy", "rotation": "intrinsic"}}, "translation": { "x_range": 65, "y_range": 35, "z_min": conf.z_min, "z_max": conf.z_max, }, "articulation": {"phi_max": conf.phi_max}, "data": {"train": conf.train_sample_num, "val": conf.val_sample_num}, "params": 12, "o2line": 5, } with open(ds_dir + "/ds_config.yaml", "w") as file: yaml.dump(yml, file, default_flow_style=False) make_csv() print( "\n=========================\n" "PROCESS COMPLETED \n" "=========================\n" ) ###Output _____no_output_____
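###Markdown As a closing sanity check on the geometry, the origin-to-line distance can also be written with a cross product, $d = \lVert \vec{r}\times\vec{n}\rVert / \lVert\vec{n}\rVert$. The sketch below compares this formula against `o2line` for a pair of made-up test vectors (assuming `numpy` is imported as in the cells above); the two values should agree to numerical precision.
###Code
# Cross-check o2line against the equivalent cross-product formula.
r_test = np.array([1.0, -2.0, 3.0])
n_test = np.array([0.2, 0.4, -0.9])

d_loop = o2line(r_test, n_test)
d_cross = np.linalg.norm(np.cross(r_test, n_test)) / np.linalg.norm(n_test)

print(d_loop, d_cross)
###Output _____no_output_____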
HW1_weight_initialization-konoplev.ipynb
###Markdown Weight InitializationВ этом ноутбуке вы узнаете, как найти хорошие начальные веса для нейронной сети. Инициализация весов происходит один раз, когда модель создана, до обучения.Имея хорошие начальные веса, можно расположить нейронную сеть близко к оптимальному решению.Это позволяет нейронной сети быстрее сойтись к наилучшему решению. Initial Weights and Observing Training LossЧтобы увидеть, как работают различные веса, мы протестируем один и тот же набор данных и нейронную сеть. Таким образом, мы знаем, что любые изменения в поведении модели происходят из-за весов, а не из-за каких-либо изменений данных или структуры модели. Dataset and ModelДля изучения различных инициализаций мы обучим MLP классифицировать изображения из набора данных [Fashion-MNIST] (https://github.com/zalandoresearch/fashion-mnist). Набор данных FashionMNIST содержит изображения типов одежды; ' classes = ['футболка / топ', 'брюки', 'пуловер', 'платье', 'пальто', 'сандалии', 'рубашка', 'кроссовки', 'сумка',`ботильоны']'. Изображения нормализуются таким образом, чтобы их пиксельные значения находились в диапазоне [0.0 - 1.0). Запустите ячейку ниже, чтобы загрузить данные. Import Libraries and Load [Data](http://pytorch.org/docs/stable/torchvision/datasets.html) ###Code import torch import numpy as np from torchvision import datasets import torchvision.transforms as transforms from torch.utils.data.sampler import SubsetRandomSampler # number of subprocesses to use for data loading num_workers = 0 # how many samples per batch to load batch_size = 100 # percentage of training set to use as validation valid_size = 0.2 # convert data to torch.FloatTensor transform = transforms.ToTensor() # choose the training and test datasets train_data = datasets.FashionMNIST(root='data', train=True, download=True, transform=transform) test_data = datasets.FashionMNIST(root='data', train=False, download=True, transform=transform) # obtain training indices that will be used for validation num_train = len(train_data) indices = list(range(num_train)) np.random.shuffle(indices) split = int(np.floor(valid_size * num_train)) train_idx, valid_idx = indices[split:], indices[:split] # define samplers for obtaining training and validation batches train_sampler = SubsetRandomSampler(train_idx) valid_sampler = SubsetRandomSampler(valid_idx) # prepare data loaders (combine dataset and sampler) train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, sampler=train_sampler, num_workers=num_workers) valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, sampler=valid_sampler, num_workers=num_workers) test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers) # specify the image classes classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] print(f""" torch.__version__: {torch.__version__} np.__version__: {np.__version__} """) ###Output torch.__version__: 1.6.0 np.__version__: 1.16.3 ###Markdown Visualize Some Training Data ###Code import matplotlib.pyplot as plt %matplotlib inline # obtain one batch of training images dataiter = iter(train_loader) images, labels = dataiter.next() images = images.numpy() # plot the images in the batch, along with the corresponding labels fig = plt.figure(figsize=(25, 4)) for idx in np.arange(20): ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[]) ax.imshow(np.squeeze(images[idx]), cmap='gray') ax.set_title(classes[labels[idx]]) ###Output _____no_output_____ 
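###Markdown Before building the model, it can be worth confirming that the 80/20 training/validation split defined above has the expected size; the short check below only uses variables already created in the previous cells.
###Code
# Sanity check of the data split (batch counts and sample counts).
print('Training batches:  ', len(train_loader))
print('Validation batches:', len(valid_loader))
print('Test batches:      ', len(test_loader))
print('Training samples:  ', len(train_idx))
print('Validation samples:', len(valid_idx))
###Output _____no_output_____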
###Markdown Архитектура моделиМы создадим MLP (multilayer perceptron), который будем использовать для классификации данных, со следующими характеристиками:* 3 линейных слоя с размерами 256 и 128; * MLP принимает в качестве входных данных выпрямленное изображение (вектор длины 784) и выдает оценку принадлежности объекта к каждому из 10 классов.---Мы проверим влияние различных инициализаций на эту 3-слойную нейронную сеть, обученную с активациями ReLU и оптимизатором Adam. Полученные выводы применимы и к другим нейронным сетям, включая различные активации и оптимизаторы. --- All Zeros or OnesСледуя принципам бритвы Оккама ([Occam's razor](https://en.wikipedia.org/wiki/Occam's_razor)), вы могли бы естестевенно подумать, что достаточно проиницализировать все веса нулем или единицей.При одинаковом весе все нейроны в каждом слое выдают одинаковый результат. Это затрудняет обучение, так как непонятно, какие именно веса в какую сторону нужно менять.Давайте сравним функции потерь для двух моделей, проинициализированных (1) нулями и (2) единицами.Ниже мы используем Pytorch's [nn. init](https://pytorch.org/docs/stable/nn.htmltorch-nn-init), чтобы проинициализировать веса каждого линейного слоя константной. Библиотека init предоставляет ряд функций инициализации, которые дают возможность инициализировать веса каждого слоя в соответствии с его типом.Для линейного слоя веса инициализируются следующим образом:>```if isinstance(m, nn.Linear): nn.init.constant_(m.weight, constant_weight) nn.init.constant_(m.bias, 0)```где `constant_weight` - значение константы (в нашем случае 0 или 1). **Задание**: определите модель c описанной выше архитуктурой ###Code import torch.nn as nn import torch.nn.functional as F # define the NN architecture class Net(nn.Module): def __init__(self, hidden_1=256, hidden_2=128, constant_weight=None): super(Net, self).__init__() # linear layer (784 -> hidden_1) self.fc1 = nn.Linear(784, hidden_1) # linear layer (hidden_1 -> hidden_2) self.fc2 = nn.Linear(hidden_1, hidden_2) # linear layer (hidden_2 -> 10) self.fc3 = nn.Linear(hidden_2, 10) # dropout layer (p=0.2) self.dropout = nn.Dropout(p=.2) # initialize the weights to a specified, constant value if(constant_weight is not None): for m in self.modules(): if isinstance(m, nn.Linear): nn.init.constant_(m.weight, constant_weight) nn.init.constant_(m.bias, 0) def forward(self, x): # flatten image input x = x.view(-1, 28 * 28) # add hidden layer, with relu activation function x = F.relu(self.fc1(x)) # add dropout layer x = self.dropout(x) # add hidden layer, with relu activation function x = F.relu(self.fc2(x)) # add dropout layer x = self.dropout(x) # add output layer x = self.fc3(x) return x ###Output _____no_output_____ ###Markdown Сравнение поведения моделиНиже мы используем функцию `.compare_init_weights`, чтобы сравнить функции потерь на обучении и тесте для двух моделей: `model_0` и `model_1`. Эта функция принимает список моделей (каждая с различными начальными весами), название создаваемого графика, а также загрузчики обучающих и тестовых наборов данных. Для каждой заданной модели эта функцию построит график лосса га обучения для первых 100 батчей и выведет точность валидации после 2 эпох обучения. *Примечание: Если вы использовали батчи меньшего размера, вы можете увеличить количество эпох здесь, чтобы лучше сравнить, как ведут себя модели после просмотра нескольких сотен изображений.* **Задание**: Допишите обучение модели и запустите ячейки ниже, чтобы увидеть разницу между инициализациями всеми нулями и всеми единицами. 
###Code # initialize two NN's with 0 and 1 constant weights model_0 = Net(constant_weight=0) model_1 = Net(constant_weight=1) def _get_loss_acc(model, train_loader, valid_loader): """ Get losses and validation accuracy of example neural network """ n_epochs = 2 learning_rate = 0.001 # Training loss criterion = nn.CrossEntropyLoss() # Optimizer optimizer = optimizer = torch.optim.Adam(model.parameters(), learning_rate) # Measurements used for graphing loss loss_batch = [] for epoch in range(1, n_epochs+1): # initialize var to monitor training loss train_loss = 0.0 ######################## # TODO train the model # ######################## for data, target in train_loader: # clear the gradients of all optimized variables optimizer.zero_grad() # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # calculate the batch loss loss = criterion(output, target) # backward pass: compute gradient of the loss with respect to model parameters loss.backward() # perform a single optimization step (parameter update) optimizer.step() # record average batch loss loss = loss.item() loss_batch.append(loss) # after training for 2 epochs, check validation accuracy correct = 0 total = 0 for data, target in valid_loader: # forward pass: compute predicted outputs by passing inputs to the model output = model(data) # get the predicted class from the maximum class score _, predicted = torch.max(output.data, 1) # count up total number of correct labels # for which the predicted and true labels are equal total += target.size(0) correct += (predicted == target).sum() # calculate the accuracy # to convert `correct` from a Tensor into a scalar, use .item() valid_acc = correct.item() / total # return model stats return loss_batch, valid_acc def compare_init_weights( model_list, plot_title, train_loader, valid_loader, plot_n_batches=100): """ Plot loss and print stats of weights using an example neural network """ colors = ['r', 'b', 'g', 'c', 'y', 'k'] label_accs = [] label_loss = [] assert len(model_list) <= len(colors), 'Too many initial weights to plot' for i, (model, label) in enumerate(model_list): loss, val_acc = _get_loss_acc(model, train_loader, valid_loader) plt.plot(loss[:plot_n_batches], colors[i], label=label) label_accs.append((label, val_acc)) label_loss.append((label, loss[-1])) plt.title(plot_title) plt.xlabel('Batches') plt.ylabel('Loss') plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.show() print('After 2 Epochs:') print('Validation Accuracy') for label, val_acc in label_accs: print(' {:7.3f}% -- {}'.format(val_acc*100, label)) print('Training Loss') for label, loss in label_loss: print(' {:7.3f} -- {}'.format(loss, label)) model_list = [(model_0, 'All Zeros'), (model_1, 'All Ones')] compare_init_weights(model_list, 'All Zeros vs All Ones', train_loader, valid_loader) ###Output _____no_output_____ ###Markdown Как вы можете видеть, точность близка к случайному угадыванию как для нулей, так и для единиц, около 10%.Нейронной сети трудно определить, какие веса должны быть изменены, так как нейроны имеют одинаковый выход для каждого слоя. Чтобы избежать нейронов с одинаковым выходом, давайте использовать уникальные веса. Мы также можем случайным образом выбрать веса, чтобы избежать застревания в локальном минимуме для каждого запуска.Хорошим решением для получения этих случайных весов является выборка из однородного распределения. 
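###Markdown Before moving on to random weights, the symmetry problem described above can be made concrete: with a constant initialization, every neuron in a layer computes exactly the same value for a given input. The sketch below builds a fresh constant-weight network and counts the distinct activations in its first hidden layer (the name `m_const` is only illustrative).
###Code
# Illustration of the symmetry problem: all 256 hidden units produce the same output.
m_const = Net(constant_weight=1)
with torch.no_grad():
    sample = torch.rand(1, 1, 28, 28)
    hidden = F.relu(m_const.fc1(sample.view(-1, 784)))
print('Distinct values in the first hidden layer:', hidden.unique().numel())
###Output _____no_output_____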
Равномерное распределение[Равномерное распределение](https://en.wikipedia.org/wiki/Uniform_distribution) имеет равную вероятность выбора любого числа из набора. Мы будем выбирать из непрерывного распределения, поэтому вероятность выбора одного и того же числа невелика. Uniform Initialization, BaselineДавайте посмотрим, насколько хорошо нейронная сеть тренируется с использованием равномерной инициализации весов, где параметры равномерного распределения `a=0.0` и `b=1.0`. Мы рассмотрим другой способ инициализации весов нейросети (помимо использованного в коде класса Net). Чтобы инициализировать веса вне определения модели, вы можете:1. Определить функцию, которая инициализирует веса нужных слоев (в нашем случае - линейных)2. Инициализировать модель, используя `model.apply(fn)`, которая применяет функцию `fn` к каждому слою модели.Для равномерной инициализации весов нашей модели используйте `weight.data.uniform_`.**Задание:** допишите функцию равномерной инициализации весов ###Code # takes in a module and applies the specified weight initialization def weights_init_uniform(m): classname = m.__class__.__name__ for l in m.modules(): if isinstance(l, nn.Linear): torch.nn.init.uniform_(l.weight, a=0.0, b=1.0) if l.bias.data is not None: l.bias.data.zero_() # for every Linear layer in a model.. # create a new model with these weights model_uniform = Net() model_uniform.apply(weights_init_uniform) # evaluate behavior compare_init_weights([(model_uniform, 'Uniform Weights')], 'Uniform Baseline', train_loader, valid_loader) ###Output _____no_output_____ ###Markdown ---График потерь показывает, что нейронная сеть учится, чего она не делала со всеми нулями или со всеми единицами. Мы движемся в правильном направлении! Общее правило инициализации весовОбщее правило для инициализации весов в нейронной сети состоит в том, чтобы установить их близкими к нулю, но не слишком маленькими. >Хорошая практика заключается в том, чтобы инициализировать веса в диапазоне $[- y, y]$, где $y=1/\sqrt{n}$ ($n$ - это число входов в данный нейрон).Давайте посмотрим, верно ли это: центрируем наш равномерный диапазон относительно нуля, сдвинув его на 0,5. Это даст нам диапазон [-0.5, 0.5] равномерного распределения.**Задание:** поменяйте функцию равномерной инициализации весов, чтобы распределение весов было в диапозоне [-0.5, 0.5]. ###Code # takes in a module and applies the specified weight initialization def weights_init_uniform_center(m): classname = m.__class__.__name__ for l in m.modules(): if isinstance(l, nn.Linear): torch.nn.init.uniform_(l.weight, a=-0.5, b=0.5) if l.bias.data is not None: l.bias.data.zero_() # for every Linear layer in a model.. # create a new model with these weights model_centered = Net() model_centered.apply(weights_init_uniform_center) # takes in a module and applies the specified weight initialization def weights_init_uniform_rule(m): classname = m.__class__.__name__ for l in m.modules(): if isinstance(l, nn.Linear): y = 1 / np.sqrt(l.in_features) torch.nn.init.uniform_(l.weight, a=-y, b=y) if l.bias.data is not None: l.bias.data.zero_() # for every Linear layer in a model.. # create a new model with these weights model_rule = Net() model_rule.apply(weights_init_uniform_rule) model_list = [(model_centered, 'Centered Weights [-0.5, 0.5)'), (model_rule, 'General Rule [-y, y)')] compare_init_weights(model_list, '[-0.5, 0.5) vs [-y, y)', train_loader, valid_loader) ###Output _____no_output_____ ###Markdown Такое поведение действительно многообещающе! 
Мало того, что лосс уменьшается, но, кажется, это происходит очень быстро; всего через две эпохи мы получаем довольно высокую точность на тесте. Это должно дать вам некоторое представление о том, почему хорошая инициализация весов действительно может помочь тренировочному процессу!---Равномерное распределение имеет одинаковый шанс выбрать *любое значение* в диапазоне. Что, если мы используем распределение, которое имеет более высокий шанс выбрать числа ближе к 0? Давайте рассмотрим на нормальное распределение. Hормальное распределениеВ отличие от равномерного распределения, [нормальное распределение](https://en.wikipedia.org/wiki/Normal_distribution) имеет более высокую вероятность выбора числа, близкого к среднему значению. **Задание:** добавьте нормальное распределение:как стандартное отклонение выберите $y=1/\sqrt{n}$ ###Code ## complete this function def weights_init_normal(m): '''Takes in a module and initializes all linear layers with weight values taken from a normal distribution.''' classname = m.__class__.__name__ for l in m.modules(): if isinstance(l, nn.Linear): y = 1 / np.sqrt(l.in_features) torch.nn.init.normal_(l.weight, std=y) if l.bias.data is not None: l.bias.data.zero_() ## -- no need to change code below this line -- ## # create a new model with the rule-based, uniform weights model_uniform_rule = Net() model_uniform_rule.apply(weights_init_uniform_rule) # create a new model with the rule-based, NORMAL weights model_normal_rule = Net() model_normal_rule.apply(weights_init_normal) model_list = [(model_uniform_rule, 'Uniform Rule [-y, y)'), (model_normal_rule, 'Normal Distribution')] compare_init_weights(model_list, 'Uniform vs Normal', train_loader, valid_loader) ###Output _____no_output_____
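###Markdown A quick way to check that the rule-based initialization behaves as intended is to compare the empirical standard deviation of a layer's weights with the target value $y = 1/\sqrt{n}$. The sketch below does this for the first layer (784 inputs) of a freshly initialized network, reusing `Net` and `weights_init_normal` from the cells above.
###Code
# Empirical vs. target standard deviation for the normal, rule-based initialization.
model_check = Net()
model_check.apply(weights_init_normal)
print('Empirical std of fc1 weights:', model_check.fc1.weight.data.std().item())
print('Target 1/sqrt(784):          ', 1 / np.sqrt(784))
###Output _____no_output_____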
data-analysis/pydata-book/ch03.ipynb
###Markdown Built-in Data Structures, Functions, Data Structures and Sequences Tuple ###Code tup = 4, 5, 6 tup type(tup) nested_tup = (4, 5, 6), (7, 8) nested_tup type(nested_tup) nested_tup[0] tuple([4, 0, 2]) tup = tuple('string') tup tup[0] tup = tuple(['foo', [1, 2], True]) tup[2] = False tup[1].append(3) tup (4, None, 'foo') + (6, 0) + ('bar',) ('foo', 'bar') * 4 type((4,None,'foo')) ###Output _____no_output_____ ###Markdown Unpacking tuples ###Code tup = (4, 5, 6) a, b, c = tup b tup = 4, 5, (6, 7) a, b, (c, d) = tup d ###Output _____no_output_____ ###Markdown tmp = aa = bb = tmp ###Code a, b = 1, 2 print(a) print(b) b, a = a, b print(a) print(b) seq = [(1, 2, 3), (4, 5, 6), (7, 8, 9)] for a, b, c in seq: print('a={0}, b={1}, c={2}'.format(a, b, c)) seq = [(1, 2, 3), (4, 5, 6), (7, 8, 9)] for a, b, c in seq: # fstring (python3 only) print(f'a={a}, b={b}, c={c}') values = 1, 2, 3, 4, 5 a, b, *rest = values a, b rest a, b, *_ = values print(a,b, *_) type(*_) ###Output _____no_output_____ ###Markdown Tuple methods ###Code a = (1, 2, 2, 2, 3, 4, 2) a.count(2) ###Output _____no_output_____ ###Markdown List ###Code a_list = [2, 3, 7, None] tup = ('foo', 'bar', 'baz') b_list = list(tup) b_list b_list[1] = 'peekaboo' b_list gen = range(10) gen list(gen) ###Output _____no_output_____ ###Markdown Adding and removing elements ###Code b_list.append('dwarf') b_list b_list.insert(1, 'red') b_list b_list.pop(2) b_list b_list.append('foo') b_list b_list.remove('foo') b_list 'dwarf' in b_list 'dwarf' not in b_list ###Output _____no_output_____ ###Markdown Concatenating and combining lists ###Code [4, None, 'foo'] + [7, 8, (2, 3)] x = [4, None, 'foo'] x.extend([7, 8, (2, 3)]) x ###Output _____no_output_____ ###Markdown everything = []for chunk in list_of_lists: everything.extend(chunk) everything = []for chunk in list_of_lists: everything = everything + chunk Sorting ###Code a = [7, 2, 5, 1, 3] a.sort() a b = ['saw', 'small', 'He', 'foxes', 'six'] b.sort(key=len) b ###Output _____no_output_____ ###Markdown Binary search and maintaining a sorted list ###Code import bisect c = [1, 2, 2, 2, 3, 4, 7] bisect.bisect(c, 2) bisect.bisect(c, 5) bisect.insort(c, 6) c ###Output _____no_output_____ ###Markdown Slicing ###Code seq = [7, 2, 3, 7, 5, 6, 0, 1] seq[1:5] seq[3:4] = [6, 3] seq seq[:5] seq[3:] seq[-4:] seq[-6:-2] seq[::2] seq[::-1] ###Output _____no_output_____ ###Markdown Built-in Sequence Functions enumerate i = 0for value in collection: do something with value i += 1 for i, value in enumerate(collection): do something with value ###Code some_list = ['foo', 'bar', 'baz'] mapping = {} for i, v in enumerate(some_list): mapping[v] = i mapping ###Output _____no_output_____ ###Markdown sorted ###Code sorted([7, 1, 2, 6, 0, 3, 2]) sorted('horse race') ###Output _____no_output_____ ###Markdown zip ###Code seq1 = ['foo', 'bar', 'baz'] seq2 = ['one', 'two', 'three'] zipped = zip(seq1, seq2) list(zipped) seq3 = [False, True] list(zip(seq1, seq2, seq3)) for i, (a, b) in enumerate(zip(seq1, seq2)): print('{0}: {1}, {2}'.format(i, a, b)) pitchers = [('Nolan', 'Ryan'), ('Roger', 'Clemens'), ('Schilling', 'Curt')] first_names, last_names = zip(*pitchers) first_names last_names ###Output _____no_output_____ ###Markdown reversed ###Code list(reversed(range(10))) ###Output _____no_output_____ ###Markdown dict ###Code empty_dict = {} d1 = {'a' : 'some value', 'b' : [1, 2, 3, 4]} d1 d1[7] = 'an integer' d1 d1['b'] 'b' in d1 d1[5] = 'some value' d1 d1['dummy'] = 'another value' d1 del d1[5] d1 ret = d1.pop('dummy') ret d1 
list(d1.keys()) list(d1.values()) d1.update({'b' : 'foo', 'c' : 12}) d1 ###Output _____no_output_____ ###Markdown Creating dicts from sequences mapping = {}for key, value in zip(key_list, value_list): mapping[key] = value ###Code mapping = dict(zip(range(5), reversed(range(5)))) mapping ###Output _____no_output_____ ###Markdown Default values if key in some_dict: value = some_dict[key]else: value = default_value value = some_dict.get(key, default_value) ###Code words = ['apple', 'bat', 'bar', 'atom', 'book'] by_letter = {} for word in words: letter = word[0] if letter not in by_letter: by_letter[letter] = [word] else: by_letter[letter].append(word) by_letter ###Output _____no_output_____ ###Markdown for word in words: letter = word[0] by_letter.setdefault(letter, []).append(word) from collections import defaultdictby_letter = defaultdict(list)for word in words: by_letter[word[0]].append(word) Valid dict key types ###Code hash('string') hash((1, 2, (2, 3))) hash((1, 2, [2, 3])) # fails because lists are mutable d = {} d[tuple([1, 2, 3])] = 5 d ###Output _____no_output_____ ###Markdown set ###Code set([2, 2, 2, 1, 3, 3]) {2, 2, 2, 1, 3, 3} a = {1, 2, 3, 4, 5} b = {3, 4, 5, 6, 7, 8} a.union(b) a | b a.intersection(b) a & b c = a.copy() c |= b c d = a.copy() d &= b d my_data = [1, 2, 3, 4] my_set = {tuple(my_data)} my_set a_set = {1, 2, 3, 4, 5} {1, 2, 3}.issubset(a_set) a_set.issuperset({1, 2, 3}) {1, 2, 3} == {3, 2, 1} ###Output _____no_output_____ ###Markdown List, Set, and Dict Comprehensions [ result = []for val in collection: if ###Code strings = ['a', 'as', 'bat', 'car', 'dove', 'python'] [x.upper() for x in strings if len(x) > 2] ###Output _____no_output_____ ###Markdown dict_comp = { set_comp = { ###Code unique_lengths = {len(x) for x in strings} unique_lengths set(map(len, strings)) loc_mapping = {val : index for index, val in enumerate(strings)} loc_mapping ###Output _____no_output_____ ###Markdown Nested list comprehensions ###Code all_data = [['John', 'Emily', 'Michael', 'Mary', 'Steven'], ['Maria', 'Juan', 'Javier', 'Natalia', 'Pilar']] ###Output _____no_output_____ ###Markdown names_of_interest = []for names in all_data: enough_es = [name for name in names if name.count('e') >= 2] names_of_interest.extend(enough_es) ###Code result = [name for names in all_data for name in names if name.count('e') >= 2] result some_tuples = [(1, 2, 3), (4, 5, 6), (7, 8, 9)] flattened = [x for tup in some_tuples for x in tup] flattened ###Output _____no_output_____ ###Markdown flattened = []for tup in some_tuples: for x in tup: flattened.append(x) ###Code [[x for x in tup] for tup in some_tuples] ###Output _____no_output_____ ###Markdown Functions def my_function(x, y, z=1.5): if z > 1: return z * (x + y) else: return z / (x + y) my_function(5, 6, z=0.7)my_function(3.14, 7, 3.5)my_function(10, 20) Namespaces, Scope, and Local Functions def func(): a = [] for i in range(5): a.append(i) a = []def func(): for i in range(5): a.append(i) ###Code a = None def bind_a_variable(): global a a = [] bind_a_variable() print(a) ###Output _____no_output_____ ###Markdown Returning Multiple Values def f(): a = 5 b = 6 c = 7 return a, b, ca, b, c = f() return_value = f() def f(): a = 5 b = 6 c = 7 return {'a' : a, 'b' : b, 'c' : c} Functions Are Objects ###Code states = [' Alabama ', 'Georgia!', 'Georgia', 'georgia', 'FlOrIda', 'south carolina##', 'West virginia?'] import re def clean_strings(strings): result = [] for value in strings: value = value.strip() value = re.sub('[!#?]', '', value) value = value.title() 
result.append(value) return result clean_strings(states) def remove_punctuation(value): return re.sub('[!#?]', '', value) clean_ops = [str.strip, remove_punctuation, str.title] def clean_strings(strings, ops): result = [] for value in strings: for function in ops: value = function(value) result.append(value) return result clean_strings(states, clean_ops) for x in map(remove_punctuation, states): print(x) ###Output _____no_output_____ ###Markdown Anonymous (Lambda) Functions def short_function(x): return x * 2equiv_anon = lambda x: x * 2 def apply_to_list(some_list, f): return [f(x) for x in some_list]ints = [4, 0, 1, 5, 6]apply_to_list(ints, lambda x: x * 2) ###Code strings = ['foo', 'card', 'bar', 'aaaa', 'abab'] strings.sort(key=lambda x: len(set(list(x)))) strings ###Output _____no_output_____ ###Markdown Currying: Partial Argument Application def add_numbers(x, y): return x + y add_five = lambda y: add_numbers(5, y) from functools import partialadd_five = partial(add_numbers, 5) Generators ###Code some_dict = {'a': 1, 'b': 2, 'c': 3} for key in some_dict: print(key) dict_iterator = iter(some_dict) dict_iterator list(dict_iterator) def squares(n=10): print('Generating squares from 1 to {0}'.format(n ** 2)) for i in range(1, n + 1): yield i ** 2 gen = squares() gen for x in gen: print(x, end=' ') ###Output _____no_output_____ ###Markdown Generator expresssions ###Code gen = (x ** 2 for x in range(100)) gen ###Output _____no_output_____ ###Markdown def _make_gen(): for x in range(100): yield x ** 2gen = _make_gen() ###Code sum(x ** 2 for x in range(100)) dict((i, i **2) for i in range(5)) ###Output _____no_output_____ ###Markdown itertools module ###Code import itertools first_letter = lambda x: x[0] names = ['Alan', 'Adam', 'Wes', 'Will', 'Albert', 'Steven'] for letter, names in itertools.groupby(names, first_letter): print(letter, list(names)) # names is a generator ###Output _____no_output_____ ###Markdown Errors and Exception Handling ###Code float('1.2345') float('something') def attempt_float(x): try: return float(x) except: return x attempt_float('1.2345') attempt_float('something') float((1, 2)) def attempt_float(x): try: return float(x) except ValueError: return x attempt_float((1, 2)) def attempt_float(x): try: return float(x) except (TypeError, ValueError): return x ###Output _____no_output_____ ###Markdown f = open(path, 'w')try: write_to_file(f)finally: f.close() f = open(path, 'w')try: write_to_file(f)except: print('Failed')else: print('Succeeded')finally: f.close() Exceptions in IPython In [10]: %run examples/ipython_bug.py---------------------------------------------------------------------------AssertionError Traceback (most recent call last)/home/wesm/code/pydata-book/examples/ipython_bug.py in () 13 throws_an_exception() 14---> 15 calling_things()/home/wesm/code/pydata-book/examples/ipython_bug.py in calling_things() 11 def calling_things(): 12 works_fine()---> 13 throws_an_exception() 14 15 calling_things()/home/wesm/code/pydata-book/examples/ipython_bug.py in throws_an_exception() 7 a = 5 8 b = 6----> 9 assert(a + b == 10) 10 11 def calling_things():AssertionError: Files and the Operating System ###Code %pushd book-materials path = 'examples/segismundo.txt' f = open(path) ###Output _____no_output_____ ###Markdown for line in f: pass ###Code lines = [x.rstrip() for x in open(path)] lines f.close() with open(path) as f: lines = [x.rstrip() for x in f] f = open(path) f.read(10) f2 = open(path, 'rb') # Binary mode f2.read(10) f.tell() f2.tell() import sys 
sys.getdefaultencoding() f.seek(3) f.read(1) f.close() f2.close() with open('tmp.txt', 'w') as handle: handle.writelines(x for x in open(path) if len(x) > 1) with open('tmp.txt') as f: lines = f.readlines() lines import os os.remove('tmp.txt') ###Output _____no_output_____ ###Markdown Bytes and Unicode with Files ###Code with open(path) as f: chars = f.read(10) chars with open(path, 'rb') as f: data = f.read(10) data data.decode('utf8') data[:4].decode('utf8') sink_path = 'sink.txt' with open(path) as source: with open(sink_path, 'xt', encoding='iso-8859-1') as sink: sink.write(source.read()) with open(sink_path, encoding='iso-8859-1') as f: print(f.read(10)) os.remove(sink_path) f = open(path) f.read(5) f.seek(4) f.read(1) f.close() %popd ###Output _____no_output_____
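###Markdown As a brief recap of the file-handling patterns above, the sketch below writes and reads a small UTF-8 file inside `with` blocks, which close the handle automatically; the temporary file name is arbitrary.
###Code
with open('tmp_utf8.txt', 'w', encoding='utf-8') as handle:
    handle.write('español\n')

with open('tmp_utf8.txt', encoding='utf-8') as handle:
    print(handle.read())

os.remove('tmp_utf8.txt')
###Output _____no_output_____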
Q1/Assignment/Saylani_Assignments/Assignment#01.ipynb
###Markdown Question 01 ###Code print("Twinkle, twinkle, little star,\n \t How I wonder what you are! \n \t\t Up above the world so high, \n \t\t Like a diamond in the sky. \n Twinkle, twinkle, little star, \n \t How I wonder what you are") ###Output Twinkle, twinkle, little star, How I wonder what you are! Up above the world so high, Like a diamond in the sky. Twinkle, twinkle, little star, How I wonder what you are ###Markdown Question 02 ###Code from platform import python_version print(python_version()) ###Output 3.7.3 ###Markdown Question 03 ###Code from datetime import datetime now = datetime.now() print("Date and time : ", now) ###Output Date and time : 2019-11-22 00:20:29.294185 ###Markdown Question 04 ###Code import math r = int(input("Enter Radius of the circle : ")) A = math.pi*(math.pow(r,2)) print("Area of Circle = : ",A) ###Output _____no_output_____ ###Markdown Question 05 ###Code first_Name = input("Enter First Name : ") Last_Name = input("Enter Last Name : ") Full_Name = first_Name + " " + Last_Name reverseName = [] for i in range(len(Full_Name)): reverseName.append(Full_Name[i]) reverseName.reverse() reverseName "".join(reverseName) ###Output _____no_output_____ ###Markdown Question 06 ###Code num1 = int(input("Enter First Value : ")) num2 = int(input("Enter Secound Value : ")) print("Total = ",num1+num2) ###Output _____no_output_____
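###Markdown For Question 05, the same reversal can also be written with slice notation; the sketch below uses a hard-coded stand-in name because `input()` is interactive.
###Code
Full_Name = "John Smith"   # stand-in value instead of input()
print(Full_Name[::-1])
###Output _____no_output_____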
Practical_4.ipynb
###Markdown Practical 4: Modules and Functions - Building Conway's Game of LifeObjectives: In this practical we continue to use functions, modules and conditional statements. We also continue practicing how we access entries from 2D arrays. At the end of this notebook you will have a complete version of Conway's Game of Life which will produce an animation. This will be done through 3 different sections, each of which has an exercise for you to complete: - 1) [Creating different shapes through 2D Numpy array modifications](Part1) * [Exercise 1: Draw still 'life' from Conway's Universe](Exercise1) * [Exercise 2: Draw oscillators and space-ship 'life' from Conway's Universe](Exercise2) - 2) [Creating a function that searches a local neighbourhood for values of '1' and '0'](Part2) * [Exercise 3: Implement the 4 rules of life](Exercise3) * [Exercise 4: Loop through 20 oscillations of the 'Beacon' lifeform](Exercise4) - 3) [Populating Conway's Universe with multiple species](Part3) As with our other notebooks, we will provide you with a template for plotting the results. Also please note that you should not feel pressured to complete every exercise in class. These practicals are designed for you to take outside of class and continue working on them. Proposed solutions to all exercises can be found in the 'Solutions' folder. Please note: After reading the instructions and aims of any exercise, search the code snippets for a note that reads -------'INSERT CODE HERE'------- to identify where you need to write your code Introduction: The gameBefore we get our teeth into the exercises included in this notebook, let's remind ourselves about the basis for Conway's game of life. In Conway's game of life, the Universe is represented as a 2D space [a 2D Numpy array in our case!] on which each cell can either be alive or dead. If we refer to each cell as having one of two states, we can represent this numerically as each cell having either a value of 1 or 0. If we then assume we can draw 2D shapes that represent a 'specie', as a collection of live cells, we might find patterns changing over time.Every cell interacts with its neighbours, whether they are horizontally, vertically of diagonally adjacent. There are 4 laws that define these interactions: - Any live cell with fewer than two live neighbours dies, as if by underpopulation. - Any live cell with two or three live neighbours lives on to the next generation. - Any live cell with more than three live neighbours dies, as if by overpopulation. - Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.So, imagine we are at the beginning of time in our 2D Universe. We need to check the status of every cell and make changes according to these laws. After one sweep through our 2D space, or time step, the status of individual cells will change. Numerically, the distribution of '1's and '0's change across our 2D space. In fact, by defining species as dinstinct groups of cells of a certain shape, as we move through multiple time steps we find 3 types of patterns emerging: - Still life: These patterns remain fixed on the board - Oscillators: These patterns change shape on every iteration, but return to their initial state after a number of generations. 
- Space-ships: These patterns end up moving across the board according to the rules that define life and death.From a programming perspective, implementing these rules through any number of time steps requires a number of procedures to be implemented via code: - 1) Defining 2D arrays that represent species in Conway's Universe. - 2) Creating a function that searches the immediate neighbouring space of each cell for 1's and 0's. - 3) Counting the number of 1's and 0's according to the previous point. - 4) Changing the values of each cell according to the 4 laws stated above. - 5) Looping through points 2-4 for any number of time steps.By sequentially following the proceeding exercises, we will eventually build a variant of Conway's game of life. Creating different shapes through 2D Numpy array modifications Before we can run a simulation, let's create distinct species as groups of cells, and thus patterns. This will help us practice creating 2D arrays and populating each cell with either a '0' or '1' depending on what pattern we want to draw. To generate and thus draw each specie you will be asked to initialise a 2D Numpy array that repeats the pattern seen in the picture. The code to plot, thus visualise, each pattern is given for you. Still life The pictures in Figure 1 and 2 illustrate common types of still life in Conway's Universe. Ive given you some code that reproduces the pattern for 'Block', in the code box below. Read through the code and comments and see if this makes sense. ![](images/Practical_3_figure1.png "Title") Figure 1![](images/Practical_3_figure2.png "Title") Figure 2 ###Code #%matplotlib inline #this is to help us retrieve those love animations! import numpy as np #import the numerical python library, numpy. Changing the referenced library to 'np' is solely for convenience import matplotlib.pyplot as plt #as per the above, much easier to write over and over again from matplotlib import animation, rc # Lets first create our 'Block'. Dont forget, we can call our arrays and matrices anything we want. In this case Im going to use the name of the pattern we are interested in Block = np.zeros((4,4),dtype=int) #Im telling the Python interpreter I want a numpy array that is 4 rows by 4 columns, contains '0' for now and is expecting my data to be of integer type # What does this look like? print("An empty array",Block) # Can you see a matrix of 0s? # Ok cool. Now lets add some black cells by position some values of 1. For the Block pattern, this is done as follows: Block[1,1]=1 Block[1,2]=1 Block[2,1]=1 Block[2,2]=1 # Remeber how we refer to elements in an array in Python? Everything starts at 0, so here im filling in the central 2x2 matrix with 1s. Lets check this out numerically: print(print("A finished array",Block)) #Now lets plot this to recreate the patterns given in figure x. plt.imshow(Block, cmap='binary') #The cmap, or colour map, gives us a black and white board. plt.title('Block') plt.show() ###Output An empty array [[0 0 0 0] [0 0 0 0] [0 0 0 0] [0 0 0 0]] A finished array [[0 0 0 0] [0 1 1 0] [0 1 1 0] [0 0 0 0]] None ###Markdown Exercise 1: Draw still 'life' from Conway's Universe. In this exercise you will need to create a 2D Numpy array that essentially 'draws' both the *Tub* and *Boat* specie from figure 2. ###Code # We have already imported both Numpy and Matplotlib so no need to import those again. 
# Initialise our matrices Tub = np.zeros((5,5),dtype=int) Boat = np.zeros((5,5),dtype=int) #-------'INSERT CODE HERE'------- # Now add '1's to the currently empty 2D array Tub #-------------------------------- plt.subplot(1, 2, 1).imshow(Tub, cmap='binary') #The cmap, or colour map, gives us a black and white board. plt.title('Tub') #plt.show() #-------'INSERT CODE HERE'------- # Now add '1's to the currently empty 2D array Boat #-------------------------------- plt.subplot(1, 2, 2).imshow(Boat, cmap='binary') #The cmap, or colour map, gives us a black and white board. plt.title('Boat') plt.show() ###Output _____no_output_____ ###Markdown Exercise 2: Draw oscillators and space-ship 'life' from Conway's Universe. Following exercise 1, now do the same for 2 types of both *oscillators* and *space ships*: Toad, Beacon, Glider and Light-weight spaceship (LWSS). Can you replicate the patterns shown in figures 1 and 3? Check the size of each array you need, accounting for white space around the outside. Use the space below and copy-paste the code we have already used.![](images/Practical_3_figure3.png "Title") Figure 3 ###Code #Enter the Python code here to create and then visualise a Toad, Beacon, Glider and LWSS #Initialise each matrix Beacon = np.zeros((6,6),dtype=int) Toad = np.zeros((6,6),dtype=int) Glider = np.zeros((5,5),dtype=int) LWSS = np.zeros((6,7),dtype=int) #Enter values for '1' where you would like a black square #-------'INSERT CODE HERE'------- #-------------------------------- #Now visualise your results. plt.subplot(1, 2, 1).imshow(Beacon, cmap='binary') #The cmap, or colour map, gives us a black and white board. plt.title('Beacon') plt.subplot(1, 2, 2).imshow(Toad, cmap='binary') #The cmap, or colour map, gives us a black and white board. plt.title('Toad') plt.show() plt.subplot(1, 2, 1).imshow(Glider, cmap='binary') #The cmap, or colour map, gives us a black and white board. plt.title('Glider') plt.subplot(1, 2, 2).imshow(LWSS, cmap='binary') #The cmap, or colour map, gives us a black and white board. plt.title('LWSS') plt.show() ###Output _____no_output_____ ###Markdown Creating a function that searches a local neighbourhood for values of '1' and '0' Now that we know how to define a species by modifying values in a 2D array, we also need to create a function that can search the neighbouring space of any cell for the occurrence of '1's or '0's. We are going to perform this operation many times so creating a function to do this seems a sensible approach.As an example, let's re-create the 2D array that represents the species 'Beacon' and then pass this array into a new function that will search the neighbouring space of every cell to detect a '1' or '0'. In this example I have given you all of the code to perform this operation. Try to understand the syntax used. Does this make sense? First look at the code and then let's formulate the steps in the function as a narrative.
###Code #Initialise the Beacon matrix Beacon = np.zeros((6,6),dtype=int) #Enter values for '1' where you would like a black square Beacon [1,1]=1 Beacon [1,2]=1 Beacon [2,1]=1 Beacon [3,4]=1 Beacon [4,3]=1 Beacon [4,4]=1 # Now define a function that moves through each cell in our 2D array and searches the neighbouring space # We pass three variables: # rows - Number of rows in our space to be searched # cols - Number of columns in our space to be searched # space - The 2D array space to be searched def search_each_cell(total_rows,total_cols,space): # 1) First, we need to start moving through each cell of our 'space'. # To do this, we will use two nested 'for' loops for row in range(total_rows): for col in range(total_cols): # So 'row' and 'col' define our current cell. # We now need to search a neighbourhood defined as 1 cell distance around this position # We thus need two more nested for loops. When searching this neighbouring space, we want # to count the number of 1's. Thus we also need a variable that we can increment by 1 # every time we find a value of 1. Let's call this integer variable count count = 0 for row2 in range(row-1,row+2): # See here that we can define a start and end to our 'range' for col2 in range(col-1,col+2): # We need to check if our new position, defined by [row2,col2] is off the board if (row2<0) or (row2>=total_rows) or (col2<0) or (col2>=total_cols): # Do nothing pass elif row2 == row and col2 == col: # Do nothing, it's the cell we already have! pass # If we are not off the board or in the same cell as our starting point... # We can check if this new space has a value of 1. If it does, let's count it else: if space[row2,col2]>0: count=count+1 return # At the moment we are not returning anything. Seem odd? We will get back to this. # call the above function search_each_cell(6,6,Beacon) print("Finished function call, nothing to report!") ###Output Finished function call, nothing to report! ###Markdown Now let's try to understand what this function is actually doing. As an algorithm, we have the following steps - 1) Pass the 2D Numpy array to the new function along with variables that define the total number of rows and columns - 2) We need to move through every cell and search its local neighbourhood. Moving through each cell is defined by the first two loops that cycle through both the row and column index of our 2D space. The limits are defined by the variables total_rows and total_cols - 3) For each cell, we will want to have an integer variable that counts how many 1's there are in the local neighborhood. We need to initialise this to 0 for each cell we move through. We call this variable count - 4) Now we need to look at the local space surrounding our cell. For this we need two more nested loops that look 1 row above, 1 row below, 1 column to the left and one to the right. - 5) As we move through this neighborhood we need to check if we are either off the board OR in the same location as the cell we are interested in! - 6) If none of the above is true, then check if a cell has a value greater than 0. If it does, increment variable count by 1. - 7) For each cell on the board, repeat steps 3-6. - 8) When the entire space has been searched, stop the function and return nothing.
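As a quick sanity check on steps 4-6, the same neighbourhood count can also be written with NumPy slicing instead of the two inner loops. The short sketch below is not part of the exercises and the name `count_live_neighbours` is just an illustrative choice; it clips the 3x3 window at the board edges and subtracts the centre cell, so it can never wrap around the board. ###Code
import numpy as np

def count_live_neighbours(space, row, col):
    # Clip the 3x3 window so it never falls off the board
    total_rows, total_cols = space.shape
    r0, r1 = max(row - 1, 0), min(row + 2, total_rows)
    c0, c1 = max(col - 1, 0), min(col + 2, total_cols)
    # Sum the clipped window, then remove the centre cell itself
    return int(space[r0:r1, c0:c1].sum()) - int(space[row, col])

# Quick check against the Beacon pattern defined above
Beacon = np.zeros((6, 6), dtype=int)
for r, c in [(1, 1), (1, 2), (2, 1), (3, 4), (4, 3), (4, 4)]:
    Beacon[r, c] = 1
print(count_live_neighbours(Beacon, 1, 1))  # 2 live neighbours next to the top-left Beacon cell
print(count_live_neighbours(Beacon, 0, 0))  # 1 live neighbour seen from the empty corner cell
###Output _____no_output_____ ###Markdown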
Exercise 3 - Implement the 4 rules of life Now that we have the function that can search the local neighbourhood of any cell and count how many 1's and 0's there are, we can add more code that implements the 4 rules of life and thus keeps the value of our current cell or changes it. Let's remind ourselves what those rules are: - Any live cell with fewer than two live neighbours dies, as if by underpopulation. - Any live cell with two or three live neighbours lives on to the next generation. - Any live cell with more than three live neighbours dies, as if by overpopulation. - Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.So in this exercise we take the shape that has been passed into our function and then create a new shape according to the rules of life. In the exercise you will need to add a series of conditional statements that populate the value of cells in our new shape according to these rules. In other words, we can re-write the above rules as: - If our current cell is alive [=1]: a) If count < 2, current cell = 0 [it dies]. b) If 2<=count<=3, current cell = 1 [it stays alive]. c) If count>3, current cell = 0 [it dies] - If our current cell is dead [=0] a) If count == 3, current cell = 1 [born] Notice the syntax I have used for the last conditional: if count == 3. When checking a value we use two equals signs == as we are not *assigning* a value as we would in, e.g., x = 4. In the code snippet below, I have identified where you need to implement these rules. Notice that we plot the 'Beacon' pattern before we call the function and then the new 2D space which should change the pattern. With this in mind, also note that our function now returns a new version of our 2D space which I have called 'new_space'. If correct, when you run your completed code you should see figure 4.![](images/Practical_3_figure4.png "Title") Figure 4Please note that where I have added 'INSERT CODE HERE' we are using the correct indentation. ###Code #Initialise the Beacon matrix Beacon = np.zeros((6,6),dtype=int) #Enter values for '1' where you would like a black square Beacon [1,1]=1 Beacon [1,2]=1 Beacon [2,1]=1 Beacon [3,4]=1 Beacon [4,3]=1 Beacon [4,4]=1 # Now define a function that moves through each cell in our 2D array and searches the neighbouring space # We pass three variables: # rows - Number of rows in our space to be searched # cols - Number of columns in our space to be searched # space - The 2D array space to be searched def search_each_cell(total_rows,total_cols,space): new_space = np.zeros((total_rows,total_cols),dtype=int) # 1) First, we need to start moving through each cell of our 'space'. # To do this, we will use two nested 'for' loops for row in range(total_rows): for col in range(total_cols): # So 'row' and 'col' define our current cell index. # We now need to search a neighbourhood defined as 1 cell distance around this position # We thus need two more nested for loops. When searching this neighbouring space, we want # to count the number of 1's. Thus we also need a variable that we can increment by 1 # every time we find a value of 1. Let's call this integer variable count.
count = 0 for row2 in range(row-1,row+2): # See here that we can define a start and end to our 'range' for col2 in range(col-1,col+2): # We need to check if our new position, defined by [row2,col2] is off the board if (row2<0) or (row2>=total_rows) or (col2<0) or (col2>=total_cols): # Do nothing pass elif row2 == row and col2 == col: # Do nothing, it's the cell we already have! pass # If we are not off the board or in the same cell as our starting point... # We can check if this new space has a value of 1. If it does, let's count it else: if space[row2,col2]>0: count=count+1 #-------'INSERT CODE HERE'------- # Here you need to introduce conditional statements that act on the value of 'count' # Read through the narrative provided above and remember to obey the spacing rules # You will need to check the value of space[row,col] and then, depending on whether # this is greater than 0 OR equal to 0, implement the rules of life. I have provided # the first example. Please do try to complete this. if space[row,col] > 0: elif space[row,col] == 0: #-------------------------------- return new_space # call the above function Beacon_new = search_each_cell(6,6,Beacon) print("Finished function call, now let's compare our pattern before and after...") #Now visualise your results. plt.subplot(1, 2, 1).imshow(Beacon, cmap='binary') #The cmap, or colour map, gives us a black and white board. plt.title('Beacon - before') plt.subplot(1, 2, 2).imshow(Beacon_new, cmap='binary') #The cmap, or colour map, gives us a black and white board. plt.title('Beacon - after') plt.show() ###Output _____no_output_____ ###Markdown Exercise 4 - Loop through 20 oscillations of the 'Beacon' lifeform Now that we have built the function that can implement the 4 rules of life, all that is left for us to do is to call this function a set number of times to simulate evolution across our Universe. In the code box below, drop your conditional statements from above in the relevant place and click 'Run'. Do you see the Beacon shape oscillating? As before, I have provided the code for plotting but see if the syntax makes sense. ###Code import numpy as np #import the numerical python library, numpy. Changing the referenced library to 'np' is solely for convenience import matplotlib.pyplot as plt #as per the above, much easier to write over and over again from matplotlib import animation, rc from IPython.display import HTML from IPython.display import clear_output import time #Initialise the Beacon matrix Beacon = np.zeros((6,6),dtype=int) #Enter values for '1' where you would like a black square Beacon [1,1]=1 Beacon [1,2]=1 Beacon [2,1]=1 Beacon [3,4]=1 Beacon [4,3]=1 Beacon [4,4]=1 # Now define a function that moves through each cell in our 2D array and searches the neighbouring space # We pass three variables: # rows - Number of rows in our space to be searched # cols - Number of columns in our space to be searched # space - The 2D array space to be searched def search_each_cell(total_rows,total_cols,space): new_space = np.zeros((total_rows,total_cols),dtype=int) # 1) First, we need to start moving through each cell of our 'space'. # To do this, we will use two nested 'for' loops for row in range(total_rows): for col in range(total_cols): # So 'row' and 'col' define our current cell index. # We now need to search a neighbourhood defined as 1 cell distance around this position # We thus need two more nested for loops. When searching this neighbouring space, we want # to count the number of 1's.
Thus we also need a variable that we can increment by 1 # every time we find a value of 1. Let's call this integer variable count count = 0 for row2 in range(row-1,row+2): # See here that we can define a start and end to our 'range' for col2 in range(col-1,col+2): # We need to check if our new position, defined by [row2,col2] is off the board if (row2<0) or (row2>=total_rows) or (col2<0) or (col2>=total_cols): # Do nothing pass elif row2 == row and col2 == col: # Do nothing, it's the cell we already have! pass # If we are not off the board or in the same cell as our starting point... # We can check if this new space has a value of 1. If it does, let's count it else: if space[row2,col2]>0: count=count+1 #-------'INSERT CODE HERE'------- # Here you need to introduce conditional statements that act on the value of 'count' # Read through the narrative provided above and remember to obey the spacing rules if space[row,col] > 0: elif space[row,col] == 0: #-------------------------------- return new_space fig, ax2 = plt.subplots() plt.imshow(Beacon, cmap='binary') #The cmap, or colour map, gives us a black and white board. plt.title('Beacon oscillating') plt.show() # Let us call the function 20 times # Each time we are given a new shape to plot on our figure. # Wait 0.2 seconds before moving on to the next iteration # We should see oscillating behaviour. for x in range(20): clear_output(wait=True) Beacon_new = search_each_cell(6,6,Beacon) Beacon = Beacon_new plt.imshow(Beacon_new, cmap='binary') #The cmap, or colour map, gives us a black and white board. plt.title('Beacon oscillating') plt.show() time.sleep(0.2) ###Output _____no_output_____ ###Markdown Populating Conway's Universe with multiple species Now we are going to use the definition of our shapes to populate a miniature Universe in 2D space! Once we have this, following the same procedure as above, we should see some interesting movement! So let's create a space that is big enough for all of our cell types. To do this, we need to create another matrix: ###Code Universe=np.zeros((50,50),dtype=int) print(Universe) ###Output _____no_output_____ ###Markdown You should now see a snapshot of the Universe matrix that is empty. How do we populate our Universe with individual species? We could enter a value for each cell but this is laborious. Rather, we are going to use our existing matrices that define our species and place them on the Universe grid. We do that by defining the exact space in the Universe we want our cells to go. This is practice in recognising the correct shape of an array/matrix and matching one to another. For example, look at the code below which places the top left corner of an LWSS on the cell in the 12th row and 13th column of my Universe and then visualises the results. Don't forget, indexing in Python starts at 0 so for the 12th row and 13th column, I need to refer to element [11,12]. I'm also using the operator : which allows us to straddle cells bound by a start and a finish. Why have I chosen the range given below? Feel free to change the values, but if you get the size of space needed to fit in an LWSS wrong, Python will complain it cannot broadcast a given shape: ###Code #Define the space in the Universe you would like your LWSS to appear Universe[11:17,12:19] = LWSS #Now visualise our Universe plt.imshow(Universe, cmap='binary') #The cmap, or colour map, gives us a black and white board.
plt.title('Universe [with 1 LWSS]') plt.show() ###Output _____no_output_____ ###Markdown To finish this notebook, in the following code box we fill the Universe with a range of species and then run a simulation. Can you see how we have mapped species shapes into our Universe? It is left for you to copy the working function 'search_each_cell' from above to complete the simulation.Have a play with this! What happens if you increase the number of iterations to 300? Please note, we might want to clear our Universe from the above exercise, in which case we could write: Universe[:,:]=0, but let's keep it in for now. ###Code #Define the space in the Universe you would like your different species to appear Universe[30:36,32:39] = LWSS Universe[11:17,12:19] = LWSS Universe[22:28,12:18] = Beacon Universe[33:39,2:8] = Beacon Universe[19:25,32:38] = Toad Universe[1:6,1:6] = Glider Universe[6:11,25:30] = Boat plt.imshow(Universe, cmap='binary') #The cmap, or colour map, gives us a black and white board. plt.title('Universe [with multiple cell types]') plt.show() #-------'INSERT CODE HERE'------- def search_each_cell(total_rows,total_cols,space): #-------------------------------- fig, ax2 = plt.subplots(figsize=(12, 12)) plt.imshow(Universe, cmap='binary') #The cmap, or colour map, gives us a black and white board. plt.title('Universe simulation') plt.show() for x in range(100): clear_output(wait=True) Universe_new = search_each_cell(50,50,Universe) Universe = Universe_new plt.imshow(Universe_new, cmap='binary') #The cmap, or colour map, gives us a black and white board. plt.title('Universe simulation') plt.show() time.sleep(0.2) ###Output _____no_output_____
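###Markdown A quick note on the slice shapes used when placing species into the Universe: the block you assign into, e.g. `Universe[11:17,12:19]`, must have exactly the same number of rows and columns as the pattern array, otherwise NumPy raises a broadcasting error. The short sketch below is a standalone illustration of both the matching and the mismatching case, separate from the exercise code above. ###Code
import numpy as np

Universe = np.zeros((50, 50), dtype=int)
LWSS = np.zeros((6, 7), dtype=int)  # 6 rows x 7 columns, as initialised earlier

# Matching slice: 17-11 = 6 rows and 19-12 = 7 columns, so the shapes agree
Universe[11:17, 12:19] = LWSS
print(Universe[11:17, 12:19].shape, LWSS.shape)

# Mismatching slice: only 5 rows are selected, so NumPy refuses to broadcast the 6x7 pattern
try:
    Universe[11:16, 12:19] = LWSS
except ValueError as error:
    print(error)
###Output _____no_output_____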
tech_talks/getting-started-web-scraping.ipynb
###Markdown Getting Started with Web Scraping: Approaches and Advice 郭耀仁 | [email protected] Outline- About me- A learning framework for web scraping- A simple end-to-end example- Further reading About me Data science instructor- Senior instructor, NTU (台大) computer science training programme (2,000+ teaching hours)- Data engineer bootcamp at the Institute for Information Industry (資策會) (Python, R)- Chunghwa Telecom Academy (Python data science, Python machine learning)- Python data science instructor, Hua Nan Bank- Python data science instructor, E.SUN Bank- Speaker at the 2017 Data Science Conference (資料科學年會) http://datasci.tw/tony/ Enjoys writing about data science- 2.3k+ Likes at https://www.facebook.com/datainpoint/- 1.6k+ Followers at https://medium.com/datainpoint- https://www.datainpoint.com/- Winner of the Big Data category, 2017 iT 邦幫忙 Ironman contest https://ithelp.ithome.com.tw/ironman/articles/1077 Books- [進擊的資料科學](https://www.datainpoint.com/data-science-in-action/) @碁峰出版社- [輕鬆學習 R 語言, 2nd Edition](https://www.datainpoint.com/r-essentials/) @碁峰出版社 Enjoys distance running- 3k PR: 10:01- 5k PR: 17:35- 10k PR: Around 37:00- 21k PR: Around 79:40- 42.195k PR 02:43:12 Work experience- Senior Data Analyst, [Coupang](https://www.coupang.com/)- Senior Analytical Consultant, [SAS](https://www.sas.com/)- Management Associate, [CTBC](https://www.ctbcbank.com/)- Research Intern, [McKinsey & Company](https://www.mckinsey.com/) Education- MBA, Graduate Institute of Business Administration, NTU- BA, Department of Business Administration, NTU A learning framework for web scraping The framework at a glance- Clarify the web-scraping tasks- Before you learn web scraping- Relevant tools and methods- How to get unstuck when you hit problems Clarify the web-scraping tasks- **Requesting data**- **Parsing data**- Data cleaning- Scheduled execution- Data storage- Data sharing Before you learn web scraping- The Python programming language- HTML/CSS basics- Understand arrays and JSON- Understand basic HTTP and databases- Understand the basics of the command line The Python programming language> Supports the tasks: requesting data, parsing data, data cleaning, data storage, data sharing HTML/CSS basics> Supports the task: parsing data Understand arrays and JSON> Supports the task: parsing data Understand basic HTTP and databases> Supports the tasks: requesting data, data storage https://www.w3schools.com/ Understand the basics of the command line> Supports the tasks: scheduled execution, data sharing https://www.learnenough.com/command-line-tutorial/basics Relevant tools and methods- Chrome DevTools and browser extensions- Python modules and packages Chrome DevTools Chrome browser extensions- [Quick Javascript Switcher](https://chrome.google.com/webstore/detail/quick-javascript-switcher/geddoclleiomckbhadiaipdggiiccfje): toggle JavaScript off- [JSONView](https://chrome.google.com/webstore/detail/jsonview/chklaanhfefbnpoihckbnefhakgolnmc): render JSON data more readably in the browser- [SelectorGadget](https://chrome.google.com/webstore/detail/selectorgadget/mhjhnkcfbdhnjickkkdbjoemdmbfginb): helps locate CSS Selectors- [XPath Helper](https://chrome.google.com/webstore/detail/xpath-helper/hgimnogjllphhhkhlmebbmlgjoejdpjl): helps locate XPath expressions- [EditThisCookie](https://chrome.google.com/webstore/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg): inspect a site's cookies Python modules and packages- Requesting data: `requests`, `selenium`, `urllib`- Parsing data: `BeautifulSoup4`, `PyQuery`- Data cleaning: `NumPy`, `Pandas`, `re`- Data storage: `json`, `sqlite3`, `Pandas`- Data sharing: `flask`, `gunicorn` How to get unstuck when you hit problems- The 80/20 rule- Search with English keywords- Learning communities- Mentors Spend 80% of your effort on the 20% of topics that matter most, using Python programming as an example- For data types, focus on `str`- For data structures, focus on `list` and `dict`- For techniques, focus on `list comprehension` and `generator`- In the Standard Library, focus on `json`, `re` and `urllib`- ...etc. Spend 80% of your effort on the 20% of topics that matter most, using HTML/CSS basics as an example- Pay attention to the tree structure of the markup- Pay attention to the DOM- Pay attention to id and class- Pay attention to CSS Selectors- Pay attention to XPath- ...etc.
A simple end-to-end example Automating the stock-screening indicators from the book [財務自由的世界:財務分析就是一場投資的戰爭](https://www.books.com.tw/products/0010562279) Taking stock of the six tasks- Requesting data- Parsing data- Data cleaning- Data storage- Scheduled execution- Data sharing Requesting data- - ###Code import requests page_url = "https://tw.stock.yahoo.com/d/i/rank.php?t=pri&e=tse&n=100" r = requests.get(page_url) print(r.text) ###Output _____no_output_____ ###Markdown Parsing data Use `BeautifulSoup` ###Code from bs4 import BeautifulSoup soup = BeautifulSoup(r.text, 'html.parser') ###Output _____no_output_____ ###Markdown Data cleaning- Handle text- Handle `list`s- Use `pandas` to handle tables ###Code import datetime import requests from bs4 import BeautifulSoup import pandas as pd def get_price_ranks(): current_dt = datetime.datetime.now().strftime("%Y-%m-%d %X") current_dts = [current_dt for _ in range(200)] stock_types = ["tse", "otc"] price_rank_urls = ["https://tw.stock.yahoo.com/d/i/rank.php?t=pri&e={}&n=100".format(st) for st in stock_types] tickers = [] stocks = [] prices = [] volumes = [] mkt_values = [] ttl_steps = 10*100 each_step = 10 for pr_url in price_rank_urls: r = requests.get(pr_url) soup = BeautifulSoup(r.text, 'html.parser') ticker = [i.text.split()[0] for i in soup.select(".name a")] tickers += ticker stock = [i.text.split()[1] for i in soup.select(".name a")] stocks += stock price = [float(soup.find_all("td")[2].find_all("td")[i].text) for i in range(5, 5+ttl_steps, each_step)] prices += price volume = [int(soup.find_all("td")[2].find_all("td")[i].text.replace(",", "")) for i in range(11, 11+ttl_steps, each_step)] volumes += volume mkt_value = [float(soup.find_all("td")[2].find_all("td")[i].text)*100000000 for i in range(12, 12+ttl_steps, each_step)] mkt_values += mkt_value types = ["上市" for _ in range(100)] + ["上櫃" for _ in range(100)] ky_registered = [True if "KY" in st else False for st in stocks] df = pd.DataFrame() df["scrapingTime"] = current_dts df["type"] = types df["kyRegistered"] = ky_registered df["ticker"] = tickers df["stock"] = stocks df["price"] = prices df["volume"] = volumes df["mktValue"] = mkt_values return df price_ranks = get_price_ranks() price_ranks.head() ###Output _____no_output_____ ###Markdown Data storage Add the code that writes to a database ###Code import sqlite3 import pandas as pd conn = sqlite3.connect('/home/ubuntu/yahoo_stock.db') price_ranks.to_sql("price_ranks", conn, if_exists="append", index=False) ###Output _____no_output_____
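###Markdown To close the loop on the storage step, the stored table can be read back from SQLite with `pandas` for later analysis or sharing. This is a small illustrative addition rather than part of the original talk; it reuses the `price_ranks` table and the same database path as above. ###Code
import sqlite3
import pandas as pd

# Re-open the same SQLite database that the scraper wrote to
conn = sqlite3.connect('/home/ubuntu/yahoo_stock.db')

# Pull the stored rows back into a DataFrame, most recent scrapes first
stored = pd.read_sql_query(
    "SELECT * FROM price_ranks ORDER BY scrapingTime DESC",
    conn
)
print(stored.shape)
print(stored.head())

conn.close()
###Output _____no_output_____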
_downloads/87127ec1c2cdd385ef16e2a5f447b86c/scan.ipynb
###Markdown Scan and Recurrent Kernel**Author**: `Tianqi Chen `_This is an introduction material on how to do recurrent computing in TVM.Recurrent computing is a typical pattern in neural networks. ###Code from __future__ import absolute_import, print_function import tvm import tvm.testing from tvm import te import numpy as np ###Output _____no_output_____ ###Markdown TVM supports a scan operator to describe symbolic loop.The following scan op computes cumsum over columns of X.The scan is carried over the highest dimension of the tensor.:code:`s_state` is a placeholder that describes the transition state of the scan.:code:`s_init` describes how we can initialize the first k timesteps.Here since s_init's first dimension is 1, it describes how we initializeThe state at first timestep.:code:`s_update` describes how to update the value at timestep t. The updatevalue can refer back to the values of previous timestep via state placeholder.Note that while it is invalid to refer to :code:`s_state` at current or later timestep.The scan takes in state placeholder, initial value and update description.It is also recommended(although not necessary) to list the inputs to the scan cell.The result of the scan is a tensor, giving the result of :code:`s_state` after theupdate over the time domain. ###Code m = te.var("m") n = te.var("n") X = te.placeholder((m, n), name="X") s_state = te.placeholder((m, n)) s_init = te.compute((1, n), lambda _, i: X[0, i]) s_update = te.compute((m, n), lambda t, i: s_state[t - 1, i] + X[t, i]) s_scan = tvm.te.scan(s_init, s_update, s_state, inputs=[X]) ###Output _____no_output_____ ###Markdown Schedule the Scan CellWe can schedule the body of the scan by scheduling the update andinit part seperately. Note that it is invalid to schedule thefirst iteration dimension of the update part.To split on the time iteration, user can schedule on scan_op.scan_axis instead. ###Code s = te.create_schedule(s_scan.op) num_thread = 256 block_x = te.thread_axis("blockIdx.x") thread_x = te.thread_axis("threadIdx.x") xo, xi = s[s_init].split(s_init.op.axis[1], factor=num_thread) s[s_init].bind(xo, block_x) s[s_init].bind(xi, thread_x) xo, xi = s[s_update].split(s_update.op.axis[1], factor=num_thread) s[s_update].bind(xo, block_x) s[s_update].bind(xi, thread_x) print(tvm.lower(s, [X, s_scan], simple_mode=True)) ###Output _____no_output_____ ###Markdown Build and VerifyWe can build the scan kernel like other TVM kernels, here we usenumpy to verify the correctness of the result. ###Code fscan = tvm.build(s, [X, s_scan], "cuda", name="myscan") ctx = tvm.gpu(0) n = 1024 m = 10 a_np = np.random.uniform(size=(m, n)).astype(s_scan.dtype) a = tvm.nd.array(a_np, ctx) b = tvm.nd.array(np.zeros((m, n), dtype=s_scan.dtype), ctx) fscan(a, b) tvm.testing.assert_allclose(b.asnumpy(), np.cumsum(a_np, axis=0)) ###Output _____no_output_____ ###Markdown Multi-Stage Scan CellIn the above example we described the scan cell using one Tensorcomputation stage in s_update. It is possible to use multipleTensor stages in the scan cell.The following lines demonstrate a scan with two stage operationsin the scan cell. 
###Code m = te.var("m") n = te.var("n") X = te.placeholder((m, n), name="X") s_state = te.placeholder((m, n)) s_init = te.compute((1, n), lambda _, i: X[0, i]) s_update_s1 = te.compute((m, n), lambda t, i: s_state[t - 1, i] * 2, name="s1") s_update_s2 = te.compute((m, n), lambda t, i: s_update_s1[t, i] + X[t, i], name="s2") s_scan = tvm.te.scan(s_init, s_update_s2, s_state, inputs=[X]) ###Output _____no_output_____ ###Markdown These intermediate tensors can also be scheduled normally.To ensure correctness, TVM creates a group constraint to forbidthe body of scan to be compute_at locations outside the scan loop. ###Code s = te.create_schedule(s_scan.op) xo, xi = s[s_update_s2].split(s_update_s2.op.axis[1], factor=32) s[s_update_s1].compute_at(s[s_update_s2], xo) print(tvm.lower(s, [X, s_scan], simple_mode=True)) ###Output _____no_output_____ ###Markdown Multiple StatesFor complicated applications like RNN, we might need more than onerecurrent state. Scan support multiple recurrent states.The following example demonstrates how we can build recurrence with two states. ###Code m = te.var("m") n = te.var("n") l = te.var("l") X = te.placeholder((m, n), name="X") s_state1 = te.placeholder((m, n)) s_state2 = te.placeholder((m, l)) s_init1 = te.compute((1, n), lambda _, i: X[0, i]) s_init2 = te.compute((1, l), lambda _, i: 0.0) s_update1 = te.compute((m, n), lambda t, i: s_state1[t - 1, i] + X[t, i]) s_update2 = te.compute((m, l), lambda t, i: s_state2[t - 1, i] + s_state1[t - 1, 0]) s_scan1, s_scan2 = tvm.te.scan( [s_init1, s_init2], [s_update1, s_update2], [s_state1, s_state2], inputs=[X] ) s = te.create_schedule(s_scan1.op) print(tvm.lower(s, [X, s_scan1, s_scan2], simple_mode=True)) ###Output _____no_output_____
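###Markdown To make the two-state recurrence above concrete, here is a plain NumPy restatement of its semantics (my own illustration, not part of the TVM tutorial): state one is the cumulative sum of X along the scan (time) axis, and state two accumulates the running sum of the first column of state one, shifted by one timestep. ###Code
import numpy as np

def two_state_scan_reference(X, l):
    # State 1: s1[t, i] = s1[t-1, i] + X[t, i], with s1[0, i] = X[0, i]
    s1 = np.cumsum(X, axis=0)
    # State 2: s2[t, i] = s2[t-1, i] + s1[t-1, 0], with s2[0, i] = 0
    m = X.shape[0]
    s2 = np.zeros((m, l), dtype=X.dtype)
    for t in range(1, m):
        s2[t] = s2[t - 1] + s1[t - 1, 0]
    return s1, s2

X = np.arange(12, dtype="float32").reshape(4, 3)
s1, s2 = two_state_scan_reference(X, l=2)
print(s1)
print(s2)
###Output _____no_output_____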
Chapter08/Exercise8.03/Exercise8.03.ipynb
###Markdown **Exercise 8.03** ###Code #orders made by each customer ord_cust = retail.groupby(by = ['cust_id', 'country'], as_index = False)['invoice'].count() ord_cust.head(10) plt.subplots(figsize = (15, 6)) oc = plt.plot(ord_cust.cust_id, ord_cust.invoice) plt.xlabel('Customer ID') plt.ylabel('Number of Orders') plt.title('Number of Orders made by Customers') plt.show() ord_cust.describe() # 5 customers who ordered the most number of times ord_cust.sort_values(by = 'invoice', ascending = False).head() ord_cust.sort_values(by = 'invoice', ascending = False).tail() spent_cust = retail.groupby(by = ['cust_id', 'country', 'quantity', 'unit_price'], as_index = False)['spent'].sum() spent_cust.head() plt.subplots(figsize = (15, 6)) sc = plt.plot(spent_cust.cust_id, spent_cust.spent) plt.xlabel('Customer ID') plt.ylabel('Total Amount Spent') plt.title('Amount Spent by Customers') plt.show() spent_cust.sort_values(by = 'spent', ascending = False).head() spent_cust.sort_values(by = 'spent', ascending = False).tail() ord_month = retail.groupby(['invoice'])['year_month'].unique().value_counts().sort_index() ord_month om = ord_month.plot(kind='bar', figsize = (15, 6)) om.set_xlabel('Month') om.set_ylabel('Number of Orders') om.set_title('Orders per Month') om.set_xticklabels(('Dec 09', 'Jan 10', 'Feb 10', 'Mar 10', 'Apr 10', 'May 10', 'Jun 10', 'Jul 10', 'Aug 10', 'Sep 10', 'Oct 10', 'Nov 10', 'Dec 10'), rotation = 'horizontal') plt.show() ord_day = retail.groupby('invoice')['day'].unique().value_counts().sort_index() ord_day od = ord_day.plot(kind='bar', figsize = (15, 6)) od.set_xlabel('Day of the Month') od.set_ylabel('Number of Orders') od.set_title('Orders per Day of the Month') od.set_xticklabels(labels = [i for i in range (1, 32)], rotation = 'horizontal') plt.show() ord_dayofweek = retail.groupby('invoice')['day_of_week'].unique().value_counts().sort_index() ord_dayofweek odw = ord_dayofweek.plot(kind='bar', figsize = (15, 6)) odw.set_xlabel('Day of the Week') odw.set_ylabel('Number of Orders') odw.set_title('Orders per Day of the Week') odw.set_xticklabels(labels = ['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun'], rotation = 'horizontal') plt.show() ord_hour = retail.groupby(by = ['invoice'])['hour'].unique().value_counts().sort_index() ord_hour oh = ord_hour.plot(kind='bar', figsize = (15, 6)) oh.set_xlabel('Hour of the Day') oh.set_ylabel('Number of Orders') oh.set_title('Orders per Hour of the Day') oh.set_xticklabels(labels = [i for i in range (7, 21)], rotation = 'horizontal') plt.show() q_item = retail.groupby(by = ['desc'], as_index = False)['quantity'].sum() q_item.head() q_item.sort_values(by = 'quantity', ascending = False).head() q_item.sort_values(by = 'quantity', ascending = False).tail() ord_coun = retail.groupby(['country'])['invoice'].count().sort_values() ord_coun.head() ocoun = ord_coun.plot(kind='barh', figsize = (15, 6)) ocoun.set_xlabel('Number of Orders') ocoun.set_ylabel('Country') ocoun.set_title('Orders per Country') plt.show() del ord_coun['United Kingdom'] ocoun2 = ord_coun.plot(kind='barh', figsize = (15, 6)) ocoun2.set_xlabel('Number of Orders') ocoun2.set_ylabel('Country') ocoun2.set_title('Orders per Country') plt.show() coun_spent = retail.groupby('country')['spent'].sum().sort_values() cs = coun_spent.plot(kind='barh', figsize = (15, 6)) cs.set_xlabel('Amount Spent') cs.set_ylabel('Country') cs.set_title('Amount Spent per Country') plt.show() del coun_spent['United Kingdom'] cs2 = coun_spent.plot(kind='barh', figsize = (15, 6)) cs2.set_xlabel('Amount 
Spent') cs2.set_ylabel('Country') cs2.set_title('Amount Spent per Country') plt.show() ###Output _____no_output_____ ###Markdown Exercise 8.01 ###Code import pandas as pd from sklearn.model_selection import train_test_split from sklearn import svm from sklearn.model_selection import cross_val_score import numpy as np data=pd.read_csv("Shill_Bidding_Dataset.csv") # Drop irrelevant columns data.drop(["Record_ID","Auction_ID","Bidder_ID"],axis=1,\ inplace=True) data.head() data.dtypes data.isnull().sum() ### Check for missing values target = 'Class' X = data.drop(target,axis=1) y = data[target] X_train, X_test, y_train, y_test = train_test_split\ (X.values,y,test_size=0.50,\ random_state=123, \ stratify=y) clf_svm=svm.SVC(kernel='linear', C=1) clf_svm clf_svm.fit(X_train,y_train) clf_svm.score(X_test, y_test) ###Output _____no_output_____ ###Markdown Exercise 8.02 ###Code import graphviz from sklearn import tree from six import StringIO from IPython.display import Image from sklearn.tree import export_graphviz import pydotplus clf_tree = tree.DecisionTreeClassifier() clf_tree = clf_tree.fit(X_train, y_train) dot_data = StringIO() export_graphviz(clf_tree, out_file=dot_data,\ filled=True, rounded=True,\ class_names=['Normal','Abnormal'],\ max_depth = 3,\ special_characters=True,\ feature_names=X.columns.values) graph = pydotplus.graph_from_dot_data(dot_data.getvalue()) Image(graph.create_png()) clf_tree.score(X_test, y_test) ###Output _____no_output_____ ###Markdown Exercise 8.03 ###Code from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=20, max_depth=None,\ min_samples_split=7, random_state=0) clf.fit(X_train,y_train) clf.score(X_test, y_test) ###Output _____no_output_____ ###Markdown Exercise 8.01 ###Code import pandas as pd from sklearn.model_selection import train_test_split from sklearn import svm from sklearn.model_selection import cross_val_score import numpy as np data=pd.read_csv("Shill_Bidding_Dataset.csv") # Drop irrelevant columns data.drop(["Record_ID","Auction_ID","Bidder_ID"],axis=1,\ inplace=True) data.head() data.dtypes data.isnull().sum() ### Check for missing values target = 'Class' X = data.drop(target,axis=1) y = data[target] X_train, X_test, y_train, y_test = train_test_split\ (X.values,y,test_size=0.50,\ random_state=123, \ stratify=y) clf_svm=svm.SVC(kernel='linear', C=1) clf_svm clf_svm.fit(X_train,y_train) clf_svm.score(X_test, y_test) ###Output _____no_output_____ ###Markdown Exercise 8.02 ###Code import graphviz from sklearn import tree from six import StringIO from IPython.display import Image from sklearn.tree import export_graphviz import pydotplus clf_tree = tree.DecisionTreeClassifier() clf_tree = clf_tree.fit(X_train, y_train) dot_data = StringIO() export_graphviz(clf_tree, out_file=dot_data,\ filled=True, rounded=True,\ class_names=['Normal','Abnormal'],\ max_depth = 3,\ special_characters=True,\ feature_names=X.columns.values) graph = pydotplus.graph_from_dot_data(dot_data.getvalue()) Image(graph.create_png()) clf_tree.score(X_test, y_test) ###Output _____no_output_____ ###Markdown Exercise 8.03 ###Code from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier(n_estimators=20, max_depth=None,\ min_samples_split=7, random_state=0)
clf.fit(X_train,y_train) clf.score(X_test, y_test) ###Output _____no_output_____ ###Markdown **Exercise 8.03** ###Code #orders made by each customer ord_cust = retail.groupby(by = ['cust_id', 'country'], as_index = False)['invoice'].count() ord_cust.head(10) plt.subplots(figsize = (15, 6)) oc = plt.plot(ord_cust.cust_id, ord_cust.invoice) plt.xlabel('Customer ID') plt.ylabel('Number of Orders') plt.title('Number of Orders made by Customers') plt.show() ord_cust.describe() # 5 customers who ordered the most number of times ord_cust.sort_values(by = 'invoice', ascending = False).head() ord_cust.sort_values(by = 'invoice', ascending = False).tail() spent_cust = retail.groupby(by = ['cust_id', 'country', 'quantity', 'unit_price'], as_index = False)['spent'].sum() spent_cust.head() plt.subplots(figsize = (15, 6)) sc = plt.plot(spent_cust.cust_id, spent_cust.spent) plt.xlabel('Customer ID') plt.ylabel('Total Amount Spent') plt.title('Amount Spent by Customers') plt.show() spent_cust.sort_values(by = 'spent', ascending = False).head() spent_cust.sort_values(by = 'spent', ascending = False).tail() ord_month = retail.groupby(['invoice'])['year_month'].unique().value_counts().sort_index() ord_month om = ord_month.plot(kind='bar', figsize = (15, 6)) om.set_xlabel('Month') om.set_ylabel('Number of Orders') om.set_title('Orders per Month') om.set_xticklabels(('Dec 09', 'Jan 10', 'Feb 10', 'Mar 10', 'Apr 10', 'May 10', 'Jun 10', 'Jul 10', 'Aug 10', 'Sep 10', 'Oct 10', 'Nov 10', 'Dec 10'), rotation = 'horizontal') plt.show() ord_day = retail.groupby('invoice')['day'].unique().value_counts().sort_index() ord_day od = ord_day.plot(kind='bar', figsize = (15, 6)) od.set_xlabel('Day of the Month') od.set_ylabel('Number of Orders') od.set_title('Orders per Day of the Month') od.set_xticklabels(labels = [i for i in range (1, 32)], rotation = 'horizontal') plt.show() ord_dayofweek = retail.groupby('invoice')['day_of_week'].unique().value_counts().sort_index() ord_dayofweek odw = ord_dayofweek.plot(kind='bar', figsize = (15, 6)) odw.set_xlabel('Day of the Week') odw.set_ylabel('Number of Orders') odw.set_title('Orders per Day of the Week') odw.set_xticklabels(labels = ['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun'], rotation = 'horizontal') plt.show() ord_hour = retail.groupby(by = ['invoice'])['hour'].unique().value_counts().sort_index() ord_hour oh = ord_hour.plot(kind='bar', figsize = (15, 6)) oh.set_xlabel('Hour of the Day') oh.set_ylabel('Number of Orders') oh.set_title('Orders per Hour of the Day') oh.set_xticklabels(labels = [i for i in range (7, 21)], rotation = 'horizontal') plt.show() q_item = retail.groupby(by = ['desc'], as_index = False)['quantity'].sum() q_item.head() q_item.sort_values(by = 'quantity', ascending = False).head() q_item.sort_values(by = 'quantity', ascending = False).tail() ord_coun = retail.groupby(['country'])['invoice'].count().sort_values() ord_coun.head() ocoun = ord_coun.plot(kind='barh', figsize = (15, 6)) ocoun.set_xlabel('Number of Orders') ocoun.set_ylabel('Country') ocoun.set_title('Orders per Country') plt.show() del ord_coun['United Kingdom'] ocoun2 = ord_coun.plot(kind='barh', figsize = (15, 6)) ocoun2.set_xlabel('Number of Orders') ocoun2.set_ylabel('Country') ocoun2.set_title('Orders per Country') plt.show() coun_spent = retail.groupby('country')['spent'].sum().sort_values() cs = coun_spent.plot(kind='barh', figsize = (15, 6)) cs.set_xlabel('Amount Spent') cs.set_ylabel('Country') cs.set_title('Amount Spent per Country') plt.show() del coun_spent['United Kingdom'] cs2 =
coun_spent.plot(kind='barh', figsize = (15, 6)) cs2.set_xlabel('Amount Spent') cs2.set_ylabel('Country') cs2.set_title('Amount Spent per Country') plt.show() ###Output _____no_output_____
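###Markdown The plots above rely on derived columns such as `year_month`, `day`, `day_of_week`, `hour` and `spent` that are assumed to already exist on the `retail` DataFrame. As a hedged sketch only (the raw column names `invoice_date`, `quantity` and `unit_price` are assumptions about the underlying retail data, and the loading of `retail` itself is not shown in this notebook), such columns could be built along these lines: ###Code
import pandas as pd

# Assumed raw columns: invoice_date (timestamp), quantity and unit_price (numeric)
retail['invoice_date'] = pd.to_datetime(retail['invoice_date'])

# Calendar features used by the grouped plots above
retail['year_month'] = retail['invoice_date'].dt.strftime('%Y%m')
retail['day'] = retail['invoice_date'].dt.day
retail['day_of_week'] = retail['invoice_date'].dt.dayofweek  # 0 = Monday ... 6 = Sunday
retail['hour'] = retail['invoice_date'].dt.hour

# Amount spent per line item
retail['spent'] = retail['quantity'] * retail['unit_price']
###Output _____no_output_____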
deep_dive/2_iteration_generators/7_pipline_project.ipynb
###Markdown Pipeline ProjectThe goal is to write a pipeline that reads data from the source file, `cars.csv`, pushes it through some filters and prints out the results. ###Code import csv from contextlib import contextmanager file = '/mnt/data-ubuntu/Projects/Learning_PY_hardway/data/deep_dive/cars.csv' idx_car = 0 idx_mpg = 1 idx_cylinders = 2 idx_displacement = 3 idx_horsepower = 4 idx_weight = 5 idx_acceleration = 6 idx_model = 7 idx_origin = 8 converters = (str, float, int, float, float, float, float, int, str) def data_reader(f_name): """ Read data from f_name. """ with open(f_name) as f: dialect = csv.Sniffer().sniff(f.read(2000)) f.seek(0) yield from csv.reader(f, dialect = dialect) def data_parser(f_name, converters): """ Change data type accordingly. """ data = data_reader(f_name) next(data) # Skip the header. for row in data: row = [converter(e) for converter, e in zip(converters, row)] yield row def coroutine(fn): """ Coroutine decorator. """ def inner(*args, **kwargs): g = fn(*args, **kwargs) next(g) return g return inner @coroutine def data_filter(fn_filter, next_coroutine): """ Filter data based on fn_filter and send data to next_coroutine. """ while True: data = yield if fn_filter(data): next_coroutine.send(data) @coroutine def printer(): """ Print the results. """ while True: data = yield print(data) @coroutine def pipline(*filters_words): """ The pipline to process the data. """ p = printer() for filters_word in filters_words: p = data_filter(lambda d, v=filters_word: v in d[0], p) while True: received = yield p.send(received) data = data_parser(file, converters) p = pipline('Toyota', 'Mark') for row in data: p.send(row) ###Output ['Toyota Corolla Mark ii', 24.0, 4, 113.0, 95.0, 2372.0, 15.0, 70, 'Japan'] ['Toyota Corolla Mark II (sw)', 23.0, 4, 120.0, 97.0, 2506.0, 14.5, 72, 'Japan'] ['Toyota Mark II', 20.0, 6, 156.0, 122.0, 2807.0, 13.5, 73, 'Japan'] ['Toyota Mark II', 19.0, 6, 156.0, 108.0, 2930.0, 15.5, 76, 'Japan']
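###Markdown The same coroutine pattern extends naturally to filters on other fields of each row. As a small illustrative variation (not part of the original exercise; `mpg_filter` is my own name), a numeric filter stage can be chained in front of the `pipline` coroutine in exactly the same way, here keeping only cars whose mpg value exceeds a threshold. It reuses the helpers defined in the cell above. ###Code
@coroutine
def mpg_filter(threshold, next_coroutine):
    """ Keep only rows whose mpg value (index idx_mpg) exceeds the threshold. """
    while True:
        row = yield
        if row[idx_mpg] > threshold:
            next_coroutine.send(row)

# Chain: mpg filter -> name filter -> printer, reusing the coroutines defined above
name_pipeline = pipline('Toyota')
high_mpg = mpg_filter(30.0, name_pipeline)

for row in data_parser(file, converters):
    high_mpg.send(row)
###Output _____no_output_____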
One-Shot Classification/One_Shot_Classification_V2.ipynb
###Markdown One Shot Learning on Omniglot DatasetThe [Omniglot](https://github.com/brendenlake/omniglot) dataset contains 1623 different handwritten characters from 50 different alphabets.Each of the 1623 characters was drawn online via Amazon's Mechanical Turk by 20 different people.This dataset has been a standard benchmark for one-shot learning algorithms.Some of the machine learning algorithms used for learning this dataset over the years are listed below in order of accuracy:* Hierarchical Bayesian Program Learning - 95.2%* Convolutional Siamese Net - 92.0%* Affine model - 81.8%* Hierarchical Deep - 65.2%* Deep Boltzmann Machine - 62.0%* Siamese Neural Net - 58.3%* Simple Stroke - 35.2%* 1-Nearest Neighbor - 21.7%This notebook implements a [Convolutional Siamese Neural Network](https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf) using a background set of 30 alphabets for training and evaluates on a set of 20 alphabets. How is the data?The Omniglot data set contains 50 alphabets total. It is split into a background set of 30 alphabets and an evaluation set of 20 alphabets.To compare with the results in the paper [Siamese Neural Networks for One-shot Image Recognition](https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf), only the background set should be used to learn general knowledge about characters (e.g., hyperparameter inference or feature learning). One-shot learning results are reported using alphabets from the evaluation set. Where is the data stored?The actual zipped Omniglot dataset, extracted and processed data (Pickled data) are stored in my google drive folder named "One-Shot Classification". To mount the drive, we use 2 modules, namely:1. google.colab.auth - for authentication to drive2. google.colab.drive - for mounting from drive Hyper Parameter Optimisation - HyperOpt[Hyperopt](http://github.com/hyperopt/hyperopt) is a Python library for serial and parallel optimization over awkward search spaces, which may include real-valued, discrete, and conditional dimensions.In this notebook we use hyperopt trials to find the best hyper-parameters for the One-Shot Classification model. Hyperopt uses Bayesian optimization techniques to search. Tree-structured Parzen Estimator - TPE is used in this implementation.
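As a small, self-contained illustration of how Hyperopt's TPE search is driven (the search space and objective here are toy assumptions, not the ones used for the Siamese network, and `hyperopt` is assumed to be installed, which the next cell does), an `fmin` call looks like this: ###Code
from hyperopt import hp, fmin, tpe, Trials, STATUS_OK

# Toy search space: a learning rate sampled log-uniformly between exp(-8) and exp(0)
space = {'lr': hp.loguniform('lr', -8, 0)}

def objective(params):
    # Stand-in loss; a real objective would train and validate the model with these params
    loss = (params['lr'] - 0.05) ** 2
    return {'loss': loss, 'status': STATUS_OK}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=25, trials=trials)
print(best)  # the suggested 'lr' should land close to 0.05
###Output _____no_output_____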
###Code !pip install -q hyperopt from google.colab import auth, drive auth.authenticate_user() drive.mount('/content/drive') ###Output _____no_output_____ ###Markdown Imported Normal Libraries* matplotlib.pyplot - To plot images* numpy - Tensor manipulation* os - File system manipulation after mounting from Drive* PIL.Image - To convert image files into numpy arrays* pickle - To store objects are files (mainly numpy arrays in this implementation) ###Code %matplotlib inline import matplotlib.pyplot as plt import tensorflow as tf import numpy as np import os from PIL import Image import numpy.random as rnd import pickle import gc ###Output _____no_output_____ ###Markdown Tensorflow, Keras and Hyperopt Imports* **Keras** * models - To create, load and save models * layers - To create different types of layers * preprocessing.image - To generate image transformations * backend - Utilities for manipulating keras variables * optimizers - To create optimizers * regularizers - To create regularizers * initializers - To create kernel and bias initializers * legacy - To refer keras source* **Tensorflow** * logging - To avoid unnecessary prints * test - To check if GPU is available* **Hyperopt** * hp - All randomization functions * fmin - Optimization function to minimize objective * tpe - Tree-structured Parzen Estimator * Trails - Object to hold each run information * STATUS_OK - Flag to indicate trial was successful ###Code from tensorflow.python.keras.models import Model, Sequential from tensorflow.python.keras.layers import InputLayer, Input, Lambda from tensorflow.python.keras.layers import Reshape, MaxPooling2D, BatchNormalization from tensorflow.python.keras.layers import Conv2D, Dense, Flatten from tensorflow.python.keras.preprocessing.image import ImageDataGenerator from tensorflow.python.keras.models import load_model from tensorflow.python.keras import backend as K from tensorflow.python.keras.optimizers import Optimizer from tensorflow.python.keras.regularizers import l2 from tensorflow.python.keras.initializers import RandomNormal from tensorflow import test, logging from hyperopt import hp, fmin, tpe, STATUS_OK, Trials from keras.legacy import interfaces test.gpu_device_name() tf.__version__ ###Output _____no_output_____ ###Markdown File System StructureOne-Shot Classification* Background * Alphabet_of_the_Magi * character01 * 0709_01.png * 0709_02.png * . * . * . * character02 * . * . * . * Anglo-Saxon_Futhorc * . * . * .* Evaluation * Angelic * character01 * 0965_01.png * 0965_02.png * . * . * . * character02 * . * . * . * Atemayar_Qelisayer * . * . * . 
###Code #path to main folder one_shot_path = os.path.join("drive", "My Drive", "Colab Notebooks", "One-Shot Classification") #path to background and evaluation data background_path = os.path.join(one_shot_path, "background") evaluation_path = os.path.join(one_shot_path, "evaluation") #path to final model recognition_model_path = os.path.join(one_shot_path, "recognition_model.h5") #seed seed = 20 ###Output _____no_output_____ ###Markdown Preprocessing and PicklingBoth the training and test data are converted in a 4 dimensional array of the following form,>(Character_id, Writer_id, Pixel_X, Pixel_Y)> where,* Character_id - number given in the filename as **number**_writer.png* Writer_id - writer given in the filename as number_**writer**.png* Pixel_X - X coordinate of pixel value* Pixel_Y - Y coordinate of pixel valueAlong with this data, store a mapping contating the alphabets and it's starting character_id number ###Code ##creating training set train_data = np.ndarray(shape=(964, 20, 105, 105)) train_alphabets = dict() #Preprocessing #for alphabet in os.listdir(background_path): # alphabet_path = os.path.join(background_path, alphabet) # for character in os.listdir(alphabet_path): # character_path = os.path.join(alphabet_path, character) # for image in os.listdir(character_path): # index = int(image[0:4]) - 1 # writer = int(image[5:7]) - 1 # train_data[index][writer] = np.array(Image.open(os.path.join(character_path, image))) # train_alphabets[alphabet] = index if alphabet not in train_alphabets or train_alphabets[alphabet] > index else train_alphabets[alphabet] #with open(os.path.join("train.pickle"), 'wb') as f: # pickle.dump([train_data, train_alphabets], f, protocol=2) ##creating test set test_data = np.ndarray(shape=(659, 20, 105, 105)) test_alphabets = dict() #Preprocessing #for alphabet in os.listdir(evaluation_path): # alphabet_path = os.path.join(evaluation_path, alphabet) # for character in os.listdir(alphabet_path): # character_path = os.path.join(alphabet_path, character) # for image in os.listdir(character_path): # index = int(image[0:4]) - 965 # writer = int(image[5:7]) - 1 # test_data[index][writer] = np.array(Image.open(os.path.join(character_path, image))) # test_alphabets[alphabet] = index if alphabet not in test_alphabets or test_alphabets[alphabet] > index else test_alphabets[alphabet] #with open(os.path.join("test.pickle"), 'wb') as f: # pickle.dump([test_data, test_alphabets], f, protocol=2) ###Output _____no_output_____ ###Markdown Loading Preprocessed training and test data (train.pickle and test.pickle) ###Code with open(os.path.join(one_shot_path, "train.pickle"), 'rb') as f: train_data, train_alphabets = pickle.load(f, encoding='latin1') with open(os.path.join(one_shot_path, "test.pickle"), 'rb') as f: test_data, test_alphabets = pickle.load(f, encoding='latin1') batch_size = 128 image_size = 105 ###Output _____no_output_____ ###Markdown Image Augmentation RangesDuring data generation, images are transformed to provide a more robust training. 
The parameters for the transformations are defined as follows:* Rotation Range - maximum degrees upto which image can be rotated clockwise and anti-clockwise* Width Shift Range - maximum number of pixels upto which image can be shifted to the left or right* Height Shift Range - maximum number of pixels upto which image can be shifted to the up or down* Shear Range - maximum degree of shearing allowed* Zoom Range ###Code #@title Data Augmentation rotation_range = 10 #@param {type:"slider", min:0, max:90, step:1} width_shift_range = 2 #@param {type:"slider", min:0, max:10, step:0.1} height_shift_range = 2 #@param {type:"slider", min:0, max:10, step:0.1} shear_range = 0.3 #@param {type:"slider", min:0, max:1, step:0.1} zoom_range = 0.2 #@param {type:"slider", min:0, max:1, step:0.01} ###Output _____no_output_____ ###Markdown Batch Generation* Create X1, X2 which will contain batch_size number of images paired against one other for comparison * Create Y which will contain the results of comparison for the whole batch* Each alphabet should get equal representation in the training * s_alphabets -> alphabet's starting character_ids in sorted order * times -> number of times each alphabet can be represented in a single batch equally for both same and different pairs * reminder -> number of times alphabets have to be picked at random cause batch size is not a multiple of the number of alphabets* For each alphabet chosen, create same (writer) and different (character) pairs * w_range -> writers to chose from * c_range -> characters to choose from * transform_image -> image augmentation function* Yield created batch ###Code # this is the augmentation configuration we will use for training datagen = ImageDataGenerator() def transform_image(image): return datagen.apply_transform(image.reshape((image_size, image_size, 1)), transform_parameters = {'theta': rnd.uniform(-rotation_range, rotation_range), 'tx' : rnd.uniform(-width_shift_range, width_shift_range), 'ty' : rnd.uniform(-height_shift_range, height_shift_range), 'shear': rnd.uniform(-shear_range, shear_range), 'zx' : rnd.uniform(-zoom_range, zoom_range), 'zy' : rnd.uniform(-zoom_range, zoom_range) }) #generate image pairs [x1, x2] with target y = 1/0 representing same/different def datagen_flow(datagen): while True: X1 = np.ndarray(shape=(batch_size, image_size, image_size, 1)) X2 = np.ndarray(shape=(batch_size, image_size, image_size, 1)) Y = np.ndarray(shape=(batch_size,)) s_alphabets = sorted(train_alphabets.values()) a_indices = list(range(len(s_alphabets))) times = batch_size//(2*len(a_indices)) remainder = (batch_size//2)%len(a_indices) aindices = a_indices*times + list(rnd.choice(a_indices, remainder)) rnd.shuffle(aindices) w_range = list(range(20)) i = 0 for a in aindices: end_index = (len(train_data) if a+1 == len(s_alphabets) else s_alphabets[a+1]) c_range = list(range(s_alphabets[a], end_index)) writers = rnd.choice(w_range, 2) same = rnd.choice(c_range) X1[2*i] = transform_image(train_data[same, writers[0]]) X2[2*i] = transform_image(train_data[same, writers[1]]) Y[2*i] = 1.0 writers = rnd.choice(w_range, 2) diff = rnd.choice(c_range, 2) X1[2*i + 1] = transform_image(train_data[diff[0], writers[0]]) X2[2*i + 1] = transform_image(train_data[diff[1], writers[1]]) Y[2*i + 1] = 0.0 i += 1 yield [X1, X2], Y train_generator = datagen_flow(datagen) ###Output _____no_output_____ ###Markdown Modification of Keras SGD OptimizerA few modifications have been done to the original Keras SGD optimizer to include* Learning rate for each layer* Maximum 
momentum for each layer* Linearly increase momentum from 0.5 to Maximum momemtum based on the number of epochs ###Code class Modified_SGD(Optimizer): """ Modified Stochastic gradient descent optimizer. Reorganized SGD to allow layer-wise momentum and learning-rate Includes support for momentum, learning rate decay, and Nesterov momentum. Includes the possibility to add multipliers to different learning rates in each layer. # Arguments lr: float >= 0. Learning rate. momentum: float >= 0. Parameter updates momentum. decay: float >= 0. Learning rate decay over each update. nesterov: boolean. Whether to apply Nesterov momentum. lr_values: dictionary with learning rate for a specific layer for example: # Setting the Learning rate multipliers lr_values = {} lr_values['conv1']=1 momentum_values: dictionary with momentum for a specific layer """ def __init__(self, lr=1, momentum=0.5, decay=0., n_epochs=200, nesterov=False, lr_values=None, momentum_values=None, **kwargs): super(Modified_SGD, self).__init__(**kwargs) with K.name_scope(self.__class__.__name__): self.iterations = K.variable(0, dtype='int64', name='iterations') self.lr = K.variable(lr, name='lr') self.momentum = K.variable(momentum, name='momentum') self.decay = K.variable(decay, name='decay') self.initial_decay = decay self.nesterov = nesterov self.lr_values = lr_values self.momentum_values = momentum_values self.n_epochs = n_epochs @interfaces.legacy_get_updates_support def get_updates(self, loss, params): grads = self.get_gradients(loss, params) self.updates = [K.update_add(self.iterations, 1)] lr = self.lr if self.initial_decay > 0: lr *= (1. / (1. + self.decay * K.cast(self.iterations, K.dtype(self.decay)))) # momentum shapes = [K.int_shape(p) for p in params] moments = [K.zeros(shape) for shape in shapes] self.weights = [self.iterations] + moments for p, g, m in zip(params, grads, moments): if self.lr_values != None: if p.name in self.lr_values: new_lr = lr * self.lr_values[p.name] else: new_lr = lr else: new_lr = lr if self.momentum_values != None: if p.name in self.momentum_values: new_momentum = self.momentum_values[p.name] if self.iterations >= self.n_epochs else (((self.momentum_values[p.name] - self.momentum)/self.n_epochs)*self.iterations + self.momentum) else: new_momentum = self.momentum else: new_momentum = self.momentum # velocity v = new_momentum * m - new_lr * g self.updates.append(K.update(m, v)) if self.nesterov: new_p = p + new_momentum * v - new_lr * g else: new_p = p + v # Apply constraints. 
if getattr(p, 'constraint', None) is not None: new_p = p.constraint(new_p) self.updates.append(K.update(p, new_p)) return self.updates def get_config(self): config = {'lr': float(K.get_value(self.lr)), 'momentum': float(K.get_value(self.momentum)), 'decay': float(K.get_value(self.decay)), 'nesterov': self.nesterov, 'lr_values': self.lr_values, 'momentum_values': self.momentum_values} base_config = super(Modified_SGD, self).get_config() return dict(list(base_config.items()) + list(config.items())) ###Output _____no_output_____ ###Markdown Weight and Bias Initialization* Weights are randomly initialized with a mean 0 and standard deviation 0.01 over a normal distribution* Biases are randomly initialized with a mean 0.5 and standard deviation 0.01 over a normal distribution ###Code w_init = RandomNormal(mean=0.0, stddev=1e-2, seed=seed) b_init = RandomNormal(mean=0.5, stddev=1e-2, seed=seed) #image shape input_shape=(image_size, image_size, 1) ###Output _____no_output_____ ###Markdown 20-Way One-Shot Classification Task *Example Image: an embedded figure illustrating a 20-way one-shot classification trial (the base64 image data is truncated here and has been omitted).*
LMcpC+Hsj48Pd0vCG6wXPSzWCA3ccd6YTIYPcNHbYXBsvfY0eGyJ6Gza+eEoiexbBLFJjB7cDb5HgDVH6CRD1iu0q9AsymNUX8pCRbLEkNBRAMtMTRHjsMvx8h/BE9guYRUtJf+3xCx7AlZdoyvJgH8HEQsZ/5vHycZiFi+B/6xfB/8Y/k++JtZduWiOn8MqCydMdOV9bNQnjTBqCwNpJkOJKN0BSLfj5+dNGP1iFm0mXw7uYrFEpt4w5kraS12LKSyr76WB8mAFoXL0rrtfZcmg+XaS6d5BC7LgiQl4MWFomenk4ywWb51jSjykPF6EirTNLhVDSEGSxxKG7m9INjXHAYc1xJm+cyxnKG5UeVoGWNRRhsVCVTl2VgsyzVmOs4gwQhgzlmohsgr9znYa1jXpo5x+ULCLEnKKgKrIpadUWFXeXDlSZezVCRVEsoWps1X2qMmahM4DMNvB3MPxkWDApYzgDXxA8T8hA7vmSytw4NHcsoKBU+AmkGi+H24OHAs/k/AEnvLSfDpySxH/ha1PKVvhmiyPbZ/ow6mjf30j2KPTNeLdZRkuYMi2PVUll2SVLYcz6ks8+SuFjZwoC1NmyQQNJrRZnP0uwCt5aXkWGrJ4pLP+2ey1FEP/J1r2xQNy0x7O1g8/sG4UUO1Gs5XaGT04cQCyGadbor+VETVBSydqrwrl2U0SeEk4nbHkLO3orl2qJfCqRBS85+VOdnk7gJZQdvrnKhhiuGWuRACljYOQZO/cFg6JDYizPqIk/KVB5ysku4aDw6FpflBja2Yu4G7rtFEJC9ycZKG9JlvCeawYehZsz7SCDYfQi9YUIYW0atjJ2CSwWmw7JpV/yfUj3QcnI1Dth47RCL3cbqLWA7vqs9jCjFBcoSVb6tit3NYzlOH5FQyoCHN8lpfZ8r2lGfumPNH0Vt+aIPmEnoFdkZPPmKw3Jdiu+uUsKgT1u3XEsH46F6KtetwA3TOvJx+uUKCM0y309XUxqkArA8pow+ADZpVJ+qUorLszGWT8km20taQM9LUTT5T+Sw7ZrFjqCei1/UgK5Roe8Vk2SlCZQt6T1gsJf3lSpzLuGss81pwme6dq1rCYCuBifODsB3gTrINZs5n2dHHsGe0S2f5IZtGhOC4ITHSFj1EQPAs3sLgENzA1HXh+PtwOgLaPK9lt/gsdRxtZ8R+6SwnEomuNWhGfF0TbSYRxIp1L5hW0ixNHJud9oXmdYwPtt0FK5eljiZVgBqldoDO8iyXd9KAuw2IZrVzBUwNJPexpyPnegmC4LJzXbfFd7eEk9nBWW2VPGGzRPv/FWtAIU5hp4jJ57FE0NzhQiL4j9albZq2F0+Cyz3tuZlwgUYxwGuxe76brcaSzlL1iFaCX5ODhhPSh4XzVJZIEVlLuGnuMlZTNVxELfLTpHlexIVF0N5T8RlbSmOKGRDfUO6pUF303s7th57J0pnhNXcSJCEIdxKMOQQPj+xo6TMDCPACWN0zxvQMju0dgiV9ZLeSomVd7/bItraW2FGELJFZ0X9YXPHydn2QVU7u42sMO6LZ9p/QWVpjfDzuDkG2z+C4PIzRHr6fSikTLp+lm1woRzCiA2weBTJRBVreE0KzOUh0lh1vDQ3we3+B42E8D2amlA7ko02TpxOinh/b2ryCBMyBYq3h/fdhfmLzZdrQ9BgsEc8G+AEFnIss6ejXfJKFx1sPg8cD3Oq0D0Pam8HZVzT1PAX67QrfiXlJZ1yXbnve0Tc1usImv3mn4jVn+UF1P6AHLrAt5qDhySf5M1hqmTxLk2yodE8VgbM+TZHMxFhcwhzztfyJwALYBjtWTuJnsEQ9l/cJR0u4hhwvfAQULCQOc7Rg56eQ8qXxJJafiI/tBMqRbZWw/ULE774STt0RlvnPP8+yM7s8nHV9BVS/D8tijv0PLH8M5qjs1juzvOMpLJd/A0t1N3hFwZEn4ilawa/HP5bvg38s3wf/WP4WRN9OsfkDWOrL8XeVjj+ApRbIW5fdOAnDxGsb1n8AS2RDHaSeU9xkk9ujm5Zn4U9gaa+k/KY2DqndesMhTkBpOi/+BJY47C98RukdAab+CHHQvHYpul/PMvIi7BITDuYOYHmu1uO+mYXyy1laPZKzs6UVn2wgBrjU3Kn6qlFn41ezjHA52HSCc6a4lS2JTyxo9N4+UW/v+zpL07Gc6r2ZlvO0lHd9UZVcAr6dHsG8nfE5hAvllrAvs9zhcjplNr9xQG9fsricEGgWekWX+CyVbRGW11SE/COzeiYFl6UuMS7YWb6/5ZFA97rsL4+weI6LTqldGclnGeceSdWfIvRzv50Ht7vE4rHsbuDxfHIbazgYqk9Kj9hrJCo0q0/xQevu5w84qrVKH3yWkzxFNi+7Wohjp54gxmOp9yEVsETvG864Px4pFIr/o00pLGefKiWXI4C0WmsyLEcwnoxGIyOXEd21PEtRPNInxCLY66QQCg51q3SW488mnvfqVTj4LM+EpdEIoMuzFFfdysv9zWCt4IQHUoosgvHjulQDSpiRB5zlWtsKuCyVlOT5GI319USW6p6EPAL8Ld2idoNLVcfMaV0aiBE3Eyi5LLUNqTbxxbHsSrAklzpecI9UOJItpEfPvDBPcPgEzWkzr4rP8kphqUqw1PAhkvQG22RmRFHEWlM5S6RO2RVLjxVQnSFFU3on1eFodzTcH41IIBFL/NcmS/fAZRlPJpPzoBVAZJixOcvodtRLlvYCrowOeUuYy4ogC/ZdY4MWphWQQl1yY1kbfn0O/fvm9cjSqN9HNThfSHoV49xPznJARBBhOVnBlRlNwGV35Sq1or1hiBZ4jygdjgRL/FcHtV49dW5kfFBm7KgPyRB9IgwHDp4uBrs4MWGJ9kkjZ2lPgFuaUUEvLJNSc7E8Uy30qGZZCmHJ+Zj2AWW+8KCjzGKMZmUq2roM4Rrdk8HiMa36Ucly3MUqEt7Y1ONxC+WRIxqcJMR1VU5hhYQl3Azo683/P/FyZ+LiaEAt/RymIvvSyiAoa6fhsxbsIxbaAjYqMgjwB0lCAM8MdCkJAwypa8C6RYNreenrfNuxsY63vyJsmnn8VBlr3CPynTnMOWE71PPjFU0OG/sjYMwuMd3BYzmYbgdhbSgPrGMlbZZTgRdvUiwqxXGoBgZ9J8E1wgZE7A8O/GtWoy0+RZtkcFteW9OMgvW0njGnr1j7cYulsIJbd8dPL2PslzjBYpHOotFSeKwyIVPvYtvIChAlrKzJFFUjl3yTKcsyqU4oMdHjlm1jsLRuQDLxLgduQnyHhNsTz5vhAQ8f8+daWBNpXOZUslmOmuepruLirTNYcqxaloaXQewFy9r6ZCGEadm8sxSlJa7J3509YemlB9bjk4ahhl650C43D7zB5LDsaLoPorli32rvecbfLrEoS/F36kRCDNm5ms0KUWd+qwpJl97yrhxgs8Sy2LyKcs/CxinCmcBxOqzX9Bw8uMBLqJcGyy13zVnVLjnOsnVvQl7cZCXldS7OOZsCM2J2bMp4NPg7zomvBsvOBInnhCbclKBWYtTaciUs0tSWAca2uA6B
6DyD5vtjsfRgxW75/ul1u+AqEs4Zu09rmPuDO0iZANo42auyBDV5iKf8Rncv+mC5X+8G5LXpdmP/Y7MUCu8O0RzbKwtXf2QV/zMedJ/Ypx84dRb3ZwKuvRbBWGzPsVnKnA7yKLVzsaZ3YZ2u7zWARW4G45gyx7XIL5/iq/fo6/oDkTbCYqn5Mi5oj1ZFAOurRymXrJl5HX2DBPM3agaqZ7SxZwKaLJZ2JOO+iGm7tYZLc06lbvIw0Tjpwz1cv+OO74YgKvHPYhmXh+C58BhD5tWCCmLoV3wS9hvjaSzgOOUNC4tlFKRf/1a0SWwzubMlBN1kfVp9Zzjt6/SUrdg0WSzfC/9Yvg/ejqVpS1Yi/2LzvyHfWQ2pl2s+jWV8mzIqQ/wkLnSP2bNY4toJrAIYP4iPl7BMCnshQUp1dr+F4n/DC1gqxP9JTgAO4YZsy0+wNMOQq+pqTkND7LqpVMYh0qI+aD39BktkSx6z3MRNIY0pp66ZMLH3m6fp4kssKv1R2R1Flw4U6KafvctUiAF6cR95Rkj6aA+rTI+8Hsc7yDK4cXTAEb4MMncp2ckUrQdPLvjpU6On32RZum7w3G2Wk5xM5ywSxFEzNj9gyRNXNr4HBFvYyOQ4XmRrYD6fpRmsb+XBXAvq3kTdjfZooKlmhubDcb1eZx1rLrjps9ODhas5N+g50oU+n88STcpL5ZNp3KuCIxD7Pt31NgOIu6RYijMXlCSL0HLcbvKqEpLYfXFdjgYJO8jjV3cBNxxtOABxDYHqLO7db4aLBD5tJSKJSlIJURbJ81GDL42lkns7Y5du5OLbvHKafuV9jqIY3yWysNY0lkotHcgReu4V7yIXv1byy46QaF5Respnia8ygmlMPJ0hdW0YWeHG2+UsnWFxHxRcvAN1LPNYSW819PC4Ck1tXKhAYsbmLCMSX3zcoXgsVXzJNHHJRD1EdNx2KapdhaT8XM2SpUuy44PQz4n2aU6RfCzRg3O9Q71Irdk/TlZDi2Xk4jE5TOH2MMU5LLtomA7VF0xu7eocXZIAgycJDiIgliYuOtTboe9QB/gkfkrb9pWMrEukQyyjzlY8lgEsJe5kJFoPxtXU10h0t8aLw3JXz7lA47luuWjNPQSE5glWBnoab+NbseduhN6ETTqVOhJlYQMYiK/Fsg2XXHqZ4RdoXkmxn/Pu7vzmsBy2XMqzJssursxElEsb7Y0uVhHmoYyNiQWwMT0m0PduIumDWboXUfiyWCDVLXVIITxi6bwtBRGHZdZaD6Nl451auKWcN76XegxzQ9KOnmABdTIOELAL3FUIYOQL7qXNKxPA8b4cjXxnqHzfbZbaqBxn+9gqnqBOWyynIVxHI2+YJr1K1ZMC9r4fMNW++JxIAOdMwNLIxXpDkJG6ZAcWy1FVGMKDTXPn6bZZtvBYY4gFtYr1iMt6B8C94qxw5aNtsrmrEtnMHEuj+GacxNzSIlpjie+TxwVJTjhis50hO3oRnrG0MvN0Kw7QGvaiKHLnEiGnACbMasc5y8jFsej2apkdaleot1nqRSIdjgGcmoMTHZsxHn02iG3LKsTwDshZhKiDi8HzJ28Eq1yceELdh6zLnVQstQ279rIfpM8Z9qO8A61yVuhXvJJjOxjs8PaQl2LhbhAhHPI3L9bwyLq0JB7j44Gl1icjYWznISGJBswwuho5BsnZwrtE3M8ueVX/BS/ooVxLp4I+FSd/hHAcPp8l2nwu9Qgf6fZpne4SrnEQF381SZITtwS0fVqWysZQLJi7uJb35w8wNPHIMoZVjU0PTh8f+XWFzLwhNewNr6WpSBwItJs2K6T3xJYIlkL7GNmqq+8eqnpkiabR/fZdknmj6ZExGA6HDM16lF+weMzvpCU1h3kT1jnetQ0lEx8GnnzGa8YARfdx+rWvFt9AF23gtCPztFi2bLcWaX1ftxdjOIo02ckTbligaXi72tsTssTVgC00/rcEFhJGPc7qXtcSKOhe4jrUPvMKBGlQ9dhJdeWFmOUFVhHRJNeC22hzxG2NCfaCwUSfSKWdWwxQWXZPVchDyNLEt7POcqNZxv1tuC2IBgovhu+edaTbJBNYFD9v5W6QIVW8X3KjUnSEjJzM/05Ajs7SPpTn+4T5sS+GtYBtdEOLN6b7neRAZ6lsYUxo7r6tdnwTPtYWkaV2O33nWiI6y441hfGInBuVtxpfAWdDlGd8DFz2wA0NDJYdO4D1DCnovDzpl0M/HwuVWEth843rRlkssdzZG4E4YfyFQNIBqlzDb+U8sFl2SFHo//O+VnP1bTW9AIclTgHd/K8xdM4F4Z9siM0Sa5Ay/t4/ADyWnUnyG3J4ngAuy7fB38jy7fF3sPwPLpOGr2y6vvUAAAAASUVORK5CYII=)For validating and testing, twice for every alphabet (10 for each validation and test),* Randomly select 20 characters* Randomly select 2 writers* For each character chosen: * X1 -> 20 copies of the character written by the first writer * X2 -> all 20 characters by the second writer * check if the prediction for [X1, X2] as input activates the correct character * If correct count it* Finally accuracy -> count/2\*20\*no_of_alphabets -> count/400 ###Code def validate(model, test = False, show = False): N = 20 st_alphabets = sorted(test_alphabets.values()) max_index = len(test_data) if test else st_alphabets[10] st_alphabets = st_alphabets[10:20] if test else st_alphabets[0:10] correct = 0 for i in range(len(st_alphabets)): end_index = max_index if i+1 == len(st_alphabets) else st_alphabets[i+1] c_range = list(range(st_alphabets[i],end_index)) for j in range(2): c_list = rnd.choice(c_range, N) w_list = rnd.choice(range(20), 2) for c_i in range(N): image = test_data[c_list[c_i]][w_list[0]] X1 = np.array([image]*N).reshape((N, image_size, image_size, 1)) X2 = np.array(test_data[c_list][w_list[1]]).reshape((N, image_size, image_size, 1)) if show and c_i == 2 
and i == 3: plt.imshow(image) plt.show() for m in range(N): plt.imshow(test_data[c_list[m]][w_list[1]]) plt.show() targets = np.zeros((N,)) targets[c_i] = 1 predictions = model.predict([X1, X2]) if show and c_i == 2 and i == 3: print(targets) print(predictions) show = False print(np.argmax(predictions)) if(np.argmax(predictions) == np.argmax(targets)): correct += 1 return (100*correct/(N*10*2)) ###Output _____no_output_____ ###Markdown Reduce learning rate by 1% after each epoch ###Code #CallBacks from tensorflow.python.keras.callbacks import LearningRateScheduler, ModelCheckpoint lr_scheduler = LearningRateScheduler(lambda epoch, lr: 0.99*lr) model_saver = ModelCheckpoint(recognition_model_path, monitor='loss') ###Output _____no_output_____ ###Markdown One-Shot Model building, compiling, fitting and validating**Model Architecture**![One Shot Model Architecture](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQThMusu8b2uK8kGwrFsg-cuZXaN8Wc7HkfgyiM-8YAfCfN_2uiJQ)**Model Keras Design*** The model in the image is created by creating a Sequential keras model until the fully connected layer * Each layer has it's own regularization parameter and a batch normalizer* A Lambda layer is introduced to perform a L1 distance calculation between the 4096 embedding outputs of the twin networks* The L1 distance is then given to a sigmoid to classify as same/different (1/0)* Finally this is wrapped up in a functional model that takes 2 input images for the twin networks and outputs the sigmoid value of the L1 distance**Model Compilation:**The above mentioned model is compiled with the modified SGD optimizer, binary_crossentropy loss and accuracy as the metric**Model Fitting:**The model is then trained for the given n_epochs, each epoch training on steps_per_epoch number of batchs with the given starting learning rates for each layer and final momentums for each layer**Model Validation:**The model trained thus is validated using the validate function defined earlier and the negative of validation accuracy is returned for the hyper-parameter optimization procedure to minimize ###Code def One_Shot_Model(params, save=False, verbose=False): print(params) lr_values = dict() lr_values['layer_conv1'] = params['l_c1'] lr_values['layer_conv2'] = params['l_c2'] lr_values['layer_conv3'] = params['l_c3'] lr_values['layer_conv4'] = params['l_c4'] lr_values['layer_dense1'] = params['l_d1'] momentum_values = dict() momentum_values['layer_conv1'] = params['m_c1'] momentum_values['layer_conv2'] = params['m_c2'] momentum_values['layer_conv3'] = params['m_c3'] momentum_values['layer_conv4'] = params['m_c4'] momentum_values['layer_dense1'] = params['m_d1'] reg_values = dict() reg_values['layer_conv1'] = params['r_c1'] reg_values['layer_conv2'] = params['r_c2'] reg_values['layer_conv3'] = params['r_c3'] reg_values['layer_conv4'] = params['r_c4'] reg_values['layer_dense1'] = params['r_d1'] left_input = Input(input_shape) right_input = Input(input_shape) # Start construction of the Keras Sequential model. convnet = Sequential() # First convolutional layer with activation, batchnorm and max-pooling. convnet.add(Conv2D(kernel_size=10, strides=1, filters=64, padding='valid', input_shape=input_shape, bias_initializer=b_init, activation='relu', name='layer_conv1', kernel_regularizer=l2(reg_values['layer_conv1']))) convnet.add(BatchNormalization(axis = 3, name = 'bn1')) convnet.add(MaxPooling2D(pool_size=2, strides=2, name="max_pooling1")) # Second convolutional layer with activation, batchnorm and max-pooling. 
convnet.add(Conv2D(kernel_size=7, strides=1, filters=128, padding='valid', kernel_initializer=w_init, bias_initializer=b_init, activation='relu', name='layer_conv2', kernel_regularizer=l2(reg_values['layer_conv2']))) convnet.add(BatchNormalization(axis = 3, name = 'bn2')) convnet.add(MaxPooling2D(pool_size=2, strides=2, name="max_pooling2")) # Third convolutional layer with activation, batchnorm and max-pooling. convnet.add(Conv2D(kernel_size=4, strides=1, filters=128, padding='valid', kernel_initializer=w_init, bias_initializer=b_init, activation='relu', name='layer_conv3', kernel_regularizer=l2(reg_values['layer_conv3']))) convnet.add(BatchNormalization(axis = 3, name = 'bn3')) convnet.add(MaxPooling2D(pool_size=2, strides=2, name="max_pooling3")) # Fourth convolutional layer with activation, batchnorm and max-pooling. convnet.add(Conv2D(kernel_size=4, strides=1, filters=256, padding='valid', kernel_initializer=w_init, bias_initializer=b_init, activation='relu', name='layer_conv4', kernel_regularizer=l2(reg_values['layer_conv4']))) convnet.add(BatchNormalization(axis = 3, name = 'bn4')) convnet.add(MaxPooling2D(pool_size=2, strides=2, name="max_pooling4")) # Flatten the 4-rank output of the convolutional layers # to 2-rank that can be input to a fully-connected / dense layer. convnet.add(Flatten()) # First fully-connected / dense layer with activation. convnet.add(Dense(4096, activation='sigmoid', kernel_initializer=w_init, bias_initializer=b_init, name = "layer_dense1", kernel_regularizer=l2(reg_values['layer_dense1']))) convnet.add(BatchNormalization(axis = 1, name = 'bn5')) #call the convnet Sequential model on each of the input tensors so params will be shared encoded_l = convnet(left_input) encoded_r = convnet(right_input) #layer to merge two encoded inputs with the l1 distance between them L1_layer = Lambda(lambda tensors:K.abs(tensors[0] - tensors[1])) #call this layer on list of two input tensors. L1_distance = L1_layer([encoded_l, encoded_r]) prediction = Dense(1,activation='sigmoid',bias_initializer=b_init)(L1_distance) model = Model(inputs=[left_input,right_input],outputs=prediction) optimizer = Modified_SGD(lr=1, lr_values=lr_values, momentum_values=momentum_values, momentum=0.5, n_epochs=params['epochs']) model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy']) callbacks = [lr_scheduler, model_saver] if save else [lr_scheduler] history = model.fit_generator( train_generator, steps_per_epoch=params['steps_per_epoch'], epochs=params['epochs'], callbacks=callbacks, verbose=True#verbose ) val_acc = validate(model) loss_history = history.history['loss'] print('Validation accuracy:', val_acc) return {'loss': sum(loss_history)-val_acc, 'status': STATUS_OK, 'model': model if save else None} trials_path = os.path.join(one_shot_path, 'trials.hyperopt') trials=Trials() with open(trials_path, 'wb') as f: pickle.dump(trials, f, -1) ###Output _____no_output_____ ###Markdown Hyper-Parameter Optimization Space* Learning rates for each layer - [0.0001, 0.1]* Final momentum for each layer - [0.5, 1]* Regularization Parameter - [0, 0.1]In the original paper, each epoch trained a maximum of 150,000 pairs of images, equvalent to around 1200 batches where batch size is 128. But due to the lack of computational power and the fact that a single 200 epoch training with 100 batches each takes about 1-2 hours in a GPU, the number of batches per epoch in this implementation has been reduced. 
###Code space = { 'l_c1' : 10 ** hp.uniform('l_c1', -4, -1), 'l_c2' : 10 ** hp.uniform('l_c2', -4, -1), 'l_c3' : 10 ** hp.uniform('l_c3', -4, -1), 'l_c4' : 10 ** hp.uniform('l_c4', -4, -1), 'l_d1' : 10 ** hp.uniform('l_d1', -4, -1), 'm_c1' : hp.uniform('m_c1', 0.5, 1), 'm_c2' : hp.uniform('m_c2', 0.5, 1), 'm_c3' : hp.uniform('m_c3', 0.5, 1), 'm_c4' : hp.uniform('m_c4', 0.5, 1), 'm_d1' : hp.uniform('m_d1', 0.5, 1), 'r_c1' : hp.uniform('r_c1', 0, 0.1), 'r_c2' : hp.uniform('r_c2', 0, 0.1), 'r_c3' : hp.uniform('r_c3', 0, 0.1), 'r_c4' : hp.uniform('r_c4', 0, 0.1), 'r_d1' : hp.uniform('r_d1', 0, 0.1), 'epochs' : 25, 'steps_per_epoch' : 25 } ###Output _____no_output_____ ###Markdown **Save Optimization state by pickling Trials object after each trial/run.****In this implementation, 1000 trials are run to find the optimal hyperparameters.****SInce Keras models are not serializable, the models themselves are not saved. After finding the best hyperparameters, a final model is trained using them and stored as the best model.** ###Code def remove_models(trials): for trial in trials.trials: if 'result' in trial.keys() and 'model' in trial['result'].keys(): trial['result'].pop('model', None) with open(trials_path, 'rb') as f: trials = pickle.load(f, encoding='latin1') print(len(trials)) while(len(trials) < 1000): best = fmin(One_Shot_Model, space, algo=tpe.suggest, trials=trials, max_evals=len(trials) + 1) remove_models(trials) gc.collect() with open(trials_path, 'wb') as f: pickle.dump(trials, f, -1) print("Best performing model chosen hyper-parameters:") hypms = trials.best_trial['misc']['vals'] space = { 'l_c1' : 10 ** hypms['l_c1'][0], 'l_c2' : 10 ** hypms['l_c2'][0], 'l_c3' : 10 ** hypms['l_c3'][0], 'l_c4' : 10 ** hypms['l_c4'][0], 'l_d1' : 10 ** hypms['l_d1'][0], 'm_c1' : hypms['m_c1'][0], 'm_c2' : hypms['m_c2'][0], 'm_c3' : hypms['m_c3'][0], 'm_c4' : hypms['m_c4'][0], 'm_d1' : hypms['m_d1'][0], 'r_c1' : hypms['r_c1'][0], 'r_c2' : hypms['r_c2'][0], 'r_c3' : hypms['r_c3'][0], 'r_c4' : hypms['r_c4'][0], 'r_d1' : hypms['r_d1'][0], 'epochs' : 50, 'steps_per_epoch' : 400 } logging.set_verbosity(tf.logging.INFO) result = One_Shot_Model(space, save=True, verbose=True) print(result) logging.set_verbosity(tf.logging.ERROR) model = load_model(recognition_model_path, custom_objects = {"Modified_SGD": Modified_SGD}) print(validate(model, test=True, show=True)) ###Output _____no_output_____
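###Markdown
The checkpoint-and-resume pattern above (pickle the `Trials` object after every evaluation, then call `fmin` again with `max_evals=len(trials) + 1`) is independent of the Siamese network itself. Below is a minimal, self-contained sketch of that pattern with a toy objective; the objective function and the file name `toy_trials.hyperopt` are placeholders for illustration only.
###Code
import os
import pickle

from hyperopt import STATUS_OK, Trials, fmin, hp, tpe

TRIALS_FILE = 'toy_trials.hyperopt'  # illustrative checkpoint path

def toy_objective(params):
    # stands in for One_Shot_Model: return a loss for hyperopt to minimise
    loss = (params['x'] - 0.3) ** 2 + (params['y'] + 0.1) ** 2
    return {'loss': loss, 'status': STATUS_OK}

space = {'x': hp.uniform('x', -1, 1),
         'y': hp.uniform('y', -1, 1)}

# resume from a previous search if a checkpoint exists, otherwise start fresh
if os.path.exists(TRIALS_FILE):
    with open(TRIALS_FILE, 'rb') as f:
        trials = pickle.load(f)
else:
    trials = Trials()

while len(trials) < 50:  # target number of evaluations
    best = fmin(toy_objective, space, algo=tpe.suggest,
                trials=trials, max_evals=len(trials) + 1)
    with open(TRIALS_FILE, 'wb') as f:  # checkpoint after every single trial
        pickle.dump(trials, f, -1)

print('best parameters found:', best)
###Output
_____no_output_____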
IT/Cicli.ipynb
###Markdown
Cicli (Loops)

In the chapter on [flow control](./ControlloDiFlusso.ipynb) we saw that the normal flow of code, which executes instructions one after another, can be diverted using the `if` statement and its variants. In this chapter we will look at the statements that let us repeat blocks of code in a loop.

Let's start with a question: if you wanted to print `"ciao"` 5 times, would the simplest way of all be this?
###Code
print("ciao")
print("ciao")
print("ciao")
print("ciao")
print("ciao")
###Output
_____no_output_____
###Markdown
If you tried running it, you saw that it works. But what if we had to print it 10, 100 or 1000 times? We could do the same thing with 100 or 1000 `print` statements... but that is clearly impractical! This is where loops come to the rescue. The same program as before (here with the loop counter appended, so you can watch it change) can be written like this:
###Code
i = 0
while i < 5:
    print("ciao" + str(i))
    i = i + 1
###Output
_____no_output_____
###Markdown
The one above is a loop you can write in almost every programming language. The `while` statement repeats the instructions listed under it as long as the condition it checks is true. In our example, then, the instructions are repeated as long as `i` is less than 5.

To write a loop in a much more "Python style" way we use the `for` statement together with the `range` function. Let's start with the latter. Do you know what the English word *range* means? It means an *interval*. Knowing that, you can probably guess what the command below does.
###Code
print(list(range(0,5)))
###Output
_____no_output_____
###Markdown
What we did above is print a list of the numbers in the interval from 0 up to (but not including) 5. In short, `range` generates for you the sequence of numbers that the program above stepped through by hand: the integers from 0 to 4. And since the `for..in` statement lets us loop over every element of such an interval, we can rewrite the `while` loop above in a more compact and more "Python style" way!
###Code
for i in range(0,5):
    print("ciao" + str(i))
###Output
_____no_output_____
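###Markdown
Two small extra examples, added here as a sketch (they are not part of the original lesson): `range` can also take a start, a stop and a step, and a `for` loop can accumulate a result across iterations.
###Code
# count from 0 up to (but not including) 10, two at a time
for i in range(0, 10, 2):
    print(i)

# add up the numbers from 1 to 5
total = 0
for n in range(1, 6):
    total = total + n
print("sum of 1..5 =", total)
###Output
_____no_output_____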
docs/dev/notebooks/tooltip_with_crosshair.ipynb
###Markdown Crosshair examples ###Code # geom_smooth (ggplot(mpg_df, aes(x='displ', y='hwy')) + ggtitle('geom_smooth') + geom_point() + geom_smooth(method='loess', size=1, tooltips=layer_tooltips() .line('min|^ymin') .line('|^y') .line('max|^ymax')) ) # geom_smooth with acnhor (ggplot(mpg_df, aes(x='displ', y='hwy')) + ggtitle('geom_smooth') + geom_point() + geom_smooth(method='loess', size=1, tooltips=layer_tooltips() .line('min|^ymin') .line('|^y') .line('max|^ymax') .anchor('top_right')) ) # geom_ribbon id = ["A", "A", "A", "B", "B", "B"] x = [1, 2, 4, 1, 3, 4] ymin = [-1, 0, 0, 3, 3, 4] ymax = [0, 1, 1, 4, 5, 5] r_dat = {} r_dat = dict(id=id, x=x, ymin=ymin, ymax=ymax) # geom_ribbon ggplot(r_dat) + ggtitle('geom_ribbon') \ + geom_ribbon(aes(x='x', ymin='ymin', ymax='ymax', group='id', fill='id'), color='black', alpha=0.5) # geom_ribbon with anchor ggplot(r_dat) + ggtitle('geom_ribbon') \ + geom_ribbon(aes(x='x', ymin='ymin', ymax='ymax', group='id', fill='id'), color='black', alpha=0.5, tooltips=layer_tooltips().line('@|^ymax').line('@|^ymin').anchor('top_right')) # geom_point (ggplot(mpg_df, aes(x='displ', y='cty', fill='drv', size='hwy')) + ggtitle('geom_point') + scale_size(range=[5, 15], breaks=[15, 40]) + ggsize(600, 350) + geom_point(shape=21, color='white', tooltips=layer_tooltips() .anchor('top_right') .min_width(180) .format('cty', '.0f') .format('hwy', '.0f') .format('drv', '{}wd') .line('@manufacturer @model') .line('cty/hwy [mpg]|@cty/@hwy') .line('@|@class') .line('drive train|@drv') .line('@|@year')) ) # geom_area (ggplot(iris_df) + ggtitle('geom_area') + geom_area(aes(x='sepal_length', fill='species'), stat='density', color='white', tooltips=layer_tooltips() .anchor('top_right') .line('^fill') .line('length|^x') .line('density|^y')) + ggsize(650, 300) ) # geom_line T = 1 N = 1000 t = np.linspace(0, T, N) dt = T / N # brownian motions W1 = np.random.standard_normal(size=N) Wt1 = np.cumsum(W1) * np.sqrt(dt) W2 = np.random.standard_normal(size=N) Wt2 = np.cumsum(W2) * np.sqrt(dt) l_dat = {} l_dat['W1'] = Wt1 l_dat['W2'] = Wt2 l_dat['t'] = t # transform data via melt function # to produce two trajectories l_dat = pd.DataFrame(l_dat) l_dat = pd.melt(l_dat, id_vars=['t'], value_vars=['W1', 'W2']) ggplot(l_dat, aes(x='t', y='value', group='variable')) + ggtitle('geom_line')\ + geom_line(aes(color='variable'), size=1, alpha=0.7, tooltips=layer_tooltips().anchor('top_left')) # geom_freqpoly ggplot(l_dat, aes(x='value')) + ggtitle('geom_freqpoly') \ + geom_freqpoly(size=2, tooltips=layer_tooltips().anchor('top_right')) # geom_path path_dat={} path_dat['x']=[1e-3,2e-3,3e-3,4e-3,5e-3,5e-3,4e-3,3e-3,2e-3,1e-3] path_dat['y']=[1e-3,2e-3,3e-3,4e-3,5e-3,1e-3,2e-3,3e-3,4e-3,5e-3] path_dat['g']=[1,1,1,1,1,2,2,2,2,2] ggplot(path_dat, aes(x='x',y='y',group='g')) + ggtitle('geom_path')\ + geom_path(aes(color='g'), stat='density2d',bins=3, tooltips=layer_tooltips().anchor('middle_center'))\ + scale_color_gradient(low='dark_green',high='red') # geom_contour X_max = 50 Y_max = 50 def z_fun(x, y): z = math.sin(x * 3 * math.pi / X_max) z += math.sin(y * 3 * math.pi / Y_max) z += x * 3 / X_max z += y * 5 / Y_max return z x = [] y = [] z = [] for row in range(0, Y_max - 1): for col in range(0, X_max - 1): x.append(col) y.append(row) z.append(z_fun(col, row)) c_dat = dict(x=x, y=y, z=z) (ggplot(c_dat, aes('x', 'y')) + ggtitle('geom_contour') + scale_color_gradient('green', 'red') + geom_contour(aes(z='z', color='..level..'), bins=30, tooltips=layer_tooltips().anchor('top_right')) ) # geom_density 
np.random.seed(43) dat={} dat['x'] = np.append(np.random.normal(0,1,1000), np.random.normal(3,1,500)) dat['y'] = np.append(np.random.normal(0,1,1000), np.random.normal(3,1,500)) ggplot(dat,aes('x')) + ggtitle('geom_density') + geom_density(tooltips=layer_tooltips().anchor('top_right')) # geom_density2d ggplot(dat, aes('x', 'y')) + ggtitle('geom_density2d')\ + geom_density2d(aes(color='..level..'), tooltips=layer_tooltips().anchor('top_right')) # geom_tile d={ 'x': [1,2,3,4,5], 'y': [0,0,0,0,0], 'z': [-1,-0.5,0,0.5,1] } ggplot(d, aes('x', fill='z')) + ggtitle('geom_tile')\ + geom_tile(tooltips=layer_tooltips().anchor('top_center')) # geom_bin2d ggplot(dat, aes('x', 'y')) + ggtitle('geom_bin2d')\ + geom_bin2d(tooltips=layer_tooltips().anchor('top_right')) + ggtitle('geom_bin2d') # polygon with points d1 = { 'x': [0.75, 1.75, 0.75, 1.75, 0.75, 1.75], 'y': [2.75, 2.75, 1.75, 1.75, 0.75, 0.35], 'group': [1, 1, 2, 2, 3, 3], } id = ["1.1", "2.1", "1.2", "2.2", "1.3", "2.3"] val = [3, 3.1, 3.1, 3.2, 3.15, 3.5] x = [2, 1, 1.1, 2.2, 1, 0, 0.3, 1.1, 2.2, 1.1, 1.2, 2.5, 1.1, 0.3, 0.5, 1.2, 2.5, 1.2, 1.3, 2.7, 1.2, 0.5, 0.6, 1.3] y = [-0.5, 0, 1, 0.5, 0, 0.5, 1.5, 1, 0.5, 1, 2.1, 1.7, 1, 1.5, 2.2, 2.1, 1.7, 2.1, 3.2, 2.8, 2.1, 2.2, 3.3, 3.2] id4 = [v for v in id for _ in range(4)] val4 = [v for v in val for _ in range(4)] d2 = dict(id=id4, val=val4, x=x, y=y) ggplot(d2, aes(x, y)) + ggtitle('polygon with points') \ + geom_polygon(aes(fill='val', group='id'), tooltips=layer_tooltips().anchor('top_right'))\ + geom_point(data=d1, mapping=aes(x='x', y='y', color='group'), tooltips=layer_tooltips().anchor('top_right')) ###Output _____no_output_____ ###Markdown No crosshair ###Code # geom_contourf (ggplot(c_dat, aes('x', 'y')) + ggtitle('geom_contourf') + scale_fill_gradient('green', 'red') + geom_contourf(aes(z='z', fill='..level..',alpha='..level..'), tooltips=layer_tooltips().anchor('top_right')) ) # geom_density2df ggplot(dat, aes('x', 'y')) + ggtitle('geom_density2df')\ + geom_density2df(aes(fill = '..level..'), tooltips=layer_tooltips().anchor('top_right')) # geom_boxplot (ggplot(mpg_df, aes('class', 'hwy')) + ggtitle('geom_boxplot') + geom_boxplot(tooltips=layer_tooltips() .anchor('top_center') .format('^Y', '.0f') .format('^middle', '.2f') .line('@|^middle') .line('lower/upper|^lower/^upper') .line('min/max|^ymin/^ymax')) ) # geom_histogram + geom_vline np.random.seed(123) data = DataFrame(dict( cond=np.repeat(['A','B'], 200), rating=np.concatenate((np.random.normal(0, 1, 200), np.random.normal(.8, 1, 200))) )) cdat = data.groupby(['cond'], as_index=False).mean() (ggplot(data, aes(x='rating', fill='cond')) + ggtitle('geom_histogram + geom_vline') + geom_histogram(binwidth=.5, alpha=.8, tooltips=layer_tooltips().anchor('top_right')) + geom_vline(data=cdat, mapping=aes(xintercept='rating'), linetype="longdash", size=1, color="red", tooltips=layer_tooltips().anchor('top_left')) ) tdata = dict( supp = ['OJ', 'OJ', 'OJ', 'VC', 'VC', 'VC'], dose = [0.5, 1.0, 2.0, 0.5, 1.0, 2.0], length = [13.23, 22.70, 26.06, 7.98, 16.77, 26.14], len_min = [11.83, 21.2, 24.50, 4.24, 15.26, 23.35], len_max = [15.63, 24.9, 27.11, 10.72, 19.28, 28.93] ) # geom_errorbar (ggplot(tdata, aes(x='dose', color='supp')) + ggtitle('geom_errorbar') + geom_errorbar(aes(ymin='len_min', ymax='len_max'), width=.1, tooltips=layer_tooltips().line('len_min|^ymin').line('len_max|^ymax').anchor('top_left')) + geom_line(aes(y='length')) + geom_point(aes(y='length')) ) # geom_crossbar (ggplot(tdata, aes(x='dose', color='supp')) + 
ggtitle('geom_crossbar') + geom_crossbar(aes(ymin='len_min', ymax='len_max', middle='length', color='supp'), fatten=5, tooltips=layer_tooltips() .line('len_min|^ymin') .line('|^middle') .line('len_max|^ymax') .anchor('middle_right')) ) #geom_bar (ggplot(tdata, aes(x='dose', color='supp')) + ggtitle('geom_bar') + geom_bar(aes(y='length', fill='supp'), stat='identity', position='dodge', color='black', tooltips=layer_tooltips().anchor('top_center')) ) # geom_linerange (ggplot(tdata, aes(x='dose', color='supp')) + ggtitle('geom_linerange') + geom_linerange(aes(ymin='len_min', ymax='len_max', color='supp'), position=position_dodge(0.1), size=3, tooltips=layer_tooltips() .line('len_min|^ymin') .line('len_max|^ymax') .anchor('top_left')) + geom_line(aes(y='length'), position=position_dodge(0.1)) ) # geom_pointrange (ggplot(tdata, aes(x='dose', color='supp')) + ggtitle('geom_pointrange') + geom_pointrange(aes(y='length', ymin='len_min', ymax='len_max', color='supp'), position=position_dodge(0.1), size=3, shape=23, fatten=1, tooltips=layer_tooltips() .line('len_min|^ymin') .line('|^y') .line('len_max|^ymax') .anchor('top_left')) + geom_line(aes(y='length'), position=position_dodge(0.1)) ) ###Output _____no_output_____
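###Markdown
The same tooltip options recur in every layer above, so a stripped-down, standalone example may help. The data below is synthetic, and `LetsPlot.setup_html()` is assumed to be the usual notebook initialisation call (the import cell of this notebook is not shown in this excerpt).
###Code
import numpy as np
from lets_plot import *

LetsPlot.setup_html()

np.random.seed(42)
toy = {'x': np.random.normal(size=200),
       'y': np.random.normal(size=200)}

# a single scatter layer with a fixed-position ("anchored") tooltip,
# custom tooltip lines and numeric formatting
(ggplot(toy, aes('x', 'y'))
 + ggtitle('anchored tooltip example')
 + geom_point(tooltips=layer_tooltips()
              .format('^x', '.2f')
              .format('^y', '.2f')
              .line('x|^x')
              .line('y|^y')
              .anchor('top_right')))
###Output
_____no_output_____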
docs/template_geodetic_MB_calibration.ipynb
###Markdown Template: geodetic "frequentist" mass balance calibration with constant precipitation factor ###Code import numpy as np import pandas as pd import xarray as xr import matplotlib.pyplot as plt import matplotlib import statsmodels as stats import scipy import scipy.stats as stats import os import oggm from oggm import cfg, utils, workflow, tasks, graphics from oggm.core import massbalance, flowline cfg.initialize(logging_level='WORKFLOW') cfg.PATHS['working_dir'] = utils.gettempdir(dirname='OGGM-ref-mb_geodetic', reset=True) base_url = ('https://cluster.klima.uni-bremen.de/~oggm/gdirs/oggm_v1.4/' 'L1-L2_files/elev_bands') ###Output /home/lilianschuster/oggm/oggm/cfg.py:381: FutureWarning: In future versions of OGGM, the logging config WORKFLOW will no longer print ERROR or WARNING messages, but only high level information (i.e. hiding potential errors in your code but also avoiding cluttered log files for runs with many expected errors, e.g. global runs). If you want to obtain a similar logger behavior as before, set `logging_level='WARNING'`, which will print high level info as well as errors and warnings during the run. If you want to use the new behavior and suppress this warning, set `logging_level='WORKFLOW'` and `future=True`. warnings.warn(msg, category=FutureWarning) 2021-06-09 08:54:19: oggm.cfg: Reading default parameters from the OGGM `params.cfg` configuration file. 2021-06-09 08:54:19: oggm.cfg: Multiprocessing switched OFF according to the parameter file. 2021-06-09 08:54:19: oggm.cfg: Multiprocessing: using all available processors (N=8) ###Markdown geodetic data from Hugonnet et al. (2021) ###Code use_per_region_files = False if use_per_region_files: # you can download this yourself under https://www.sedoo.fr/theia-publication-products/?uuid=c428c5b9-df8f-4f86-9b75-e04c778e29b9 hugonnet_path = '/home/lilianschuster/Schreibtisch/PhD/hugonnet_et_al_2021_per_glacier_time_series/dh_{}_rgi60_pergla_rates.csv' rgi = {} pd_geodetic_l = {} for rgi in ['01','02','03','04','05','06','07','08','09','10','11','12', '13', '14', '15', '16', '17', '18', '19']: pd_geodetic_l[rgi] = pd.read_csv(hugonnet_path.format(rgi), encoding='utf-7', index_col='rgiid') # we only want the measurements from 2000 to the end of 2019 pd_geodetic_l_20 = pd_geodetic_l[rgi].loc[pd_geodetic_l[rgi].period == '2000-01-01_2020-01-01'] pd_geodetic_l_10_0 = pd_geodetic_l[rgi].loc[pd_geodetic_l[rgi].period == '2000-01-01_2010-01-01'] pd_geodetic_l_10_1 = pd_geodetic_l[rgi].loc[pd_geodetic_l[rgi].period == '2010-01-01_2020-01-01'] pd_geodetic_l[rgi] = pd.concat([pd_geodetic_l_20, pd_geodetic_l_10_0, pd_geodetic_l_10_1]) pd_geodetic_l[rgi] = pd_geodetic_l[rgi].sort_values('rgiid') # concatenate all geodetic observations together that are reference glaciers with at least 5 msm in time period 2000 to 2019 pd_geodetic = pd_geodetic_l['01']# [pd_geodetic_l['01'].index.isin(pd_wgms.columns.values)] for rgi in ['02','03','04','05','06','07','08','09','10','11','12', '13', '14', '15', '16', '17', '18', '19']: sel = pd_geodetic_l[rgi]#[pd_geodetic_l[rgi].index.isin(pd_wgms.columns.values)] pd_geodetic = pd.concat([pd_geodetic, sel]) pd_geodetic.to_csv('hugonnet_2021_ds_rgi60_pergla_rates_10_20_worldwide.csv') else: # or ask me for the file with only the 20-year periods all aggregated together pd_geodetic = pd.read_csv('hugonnet_2021_ds_rgi60_pergla_rates_10_20_worldwide.csv', index_col='rgiid') pd_geodetic = pd_geodetic.loc[pd_geodetic.period == '2000-01-01_2020-01-01'] len(set(pd_geodetic.dropna().index)) 
pd.read_csv(hugonnet_path.format(rgi), encoding='utf-7', index_col='rgiid') # this is the specific mass balance for every glacier pd_geodetic.dmdtda.dropna() *1000 # this is in kg/m2/yr ###Output _____no_output_____ ###Markdown - for some few glaciers, there are no geodetic measurements! calibration: - have to set the hydro_year to 1 (geodetic data corresponds to MB from Jan 2000 to Jan 2020)!!! ###Code cfg.PARAMS['baseline_climate'] = 'CRU' #'ERA5' cfg.PARAMS['hydro_month_nh'] = 1 cfg.PARAMS['prcp_scaling_factor'] = 2.5 # if ERA5 use maybe 1.6 ?! def minimize_bias_geodetic_mu_star(x, gd_mb=None, mb_geodetic=None, h=None, w=None, pf=2.5, ys=np.arange(2000, 2020, 1), ): """ calibrates the melt factor (melt_f/mu_star) by getting the bias to zero comparing modelled mean specific mass balance between 2000 and 2020 to observed geodetic data Parameters ---------- x : float what is optimised (here the melt_f/mu_star) gd_mb: class instance instantiated class of Pastmassbalane, this is updated by mu_star mb_geodetic: float geodetic mass balance between 2000-2020 of the instantiated glacier h: np.array heights of the instantiated glacier w: np.array widths of the instantiated glacier pf: float precipitation scaling factor default is 2.5 ys: np.array years for which specific mass balance is computed default is 2000--2019 Returns ------- float bias: modeled mass balance mean - reference mean (geodetic) if absolute_bias = True: np.abs(bias) is returned """ gd_mb.mu_star = x gd_mb.prcp_fac = pf mb_specific = gd_mb.get_specific_mb(heights=h, widths=w, year=ys).mean() bias_calib = np.mean(mb_specific - mb_geodetic) return bias_calib df = ['RGI60-11.00897'] #df = ['RGI60-11.01450'] gdirs = workflow.init_glacier_directories(df, from_prepro_level=2, prepro_border=10, prepro_base_url=base_url, prepro_rgi_version='62') gdir = gdirs[0] oggm.core.climate.process_climate_data(gdir) ###Output 2021-05-12 10:08:55: oggm.workflow: init_glacier_directories from prepro level 2 on 1 glaciers. 2021-05-12 10:08:55: oggm.workflow: Execute entity task gdir_from_prepro on 1 glaciers ###Markdown **important**: - we need to set the residual/bias to zero in PastMassBalance ###Code # we use here the default OGGM mass balance model! gd_mb = massbalance.PastMassBalance(gdir, mu_star=200, # just set it to any value, this will be calibrated later bias=0, # set to zero! check_calib_params=False) # set the precipitation factor to a constant value of gd_mb.prcp_fac = cfg.PARAMS['prcp_scaling_factor'] # just to make sure that the right prcp factor is applied # get the geodetic measurements of that glacier mb_geodetic = pd_geodetic.loc[df].dmdtda.values *1000 # calibration time period ys = np.arange(2000, 2020) h, w = gdir.get_inversion_flowline_hw() # find the optimal melt factor that minimises the bias of the geodetic measurements (2000-2019) # allow the melt factor to be between 10 and 1000, melt_f = scipy.optimize.brentq(minimize_bias_geodetic_mu_star, 10, 1000, # allow the melt factor to be between 10 and 1000, xtol = 0.01, args=(gd_mb, mb_geodetic, h, w, cfg.PARAMS['prcp_scaling_factor'], ys), # which years should be used: normally 2000-2019 (but for some climate datasets there is no 2019 available) disp=True) # the mu_star is the melt factor ... 
print(melt_f) # you can save the calibrated melt_f/mu_star for the glacier or you can directly do projections, or whatever with the new calibrated mu_star / melt_f ###Output 184.32415325809973 ###Markdown check and validation**here we only check if the calibration has worked and compare it to the direct glaciological measurements if there are any available for that glacier** ###Code # change in the mass balance model instance the mu_star to the new calibrated mu_star/melt_f gd_mb.mu_star = melt_f # compute the specific mass balance for that glacier using the calibrated melt_f mb_specific = gd_mb.get_specific_mb(heights=h, widths=w, year=ys) np.testing.assert_allclose(mb_geodetic, mb_specific.mean(), rtol=1e-3) SMALL_SIZE = 20 MEDIUM_SIZE = 22 BIGGER_SIZE = 24 plt.rc('font', size=SMALL_SIZE) # controls default text sizes plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title plt.figure(figsize=(24,8)) plt.hlines(mb_geodetic, label = 'geodetic mean from Hugonnet et al. 2021 (Jan 2000-Jan 2020)', xmin=2000, xmax = 2019, ls = '--', color = 'black') # just for visualization: standard deviation from the Hugonnet data mb_geodetic_std = pd_geodetic.loc[df].err_dmdtda.values *1000 plt.fill_between(x=ys, y1=np.repeat(mb_geodetic, len(ys))-mb_geodetic_std, y2=np.repeat(mb_geodetic, len(ys))+mb_geodetic_std, alpha = 0.1, color = 'black', label = 'geodetic mean standard deviation') plt.plot(ys, mb_specific, label='modelled mass balance (2000-2019)', color='orange') plt.hlines(mb_specific.mean(), label = 'modelled mass balance mean (2000-2019)',xmin=2000, xmax = 2019, ls = ':', color = 'orange') try: plt.plot(ys[:-1], gdir.get_ref_mb_data()['ANNUAL_BALANCE'].loc[2000:],label = 'direct glaciological measurements') plt.hlines(gdir.get_ref_mb_data()['ANNUAL_BALANCE'].loc[2000:].values.mean(), xmin=2000, xmax = gdir.get_ref_mb_data()['ANNUAL_BALANCE'].index[-1], label = 'direct glaciological measurements mean (2000-2018)', ls = '--') except: pass plt.xlabel('years') plt.ylabel('specific mass balance\nkg m$^{-2}$ yr$^{-1}$') plt.xticks(ys) plt.legend(); plt.title(gdir.rgi_id); ###Output _____no_output_____
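###Markdown
Stripped of the OGGM machinery, the calibration above is a one-dimensional root find: choose the melt factor so that the modelled 2000-2019 mean specific mass balance matches the geodetic mean. A self-contained toy sketch of that idea follows; the linear "mass balance model" below is invented purely for illustration and is not the OGGM `PastMassBalance` model.
###Code
import scipy.optimize

mb_geodetic_toy = -500.0  # "observed" mean specific mass balance [kg m-2 yr-1]

def toy_specific_mb(melt_f, prcp_fac=2.5):
    # stand-in for gd_mb.get_specific_mb(...): accumulation minus melt
    accumulation = 1000.0 * prcp_fac / 2.5
    melt = 8.0 * melt_f
    return accumulation - melt

def bias(melt_f):
    # same structure as minimize_bias_geodetic_mu_star: model minus observation
    return toy_specific_mb(melt_f) - mb_geodetic_toy

melt_f_opt = scipy.optimize.brentq(bias, 10, 1000, xtol=0.01)
print(melt_f_opt)                   # ~187.5 for this toy model
print(toy_specific_mb(melt_f_opt))  # very close to mb_geodetic_toy
###Output
_____no_output_____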
so-co2-airborne-obs/obs-surface-error.ipynb
###Markdown Analytical uncertainty at CO2 measurement stations ###Code %load_ext autoreload %autoreload 2 from collections import OrderedDict import numpy as np import pandas as pd import xarray as xr xr.set_options(display_style='text') import matplotlib.pyplot as plt import figure_panels import obs_surface import util ###Output _____no_output_____ ###Markdown Load the monthly dataSpecify the records at each station to examine. ###Code stn_records = dict( SPO=[ 'SPO_NOAA_insitu_CO2', 'SPO_NOAA_flask_CO2', 'SPO_SIO_O2_flask_CO2', 'SPO_CSIRO_flask_CO2', 'SPO_SIO_CDK_flask_CO2', ], CGO=[ 'CGO_CSIRO_insitu_CO2', 'CGO_NOAA_flask_CO2', 'CGO_CSIRO_flask_CO2', 'CGO_SIO_O2_flask_CO2', ], ) record_list = [ri for r in stn_records.values() for ri in r] ###Output _____no_output_____ ###Markdown 1. Read txt file containing the station data.1. Make all records into a dataset for later plotting.1. Compute mean across records at each station and generate "anomaly" columns. ###Code # generate column names for the "anomaly" columns (minus station mean) stn_records_a = dict() for stn, records in stn_records.items(): stn_records_a[stn] = [f'{rec}_mmedian' for rec in records] # read monthly data file = obs_surface.data_files('CO2', 'obs') df = obs_surface.read_stndata(file) # get dataset stninfo = obs_surface.get_stn_info('CO2') ds = obs_surface.to_dataset( stninfo, df, 'CO2', plot_coverage=False, dropna=False, unique_stn=False, gap_fill=False).to_dataset() # keep only columns from stations specified above df = df[filter(lambda s: '_CO2' in s and any(s in records for records in stn_records.values()) or '_CO2' not in s, df.columns)] # compute station median and add as new columns for (stn, arecords), records in zip(stn_records_a.items(), stn_records.values()): df[stn] = df[records].median(axis=1) df[arecords] = df[records].sub(df[stn], axis=0) df ###Output _____no_output_____ ###Markdown Compute the long-term standard deviation of monthly records ###Code print('Long-term std dev of monthly records:') for stn, records in stn_records_a.items(): print(f'{stn}: {np.nanstd(df[records].values):0.4f}') ###Output Long-term std dev of monthly records: SPO: 0.1269 CGO: 0.1077 ###Markdown Compute seasonal meansLoop over the seasons and generate seasonal averages. Require at least 2 months to define a season. 
###Code seasons = OrderedDict([ ('djf', [12, 1, 2]), ('mam', [3, 4, 5]), ('jja', [6, 7, 8]), ('son', [9, 10, 11]), ]) dfs_seasons = {} for season, months in seasons.items(): if season == 'djf': groupby_col = 'polar_year' drop_cols = ['month', 'day', 'year'] else: groupby_col = 'year' drop_cols = ['month', 'day', 'polar_year'] grouped = df.loc[df.month.isin(months)].groupby(groupby_col) dfs_seasons[season] = grouped.mean().where(grouped.count()>=2) dfs_seasons[season] = dfs_seasons[season].set_index('year_frac') dfs_seasons[season] = dfs_seasons[season].drop(drop_cols, axis=1) dfs_seasons['son'] ###Output _____no_output_____ ###Markdown Long-term seasonal means ###Code # make list of *all* records records = [record for stn, records in stn_records_a.items() for record in records] # dimension dictionary with lists df_mean = {'season': list(seasons.keys())} df_mean.update({r: [] for r in records}) # loop over seasons, compute long-term mean for season in seasons: for r in records: df_mean[r].append(dfs_seasons[season][r].mean(axis=0)) df_mean = pd.DataFrame(df_mean).set_index('season') df_mean ###Output _____no_output_____ ###Markdown SD of the long-term mean at each station ###Code error = {'season': list(seasons.keys())} error.update({stn: [] for stn in stn_records.keys()}) for stn, arecords in stn_records_a.items(): for season in seasons: error[stn].append(df_mean.loc[season][arecords].std(ddof=1)) df_error = pd.DataFrame(error).set_index('season') df_error ###Output _____no_output_____ ###Markdown Median across records at SPO and CGO ###Code ds_mSPO_med = (ds.sel(record=stn_records['SPO']) - ds.sel(record=stn_records['SPO']).median('record', skipna=True)) ds_mCGO_med = (ds.sel(record=stn_records['CGO']) - ds.sel(record=stn_records['CGO']).median('record', skipna=True)) ds_m_med = xr.concat((ds_mSPO_med, ds_mCGO_med), 'record') ds_m_med.record ###Output _____no_output_____ ###Markdown Seasonal means of the medians ###Code ds_djf = util.ann_mean(ds_m_med, season='DJF', time_bnds_varname=None, n_req=2,) ds_jja = util.ann_mean(ds_m_med, season='JJA', time_bnds_varname=None, n_req=2,) ds_djf.time ###Output _____no_output_____ ###Markdown Time series of SPO and CGO records ###Code fig = plt.figure(figsize=(6, 8)) ncol = 2 nrow = 2 fig, axs = plt.subplots(nrow, ncol, figsize=(6.5*ncol, 4*nrow)) marker_spec = figure_panels.marker_spec_co2_inst() labels = dict( SPO_NOAA_insitu_CO2='NOAA in situ', SPO_NOAA_flask_CO2='NOAA flasks', SPO_SIO_O2_flask_CO2='SIO O$_2$ Program flasks', SPO_SIO_CDK_flask_CO2='SIO CO$_2$ Program flasks', SPO_CSIRO_flask_CO2='CSIRO flasks', CGO_NOAA_flask_CO2='NOAA flasks', CGO_SIO_O2_flask_CO2='SIO O$_2$ Program flasks', CGO_CSIRO_flask_CO2='CSIRO flasks', CGO_CSIRO_insitu_CO2='CSIRO in situ', ) def ammendments(ax): ax.axhline(0, color='k', lw=1); ax.set_ylabel('$\Delta$CO$_2$ [ppm]') ax.set_xticks(np.arange(1998, 2022, 2)); ax.set_xlim([1998, 2022]) ax.set_ylim([-0.73, 0.63]); plotted_elements = [] legend_elements = [] dset = ds_djf.CO2.sel(record=record_list).copy() # for stn in ['SPO', 'CGO']: # idx = np.where(dset.stncode == stn)[0] # dset[:, idx] = dset[:, idx] x = dset.time + util.season_yearfrac['DJF'] for i, record in enumerate(dset.record.values): ax = axs[0, 0] if 'SPO' in record else axs[1, 0] y = dset.sel(record=record) ls = '--' if 'insitu' in record else '-' inst = str(dset.sel(record=record).institution.values) p = ax.plot(x, y, linestyle=ls, label=labels[record], **marker_spec[inst]) if labels[record] not in plotted_elements: legend_elements.append(p[0]) 
plotted_elements.append(labels[record]) dset = ds_jja.CO2.sel(record=record_list).copy() # for stn in ['SPO', 'CGO']: # idx = np.where(dset.stncode == stn)[0] # dset[:, idx] = dset[:, idx] x = dset.time + util.season_yearfrac['JJA'] for i, record in enumerate(dset.record.values): ax = axs[0, 1] if 'SPO' in record else axs[1, 1] y = dset.sel(record=record) ls = '--' if 'insitu' in record else '-' inst = str(dset.sel(record=record).institution.values) p = ax.plot(x, y, linestyle=ls, label=labels[record], **marker_spec[inst]) if labels[record] not in plotted_elements: legend_elements.append(p[0]) plotted_elements.append(labels[record]) for ax in axs.ravel(): ammendments(ax) xoff = 1 yoff = 0.05 str_text = f'$\sigma$ = {df_error.loc["djf"].SPO:0.2f} ppm' axs[0, 0].text(ax.get_xlim()[0]+xoff, ax.get_ylim()[0]+yoff, str_text, fontsize=12, fontweight='bold', ) str_text = f'$\sigma$ = {df_error.loc["djf"].CGO:0.2f} ppm' axs[1, 0].text(ax.get_xlim()[0]+xoff, ax.get_ylim()[0]+yoff, str_text, fontsize=12, fontweight='bold', ) str_text = f'$\sigma$ = {df_error.loc["jja"].SPO:0.2f} ppm' axs[0, 1].text(ax.get_xlim()[0]+xoff, ax.get_ylim()[0]+yoff, str_text, fontsize=12, fontweight='bold', ) str_text = f'$\sigma$ = {df_error.loc["jja"].CGO:0.2f} ppm' axs[1, 1].text(ax.get_xlim()[0]+xoff, ax.get_ylim()[0]+yoff, str_text, fontsize=12, fontweight='bold', ) axs[0, 0].set_title('DJF SPO records, SPO median subtracted') axs[1, 0].set_title('DJF CGO records, CGO median subtracted') axs[0, 1].set_title('JJA SPO records, SPO median subtracted') axs[1, 1].set_title('JJA CGO records, CGO median subtracted') util.label_plots(fig, [ax for ax in axs.ravel()]) axs[0, 0].legend(handles=legend_elements, ncol=2, loc=(0.02, 0.78), frameon=False, labelspacing=0.1) util.savefig('SPO-CGO-record-discrepancies') ###Output _____no_output_____
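###Markdown
The error estimate above reduces to three steps: subtract the station median from each record, average each record over a season and over years, and take the standard deviation of those long-term means across records. A compact sketch of the same steps on synthetic monthly data is shown below (it ignores the December/January split across calendar years that the notebook handles with `polar_year`).
###Code
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
time = pd.date_range('2000-01-01', '2019-12-01', freq='MS')
records = ['rec_a', 'rec_b', 'rec_c']

# three synthetic monthly CO2-like records with independent noise
toy = pd.DataFrame(
    {r: 370 + 0.2 * np.arange(len(time)) + rng.normal(0, 0.1, len(time))
     for r in records},
    index=time)

anom = toy.sub(toy.median(axis=1), axis=0)                 # minus station median
djf = anom[anom.index.month.isin([12, 1, 2])]              # pick one season
longterm_mean = djf.groupby(djf.index.year).mean().mean()  # per-record DJF mean
sigma = longterm_mean.std(ddof=1)                          # spread across records
print(sigma)
###Output
_____no_output_____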
04_Autoencoder/04_AE_05_ConvAE_ae-deconv-nopool.ipynb
###Markdown Module version used- torch 1.4- numpy 1.18.1- CPython 3.6.9- IPython 7.10.2- numpy 1.17.4- PIL.Image 6.2.1- pandas 0.25.3 - Runs on CPU or GPU (if available) Convolutional Autoencoder with Deconvolutions (without pooling operations) A convolutional autoencoder using deconvolutional layers that compresses 768-pixel MNIST images down to a 7x7x8 (392 pixel) representation without using pooling operations but increasing the stride in convolutional layers. Imports ###Code import time import numpy as np import torch import torch.nn.functional as F from torch.utils.data import DataLoader from torchvision import datasets from torchvision import transforms if torch.cuda.is_available(): torch.backends.cudnn.deterministic = True ########################## ### SETTINGS ########################## # Device device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") print('Device:', device) # Hyperparameters random_seed = 456 learning_rate = 0.005 num_epochs = 10 batch_size = 128 ########################## ### MNIST DATASET ########################## serverAvailable = "no" if serverAvailable == "yes": datapath = "../database/" else: datapath = '../../../../MEGA/DatabaseLocal/' # Note transforms.ToTensor() scales input images # to 0-1 range train_dataset = datasets.MNIST(root=datapath, train=True, transform=transforms.ToTensor(), download=True) test_dataset = datasets.MNIST(root=datapath, train=False, transform=transforms.ToTensor()) train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True) test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False) # Checking the dataset for images, labels in train_loader: print('Image batch dimensions:', images.shape) print('Image label dimensions:', labels.shape) break ###Output Device: cuda:0 Image batch dimensions: torch.Size([128, 1, 28, 28]) Image label dimensions: torch.Size([128]) ###Markdown Model ###Code ########################## ### MODEL ########################## class ConvolutionalAutoencoder(torch.nn.Module): def __init__(self): super(ConvolutionalAutoencoder, self).__init__() # calculate same padding: # (w - k + 2*p)/s + 1 = o # => p = (s(o-1) - w + k)/2 ### ENCODER # 28x28x1 => 14x14x4 self.conv_1 = torch.nn.Conv2d(in_channels=1, out_channels=4, kernel_size=(3, 3), stride=(2, 2), # floor((2(14-1) - 28 + 3) / 2) = 0 padding=0) # 14x14x4 => 7x7x8 self.conv_2 = torch.nn.Conv2d(in_channels=4, out_channels=8, kernel_size=(3, 3), stride=(2, 2), # ceil((2(7-1) - 14 + 3) / 2) = 1 padding=1) ### DECODER # 7x7x8 => 15x15x4 self.deconv_1 = torch.nn.ConvTranspose2d(in_channels=8, out_channels=4, kernel_size=(3, 3), stride=(2, 2), padding=0) # 15x15x4 => 29x29x1 self.deconv_2 = torch.nn.ConvTranspose2d(in_channels=4, out_channels=1, kernel_size=(3, 3), stride=(2, 2), padding=1) def forward(self, x): ### ENCODER x = self.conv_1(x) x = F.leaky_relu(x) x = self.conv_2(x) x = F.leaky_relu(x) ### DECODER x = self.deconv_1(x) x = F.leaky_relu(x) x = self.deconv_2(x) x = F.leaky_relu(x) x = x[:, :, :-1, :-1] x = torch.sigmoid(x) return x torch.manual_seed(random_seed) model = ConvolutionalAutoencoder() model = model.to(device) optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) ###Output _____no_output_____ ###Markdown Training ###Code start_time = time.time() for epoch in range(num_epochs): for batch_idx, (features, targets) in enumerate(train_loader): # don't need labels, only the images (features) features = features.to(device) ### FORWARD AND BACK PROP decoded = model(features) 
cost = F.binary_cross_entropy(decoded, features) optimizer.zero_grad() cost.backward() ### UPDATE MODEL PARAMETERS optimizer.step() ### LOGGING if not batch_idx % 50: print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f' %(epoch+1, num_epochs, batch_idx, len(train_loader), cost)) print('Time elapsed: %.2f min' % ((time.time() - start_time)/60)) print('Total Training Time: %.2f min' % ((time.time() - start_time)/60)) ###Output Epoch: 001/010 | Batch 000/469 | Cost: 0.7184 Epoch: 001/010 | Batch 050/469 | Cost: 0.6902 Epoch: 001/010 | Batch 100/469 | Cost: 0.6586 Epoch: 001/010 | Batch 150/469 | Cost: 0.6014 Epoch: 001/010 | Batch 200/469 | Cost: 0.2719 Epoch: 001/010 | Batch 250/469 | Cost: 0.1936 Epoch: 001/010 | Batch 300/469 | Cost: 0.1920 Epoch: 001/010 | Batch 350/469 | Cost: 0.1699 Epoch: 001/010 | Batch 400/469 | Cost: 0.1610 Epoch: 001/010 | Batch 450/469 | Cost: 0.1503 Time elapsed: 0.07 min Epoch: 002/010 | Batch 000/469 | Cost: 0.1460 Epoch: 002/010 | Batch 050/469 | Cost: 0.1427 Epoch: 002/010 | Batch 100/469 | Cost: 0.1395 Epoch: 002/010 | Batch 150/469 | Cost: 0.1381 Epoch: 002/010 | Batch 200/469 | Cost: 0.1350 Epoch: 002/010 | Batch 250/469 | Cost: 0.1326 Epoch: 002/010 | Batch 300/469 | Cost: 0.1297 Epoch: 002/010 | Batch 350/469 | Cost: 0.1405 Epoch: 002/010 | Batch 400/469 | Cost: 0.1318 Epoch: 002/010 | Batch 450/469 | Cost: 0.1290 Time elapsed: 0.14 min Epoch: 003/010 | Batch 000/469 | Cost: 0.1306 Epoch: 003/010 | Batch 050/469 | Cost: 0.1274 Epoch: 003/010 | Batch 100/469 | Cost: 0.1261 Epoch: 003/010 | Batch 150/469 | Cost: 0.1218 Epoch: 003/010 | Batch 200/469 | Cost: 0.1186 Epoch: 003/010 | Batch 250/469 | Cost: 0.1205 Epoch: 003/010 | Batch 300/469 | Cost: 0.1184 Epoch: 003/010 | Batch 350/469 | Cost: 0.1247 Epoch: 003/010 | Batch 400/469 | Cost: 0.1208 Epoch: 003/010 | Batch 450/469 | Cost: 0.1156 Time elapsed: 0.20 min Epoch: 004/010 | Batch 000/469 | Cost: 0.1226 Epoch: 004/010 | Batch 050/469 | Cost: 0.1186 Epoch: 004/010 | Batch 100/469 | Cost: 0.1134 Epoch: 004/010 | Batch 150/469 | Cost: 0.1161 Epoch: 004/010 | Batch 200/469 | Cost: 0.1178 Epoch: 004/010 | Batch 250/469 | Cost: 0.1158 Epoch: 004/010 | Batch 300/469 | Cost: 0.1173 Epoch: 004/010 | Batch 350/469 | Cost: 0.1157 Epoch: 004/010 | Batch 400/469 | Cost: 0.1123 Epoch: 004/010 | Batch 450/469 | Cost: 0.1183 Time elapsed: 0.27 min Epoch: 005/010 | Batch 000/469 | Cost: 0.1137 Epoch: 005/010 | Batch 050/469 | Cost: 0.1137 Epoch: 005/010 | Batch 100/469 | Cost: 0.1114 Epoch: 005/010 | Batch 150/469 | Cost: 0.1149 Epoch: 005/010 | Batch 200/469 | Cost: 0.1149 Epoch: 005/010 | Batch 250/469 | Cost: 0.1152 Epoch: 005/010 | Batch 300/469 | Cost: 0.1161 Epoch: 005/010 | Batch 350/469 | Cost: 0.1136 Epoch: 005/010 | Batch 400/469 | Cost: 0.1106 Epoch: 005/010 | Batch 450/469 | Cost: 0.1179 Time elapsed: 0.34 min Epoch: 006/010 | Batch 000/469 | Cost: 0.1138 Epoch: 006/010 | Batch 050/469 | Cost: 0.1048 Epoch: 006/010 | Batch 100/469 | Cost: 0.1143 Epoch: 006/010 | Batch 150/469 | Cost: 0.1131 Epoch: 006/010 | Batch 200/469 | Cost: 0.1119 Epoch: 006/010 | Batch 250/469 | Cost: 0.1117 Epoch: 006/010 | Batch 300/469 | Cost: 0.1089 Epoch: 006/010 | Batch 350/469 | Cost: 0.1113 Epoch: 006/010 | Batch 400/469 | Cost: 0.1076 Epoch: 006/010 | Batch 450/469 | Cost: 0.1141 Time elapsed: 0.40 min Epoch: 007/010 | Batch 000/469 | Cost: 0.1090 Epoch: 007/010 | Batch 050/469 | Cost: 0.1071 Epoch: 007/010 | Batch 100/469 | Cost: 0.1092 Epoch: 007/010 | Batch 150/469 | Cost: 0.1111 Epoch: 007/010 | Batch 200/469 | Cost: 
0.1108 Epoch: 007/010 | Batch 250/469 | Cost: 0.1032 Epoch: 007/010 | Batch 300/469 | Cost: 0.1061 Epoch: 007/010 | Batch 350/469 | Cost: 0.1106 Epoch: 007/010 | Batch 400/469 | Cost: 0.1085 Epoch: 007/010 | Batch 450/469 | Cost: 0.1082 Time elapsed: 0.47 min Epoch: 008/010 | Batch 000/469 | Cost: 0.1134 Epoch: 008/010 | Batch 050/469 | Cost: 0.1106 Epoch: 008/010 | Batch 100/469 | Cost: 0.1115 Epoch: 008/010 | Batch 150/469 | Cost: 0.1058 Epoch: 008/010 | Batch 200/469 | Cost: 0.1076 Epoch: 008/010 | Batch 250/469 | Cost: 0.1062 Epoch: 008/010 | Batch 300/469 | Cost: 0.1043 Epoch: 008/010 | Batch 350/469 | Cost: 0.1031 Epoch: 008/010 | Batch 400/469 | Cost: 0.1071 Epoch: 008/010 | Batch 450/469 | Cost: 0.1061 Time elapsed: 0.54 min Epoch: 009/010 | Batch 000/469 | Cost: 0.1085 Epoch: 009/010 | Batch 050/469 | Cost: 0.1049 Epoch: 009/010 | Batch 100/469 | Cost: 0.1094 Epoch: 009/010 | Batch 150/469 | Cost: 0.1079 Epoch: 009/010 | Batch 200/469 | Cost: 0.1071 Epoch: 009/010 | Batch 250/469 | Cost: 0.1064 Epoch: 009/010 | Batch 300/469 | Cost: 0.1026 Epoch: 009/010 | Batch 350/469 | Cost: 0.1085 Epoch: 009/010 | Batch 400/469 | Cost: 0.1056 Epoch: 009/010 | Batch 450/469 | Cost: 0.1074 Time elapsed: 0.61 min Epoch: 010/010 | Batch 000/469 | Cost: 0.1074 Epoch: 010/010 | Batch 050/469 | Cost: 0.1040 Epoch: 010/010 | Batch 100/469 | Cost: 0.1028 Epoch: 010/010 | Batch 150/469 | Cost: 0.0998 Epoch: 010/010 | Batch 200/469 | Cost: 0.1033 Epoch: 010/010 | Batch 250/469 | Cost: 0.0993 Epoch: 010/010 | Batch 300/469 | Cost: 0.1042 Epoch: 010/010 | Batch 350/469 | Cost: 0.0956 Epoch: 010/010 | Batch 400/469 | Cost: 0.1016 Epoch: 010/010 | Batch 450/469 | Cost: 0.1015 Time elapsed: 0.68 min Total Training Time: 0.68 min ###Markdown Evaluation ###Code %matplotlib inline import matplotlib.pyplot as plt ########################## ### VISUALIZATION ########################## n_images = 15 image_width = 28 fig, axes = plt.subplots(nrows=2, ncols=n_images, sharex=True, sharey=True, figsize=(20, 2.5)) orig_images = features[:n_images] decoded_images = decoded[:n_images] for i in range(n_images): for ax, img in zip(axes, [orig_images, decoded_images]): curr_img = img[i].detach().to(torch.device('cpu')) ax[i].imshow(curr_img.view((image_width, image_width)), cmap='binary') ###Output _____no_output_____
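###Markdown
A quick shape check of the layers described above, run standalone on a dummy batch. Note that with `padding=0` the first convolution actually yields 13x13 feature maps (not the 14x14 stated in the layer comment); the decoder then ends at 29x29, which is why `forward()` crops the output back to 28x28.
###Code
import torch

x = torch.randn(2, 1, 28, 28)  # dummy MNIST-sized batch

conv_1 = torch.nn.Conv2d(1, 4, kernel_size=3, stride=2, padding=0)
conv_2 = torch.nn.Conv2d(4, 8, kernel_size=3, stride=2, padding=1)
deconv_1 = torch.nn.ConvTranspose2d(8, 4, kernel_size=3, stride=2, padding=0)
deconv_2 = torch.nn.ConvTranspose2d(4, 1, kernel_size=3, stride=2, padding=1)

h1 = conv_1(x)
print(h1.shape)           # torch.Size([2, 4, 13, 13])
h2 = conv_2(h1)
print(h2.shape)           # torch.Size([2, 8, 7, 7])  -> the 7x7x8 code
d1 = deconv_1(h2)
print(d1.shape)           # torch.Size([2, 4, 15, 15])
d2 = deconv_2(d1)
print(d2.shape)           # torch.Size([2, 1, 29, 29])
out = d2[:, :, :-1, :-1]  # crop to 28x28, exactly as in forward()
print(out.shape)          # torch.Size([2, 1, 28, 28])
###Output
_____no_output_____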
Time Series Forecasting/Walmart/notebooks/Implementing Prophet.ipynb
###Markdown March 27, 2019.Luis Da Silva.This notebook implements Facebook's package Prophet (https://facebook.github.io/prophet/) to Walmart data. ###Code import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from fbprophet import Prophet def wmae(holiday, y, y_pred): """Computes weighted mean absolute error""" w = holiday*4 + 1 return -1 * (1 / w.sum()) * (w @ abs(y-y_pred)) def read_clean_df(train=True): if train: path = '../data/merged_train_data.csv' else: path = '../data/merged_test_data.csv' df = pd.read_csv(path).iloc[:,1:] df.rename(index=str, columns={'Size (sq ft)':'Size'}, inplace=True) df.drop(['Date-1', 'Date-2', 'Promotion17', 'Promotion114', 'Promotion121', 'Year', 'HighPromoter1', 'LowPromoter1', 'HighPromoter2', 'LowPromoter2', 'HighPromoter3', 'LowPromoter3', 'HighPromoter4', 'LowPromoter4', 'HighPromoter5', 'LowPromoter5',], axis=1, inplace=True) if train: df.drop(['ImportantHoliday'], axis=1, inplace=True) df.loc[df['Weekly_Sales'] < 0, 'Weekly_Sales'] = 0 df['Date'] = pd.to_datetime(df['Date']) df['IsHoliday_weight'] = df['IsHoliday'] df['AllDept'] = df['Dept'] df = pd.get_dummies(df, columns=['Type', 'Dept']) df.sort_values(['Date', 'Store', 'AllDept'], inplace=True) if train: # Very low weekly sales will be replaced by 0s threshold = df.groupby(['Store', 'AllDept'])['Weekly_Sales'].mean()/50 for idx, v in zip(threshold.index, threshold): mask = np.logical_and(df['Store']==idx[0], df['AllDept']==idx[1]) mask = np.logical_and(mask, df['Weekly_Sales']<=v) df.loc[mask, 'Weekly_Sales'] = 0 return df def get_cut_date(dates, n): udates = np.unique(dates) udates.sort() ndates = udates.shape[0] cut_date = udates[-int(ndates/n)] return cut_date df = read_clean_df() df.head() tdf = read_clean_df(False) class Model: ''' Main class to build Prophet model with all the required information. As Prophet is a Time Series framework, and panel data is being handled, one needs to model one department at a time. 
''' def __init__(self, df, store, dept): # Creating masks train_mask = np.logical_and(df['Store']==store, df['AllDept']==dept) test_mask = np.logical_and(tdf['Store']==store, tdf['AllDept']==dept) # Cutdate for validation cut_date = get_cut_date(df[train_mask]['Date'], 5) self.validation_mask = np.logical_and(train_mask, df['Date']>=cut_date) train_mask = np.logical_and(train_mask, df['Date']<cut_date) # Main dataframe self.tsdf = df[train_mask][['Date', 'Weekly_Sales']] self.tsdf.columns = ['ds', 'y'] # Holidays superbowl = pd.DataFrame({ 'holiday': 'superbowl', 'ds': pd.to_datetime(['2010-02-12', '2011-02-11', '2012-02-10', '2013-02-08']), 'lower_window': 0, 'upper_window': 1, }) labor = pd.DataFrame({ 'holiday': 'labor', 'ds': pd.to_datetime(['2010-09-10', '2011-09-09', '2012-09-07', '2013-11-29']), 'lower_window': 0, 'upper_window': 0, }) thanks = pd.DataFrame({ 'holiday': 'thanks', 'ds': pd.to_datetime(['2010-11-26', '2011-11-25', '2012-11-23', '2013-11-29']), 'lower_window': -1, 'upper_window': 0, }) christmas = pd.DataFrame({ 'holiday': 'christmas', 'ds': pd.to_datetime(['2010-12-31', '2011-12-30', '2012-12-28', '2013-11-27']), 'lower_window': -1, 'upper_window': 0, }) self.holidays = pd.concat((superbowl, labor, thanks, christmas)) # Future dates to be predicted self.future_df = tdf[['Date']].drop_duplicates() self.future_df.columns = ['ds'] def fit(self, **kwargs): self.prophet = Prophet(holidays=self.holidays, **kwargs) self.prophet.fit(self.tsdf) self.past_fut = pd.concat((self.tsdf[['ds']],self.future_df)) self.forecast = self.prophet.predict(self.past_fut) def plot(self): self.prophet.plot_components(self.forecast) def validate(self): holi = df[self.validation_mask]['IsHoliday'].reset_index(drop=True) val_dates = df[self.validation_mask][['Date']] val_dates.columns = ['ds'] y = df[self.validation_mask]['Weekly_Sales'].reset_index(drop=True) y_pred = self.prophet.predict(val_dates)['yhat'] self.score = wmae(holi, y, y_pred) print(self.score) # Test a department in a store to see if the class behaves accordingly s1d1 = Model(df, 1, 1) s1d1.fit(weekly_seasonality=True, daily_seasonality=False) s1d1.plot() s1d1.validate() %%time # Fit all the departments and stores scores = [] preds = [] for store in df['Store'].unique(): mask = df['Store']==store for dept in df[df['Store']==store]['AllDept'].unique(): model = Model(df, store, dept) if model.tsdf.shape[0] == 0: continue model.fit() model.validate() scores.append(model.score) preds.append(model.forecast) print('Percentiles: ', {i:np.percentile(scores, i) for i in (5, 10, 25, 50)}) print('Mean: ', np.mean(scores)) print('Number of scores: ', len(scores)) ###Output _____no_output_____
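###Markdown
For reference, a minimal standalone sketch of the per-series pattern wrapped by the `Model` class above: build a holiday table, fit Prophet on weekly data, and score a hold-out period with the weighted MAE (here without the sign flip used in `wmae`). The series below is synthetic, so the numbers are meaningless; only the mechanics are illustrated.
###Code
import numpy as np
import pandas as pd
from fbprophet import Prophet

# synthetic weekly sales series with a yearly cycle
dates = pd.date_range('2010-02-05', periods=143, freq='W-FRI')
rng = np.random.default_rng(0)
y = (20000
     + 3000 * np.sin(2 * np.pi * dates.dayofyear / 365.25)
     + rng.normal(0, 500, len(dates)))
series = pd.DataFrame({'ds': dates, 'y': y})

holidays = pd.DataFrame({
    'holiday': 'thanksgiving',
    'ds': pd.to_datetime(['2010-11-26', '2011-11-25', '2012-11-23']),
    'lower_window': -1,
    'upper_window': 0,
})

train, valid = series.iloc[:-20], series.iloc[-20:]
m = Prophet(holidays=holidays, weekly_seasonality=False)
m.fit(train)
pred = m.predict(valid[['ds']])['yhat'].values

is_holiday = valid['ds'].isin(holidays['ds']).astype(int).values
w = is_holiday * 4 + 1  # holiday weeks get 5x weight
wmae_score = (w * np.abs(valid['y'].values - pred)).sum() / w.sum()
print(wmae_score)
###Output
_____no_output_____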
DiffusionCLIP_demo.ipynb
###Markdown Paper: https://arxiv.org/abs/2110.02711 GitHub: https://github.com/gwang-kim/diffusionclip Environment setup GPU check ###Code !nvidia-smi ###Output _____no_output_____ ###Markdown Get the code from GitHub ###Code %cd /content !git clone https://github.com/gwang-kim/DiffusionCLIP.git ###Output _____no_output_____ ###Markdown Install the libraries ###Code !pip install ftfy regex tqdm !pip install --upgrade gdown !pip install git+https://github.com/openai/CLIP.git ###Output _____no_output_____ ###Markdown Import the libraries ###Code %cd /content/DiffusionCLIP from diffusionclip import DiffusionCLIP from main import dict2namespace import argparse import yaml from PIL import Image import os import warnings warnings.filterwarnings(action='ignore') import torch device = 'cuda' if torch.cuda.is_available() else "cpu" print("using device is", device) # Reload modules automatically %load_ext autoreload %autoreload 2 ###Output _____no_output_____ ###Markdown Download the pretrained models ###Code %cd /content/DiffusionCLIP !mkdir pretrained %cd pretrained # Finetuned human face model download if not os.path.exists('human_pixar_t601.pth'): !gdown 'https://drive.google.com/uc?id=1IoT7kZhtaoKf1uvhYhvyqzyG2MOJsqLe' if not os.path.exists('human_neanderthal_t601.pth'): !gdown 'https://drive.google.com/uc?id=1Uo0VI5kbATrQtckhEBKUPyRFNOcgwwne' if not os.path.exists('human_gogh_t601.pth'): !gdown 'https://drive.google.com/uc?id=1NXOL8oKTGLtpTsU_Vh5h0DmMeH7WG8rQ' if not os.path.exists('human_tanned_t201.pth'): !gdown 'https://drive.google.com/uc?id=1k6aDDOedRxhjFsJIA0dZLi2kKNvFkSYk' if not os.path.exists('human_male_t401.pth'): !gdown 'https://drive.google.com/uc?id=1n1GMVjVGxSwaQuWxoUGQ2pjV8Fhh72eh' if not os.path.exists('human_sketch_t601.pth'): !gdown 'https://drive.google.com/uc?id=1V9HDO8AEQzfWFypng72WQJRZTSQ272gb' if not os.path.exists('human_with_makeup_t301.pth'): !gdown 'https://drive.google.com/uc?id=1OL0mKK48wvaFaWGEs3GHsCwxxg7LexOh' if not os.path.exists('human_without_makeup_t301.pth'): !gdown 'https://drive.google.com/uc?id=157pTJBkXPoziGQdjy3SwdyeSpAjQiGRp' if not os.path.exists('512x512_diffusion.pt'): !wget pretrained/ https://openaipublic.blob.core.windows.net/diffusion/jul-2021/512x512_diffusion.pt if not os.path.exists('imagenet_watercolor_t601.pth'): !gdown 'https://drive.google.com/uc?id=1l1vLwdL-6kC9jKcStASZ0KtX2OrmrSj6' if not os.path.exists('imagenet_pointillism_t601.pth'): !gdown 'https://drive.google.com/uc?id=1Am1Iii7jH986XQUuVaDs4v5s1h_acg0w' if not os.path.exists('imagenet_gogh_t601.pth'): !gdown 'https://drive.google.com/uc?id=1ZPeOvMpFStw8RXJga_0pWLJ7iIWEQIVY' if not os.path.exists('imagenet_cubism_t601.pth'): !gdown 'https://drive.google.com/uc?id=1xEx4_MXvbvtSqLzn6z49RUnPDFoDv9Vm' ###Output _____no_output_____ ###Markdown Download the test images ###Code %cd /content/DiffusionCLIP !mkdir test_imgs %cd test_imgs !wget https://www.pakutaso.com/shared/img/thumb/kys150922346900.jpg !wget https://www.pakutaso.com/shared/img/thumb/kawamurassIMGL3813_TP_V4.jpg # Crop a 512x512 patch from the center of the image def crop_center(pil_img, crop_width, crop_height): img_width, img_height = pil_img.size return pil_img.crop(((img_width - crop_width) // 2, (img_height - crop_height) // 2, (img_width + crop_width) // 2, (img_height + crop_height) // 2)) img = Image.open('kawamurassIMGL3813_TP_V4.jpg') im_crop = crop_center(img, 512, 512) im_crop.save('crop.jpg') ###Output _____no_output_____ ###Markdown Human face manipulation (256x256) ###Code %cd /content/DiffusionCLIP model_dict = { 'Pixar': "pretrained/human_pixar_t601.pth", 'Neanderthal': "pretrained/human_neanderthal_t601.pth", 'Painting by Gogh': 
"pretrained/human_gogh_t601.pth", 'Tanned': "pretrained/human_tanned_t201.pth", 'Female → Male': "pretrained/human_male_t401.pth", 'Sketch': "pretrained/human_sketch_t601.pth", 'With makeup': "pretrained/human_with_makeup_t301.pth", 'Without makeup': "pretrained/human_without_makeup_t301.pth", } ###Output _____no_output_____ ###Markdown Parameter settings ###Code %cd /content/DiffusionCLIP # @markdown Input image path img_path = "test_imgs/kys150922346900.jpg" #@param {type:"string"} # @markdown Crop/align the face region align_face = True #@param {type:"boolean"} # @markdown type edit_type = 'Sketch' #@param ['Pixar', 'Neanderthal','Sketch', 'Painting by Gogh', 'Tanned', 'With makeup', 'Without makeup', 'Female → Male'] degree_of_change = 1 #@param {type:"slider", min:0.0, max:1.0, step:0.01} n_inv_step = 40#@param {type: "integer"} n_test_step = 6 #@param [6] model_path = model_dict[edit_type] t_0 = int(model_path.split('_t')[-1].replace('.pth','')) exp_dir = f"runs/MANI_{img_path.split('/')[-1]}_align{align_face}" os.makedirs(exp_dir, exist_ok=True) args_dic = { 'config': 'celeba.yml', 't_0': t_0, 'n_inv_step': int(n_inv_step), 'n_test_step': int(n_test_step), 'sample_type': 'ddim', 'eta': 0.0, 'bs_test': 1, 'model_path': model_path, 'img_path': img_path, 'deterministic_inv': 1, 'hybrid_noise': 0, 'n_iter': 1, 'align_face': align_face, 'image_folder': exp_dir, 'model_ratio': degree_of_change, 'edit_attr': None, 'src_txts': None, 'trg_txts': None, } args = dict2namespace(args_dic) with open(os.path.join('configs', args.config), 'r') as f: config_dic = yaml.safe_load(f) config = dict2namespace(config_dic) config.device = device # Edit runner = DiffusionCLIP(args, config) runner.edit_one_image() # Result print() n_result = 1 img = Image.open(os.path.join(exp_dir, '0_orig.png')) img = img.resize((int(img.width), int(img.height))) grid = Image.new("RGB", (img.width*(n_result+1), img.height)) grid.paste(img, (0, 0)) for i in range(n_result): img = Image.open(os.path.join(exp_dir, f"3_gen_t{t_0}_it0_ninv{n_inv_step}_ngen{n_test_step}_mrat{degree_of_change}_{model_path.split('/')[-1].replace('.pth','')}.png")) img = img.resize((int(img.width), int(img.height))) grid.paste(img, (int(img.height * (i+1)), 0)) grid ###Output _____no_output_____ ###Markdown ImageNet Style Transfer (512x512) ###Code %cd /content/DiffusionCLIP model_dict = { 'Watercolor art': "pretrained/imagenet_watercolor_t601.pth", 'Pointillism art': "pretrained/imagenet_pointillism_t601.pth", 'Painting by Gogh': "pretrained/imagenet_gogh_t601.pth", 'Cubism art': "pretrained/imagenet_cubism_t601.pth", } ###Output _____no_output_____ ###Markdown Parameter settings ###Code %cd /content/DiffusionCLIP # @markdown Input image path img_path = "test_imgs/crop.jpg" #@param {type:"string"} # @markdown type edit_type = 'Watercolor art' #@param ['Watercolor art', 'Pointillism art','Painting by Gogh', 'Cubism art'] degree_of_change = 1 #@param {type:"slider", min:0.0, max:1.0, step:0.01} n_inv_step = 40#@param {type: "integer"} n_test_step = 6 #@param [6] model_path = model_dict[edit_type] t_0 = int(model_path.split('_t')[-1].replace('.pth','')) exp_dir = f"runs/MANI_{img_path.split('/')[-1]}" os.makedirs(exp_dir, exist_ok=True) args_dic = { 'config': 'imagenet.yml', 't_0': t_0, 'n_inv_step': int(n_inv_step), 'n_test_step': int(n_test_step), 'sample_type': 'ddim', 'eta': 0.0, 'bs_test': 1, 'model_path': model_path, 'img_path': img_path, 'deterministic_inv': 1, 'hybrid_noise': 0, 'n_iter': 1, 'align_face': 0, 'image_folder': exp_dir, 'model_ratio': degree_of_change, 'edit_attr': None, 'src_txts': None, 
'trg_txts': None, } args = dict2namespace(args_dic) with open(os.path.join('configs', args.config), 'r') as f: config_dic = yaml.safe_load(f) config = dict2namespace(config_dic) config.device = device # Edit runner = DiffusionCLIP(args, config) runner.edit_one_image() # Result print() n_result = 1 img = Image.open(os.path.join(exp_dir, '0_orig.png')) img = img.resize((int(img.width), int(img.height))) grid = Image.new("RGB", (img.width*(n_result+1), img.height)) grid.paste(img, (0, 0)) for i in range(n_result): img = Image.open(os.path.join(exp_dir, f"3_gen_t{t_0}_it0_ninv{n_inv_step}_ngen{n_test_step}_mrat{degree_of_change}_{model_path.split('/')[-1].replace('.pth','')}.png")) img = img.resize((int(img.width), int(img.height))) grid.paste(img, (int(img.height * (i+1)), 0)) grid ###Output _____no_output_____
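###Markdown The result-grid code is written out twice above (once for the face model, once for the ImageNet model). A small helper like the one below could remove that duplication; this is only a sketch that reuses the output file-naming pattern from the cells above and is not part of the official repository. ###Code
# Hypothetical helper that rebuilds the side-by-side grid from the files written by runner.edit_one_image().
from PIL import Image
import os

def make_result_grid(exp_dir, t_0, n_inv_step, n_test_step, degree_of_change, model_path, n_result=1):
    # Original image on the left, generated image(s) to its right.
    orig = Image.open(os.path.join(exp_dir, '0_orig.png'))
    grid = Image.new("RGB", (orig.width * (n_result + 1), orig.height))
    grid.paste(orig, (0, 0))
    model_name = model_path.split('/')[-1].replace('.pth', '')
    for i in range(n_result):
        gen = Image.open(os.path.join(
            exp_dir,
            f"3_gen_t{t_0}_it0_ninv{n_inv_step}_ngen{n_test_step}_mrat{degree_of_change}_{model_name}.png"))
        grid.paste(gen, (orig.width * (i + 1), 0))
    return grid

# Usage with the variables already defined above:
# make_result_grid(exp_dir, t_0, n_inv_step, n_test_step, degree_of_change, model_path)
###Output _____no_output_____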
market-basket-analysis/association_rule_mining.ipynb
###Markdown Brahmanand Singh. Association Analysis. Itemsets of size 1 with support greater than 0.5: 4. Itemsets of size 2 with support greater than 0.5: 4. Itemsets of size 3 with support greater than 0.5: 0. ###Code import pandas as pd # data processing #Read the input file df = pd.read_csv("~/input.csv",quoting=True) df.info() df ###Output <class 'pandas.core.frame.DataFrame'> RangeIndex: 11 entries, 0 to 10 Data columns (total 2 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Transaction_ID 11 non-null int64 1 Items 11 non-null object dtypes: int64(1), object(1) memory usage: 304.0+ bytes ###Markdown Cleaning the items by removing the curly braces ({}) ###Code df['Items']=df['Items'].str.replace('{','').str.replace('}','') df ###Output ipykernel_launcher:1: FutureWarning: The default value of regex will change from True to False in a future version. In addition, single character regular expressions will *not* be treated as literal strings when regex=True. ###Markdown Dropping the transaction ID as it is not needed ###Code df.drop(["Transaction_ID"], axis=1, inplace=True) ###Output _____no_output_____ ###Markdown Adding the values to a list and then splitting them so that they can be encoded properly ###Code transactions = [] for i in range(0,len(df)): transactions.append(list(df.iloc[i,].values)) transactions ###Output _____no_output_____ ###Markdown As all the items come back as a single string per transaction, they need to be split into separate elements ###Code final_trans = [sublist[0].split(',') for sublist in transactions] final_trans ###Output _____no_output_____ ###Markdown The values need to be encoded as 1/0 (or True/False) and then converted into a DataFrame; first they are turned into an array ###Code from mlxtend.preprocessing import TransactionEncoder te = TransactionEncoder() te_ary = te.fit(final_trans).transform(final_trans) te_ary.astype("int") #this will print the 1 or 0 # te_ary #this will print in True or False format ###Output _____no_output_____ ###Markdown Displaying the final data: each item is properly encoded as True/False ###Code final_df=pd.DataFrame(te_ary, columns=te.columns_) final_df final_df=pd.DataFrame(te_ary.astype("int"), columns=te.columns_) final_df ###Output _____no_output_____ ###Markdown As the data in the DataFrame is in binary format, we can sum it, count the number of rows and then compute the percentages. This helps answer questions like which item is ordered most/least. ###Code item_sum = final_df.sum().sort_values(ascending = False).reset_index().head() item_sum.rename(columns={item_sum.columns[0]:'Item_name',item_sum.columns[1]:'Item_count'}, inplace=True) # item percent and then cumulative percent. 
tot_item_count = sum(final_df.sum()) # Answer is 34 item_sum['Item_percent'] = item_sum['Item_count']/tot_item_count item_sum['Tot_percent'] = item_sum.Item_percent.cumsum() item_sum.head() # List of items with cumulative percentage import matplotlib.pyplot as plt import numpy as np obj = (list(item_sum['Item_name'].head(n=20))) y_pos = np.arange(len(obj)) performance = list(item_sum['Item_count'].head(n=20)) plt.bar(y_pos, performance, align='center', alpha=0.5) plt.xticks(y_pos, obj, rotation='vertical') plt.ylabel('Item count') plt.title('Item sales distribution') ###Output _____no_output_____ ###Markdown From the above graph we can see that the most sold item is item2 and the least sold item is item4. Now that the data is ready, I am going to use the MLxtend implementation of the Apriori algorithm ###Code from mlxtend.frequent_patterns import apriori, association_rules # Applying the Apriori algorithm and using column names for the itemsets ap_out= apriori(final_df, min_support=0.0001,use_colnames=True) ap_out.count() #number of records in the dataframe ###Output _____no_output_____ ###Markdown We need to count the number of itemsets that contain a single item and two items. We can generate a length column and then filter on it ###Code #generating all the itemsets with min_support > 0.001 (so that all the combinations get listed) frequent_itemsets = apriori(final_df, min_support=0.001, use_colnames=True) frequent_itemsets['length'] = frequent_itemsets['itemsets'].apply(lambda x: len(x)) frequent_itemsets # now, we can select the results that satisfy our desired criteria as follows: # Itemsets with length 1 or 2 and support greater than 0.5 frequent_itemsets[ ((frequent_itemsets['length'] == 1) | (frequent_itemsets['length'] == 2)) & (frequent_itemsets['support'] >= 0.5) ] out=frequent_itemsets[ ((frequent_itemsets['length'] == 1) | (frequent_itemsets['length'] == 2)) & (frequent_itemsets['support'] >= 0.5) ] out.groupby(['length']).count() ###Output _____no_output_____ ###Markdown From the above output we can conclude that we have 4 itemsets of one item and 4 itemsets of two items with support greater than 0.5 ###Code # now, we can select the results that satisfy our desired criteria as follows: # Itemsets with length 3 and support greater than 0.5 frequent_itemsets[ (frequent_itemsets['length'] == 3) & (frequent_itemsets['support'] >= 0.5) ] # there is no itemset of length 3 with support greater than or equal to 0.5 frequent_itemsets[ (frequent_itemsets['length'] == 3) ] out=frequent_itemsets[ (frequent_itemsets['length'] == 3) ] out.groupby(['length']).max() ###Output _____no_output_____ ###Markdown From the above output we can see that for 3 items the maximum support is 0.45. Now running the association rules function to get the support, confidence and lift ###Code rule=association_rules(frequent_itemsets, metric='confidence', min_threshold=0.01, support_only=False) rule ###Output _____no_output_____ ###Markdown Now applying filters based on the given association rules R1: {item1,item2} => {item3} R2: {item3,item5} => {item2} ###Code selected_rule1=rule[(rule['antecedents'] == {'item1', 'item2'}) & (rule['consequents']=={'item3'}) ] #selecting the required columns from the dataframe selected_rule1[['antecedents','consequents','support','confidence','lift']] selected_rule2=rule[(rule['antecedents'] == {'item3', 'item5'}) & (rule['consequents']=={'item2'}) ] #selecting the required columns from the dataframe selected_rule2[['antecedents','consequents','support','confidence','lift']] ###Output _____no_output_____ 
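###Markdown As a sanity check, the confidence of R2 can be recomputed by hand from the one-hot frame, since confidence(A->C) = support(A and C) / support(A). The short sketch below assumes the encoded columns are named exactly item2, item3 and item5, matching the item labels used in the rules. ###Code
# Hand-check of confidence for R2: {item3, item5} -> {item2}
ant = final_df[['item3', 'item5']].all(axis=1)       # transactions containing the full antecedent
both = ant & final_df['item2'].astype(bool)          # ... that also contain the consequent
print('support(A) =', ant.mean())
print('confidence(A->C) =', both.sum() / ant.sum())  # should match the value reported for R2 above
###Output _____no_output_____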
###Markdown Combining both rules into a single dataframe for comparison ###Code frames = [selected_rule1,selected_rule2] result=pd.concat(frames) result ###Output _____no_output_____ ###Markdown Based on the above output, we can see that Rule 2's confidence value is higher than Rule 1's. The confidence of a rule A->C is the probability of seeing the consequent in a transaction given that the transaction also contains the antecedent. Based on the given data, we should go with Rule 2 for better prediction of cross-sales. Visualizing the metrics: Support vs Confidence ###Code plt.scatter(rule['support'], rule['confidence'], alpha=0.5) plt.xlabel('support') plt.ylabel('confidence') plt.title('Support vs Confidence') plt.show() ###Output _____no_output_____ ###Markdown Support vs Lift ###Code plt.scatter(rule['support'], rule['lift'], alpha=0.5) plt.xlabel('support') plt.ylabel('lift') plt.title('Support vs Lift') plt.show() ###Output _____no_output_____ ###Markdown Lift vs Confidence ###Code fit = np.polyfit(rule['lift'], rule['confidence'], 1) fit_fn = np.poly1d(fit) plt.title('Lift vs Confidence') plt.plot(rule['lift'], rule['confidence'], 'yo', rule['lift'], fit_fn(rule['lift'])) ###Output _____no_output_____
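###Markdown To relate the points in the scatter plots back to concrete rules, the full rule table can simply be ranked. A short sketch reusing the rule DataFrame computed earlier: ###Code
# Top rules by lift (ties broken by confidence); handy for reading the Lift vs Confidence plot above.
top_rules = rule.sort_values(['lift', 'confidence'], ascending=False)
top_rules[['antecedents', 'consequents', 'support', 'confidence', 'lift']].head(10)
###Output _____no_output_____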
lendo-arquivo-csv-da-internet.ipynb
###Markdown pandas reading a csv from the internet This is one of the ways to import files from the internet, directly with pandas. But you will not always be able to open them directly like this. 2 main situations in which you can do it directly: 1. The csv file is directly at the link (best of all worlds) 2. The csv file is generated for you, but sits inside a request that needs to be handled. Case 1: csv directly at the link - I created a csv file and made the download link available on Drive: https://drive.google.com/uc?authuser=0&id=1UzlPy6CZQeAzDXhfc_2sHEyK_Jb50vJs&export=download ###Code import pandas as pd url = 'https://drive.google.com/uc?authuser=0&id=1UzlPy6CZQeAzDXhfc_2sHEyK_Jb50vJs&export=download' cotacao_df = pd.read_csv(url) display(cotacao_df) ###Output _____no_output_____ ###Markdown Case 2: csv inside a request that needs to be handled I searched Google for the historical price of coffee and got to this site: http://portalweb.cooxupe.com.br:8080/portal/precohistoricocafe_2.jsp ###Code import pandas as pd import requests import io url = 'http://portalweb.cooxupe.com.br:8080/portal/precohistoricocafe_2.jsp?d-3496238-e=2&6578706f7274=1' # pass the link here conteudo_url = requests.get(url).content arquivo = io.StringIO(conteudo_url.decode('latin1')) cafe_df = pd.read_csv(arquivo, sep=r'\t', engine='python') # the sep value will vary from file to file display(cafe_df) ###Output _____no_output_____
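###Markdown When the separator or the encoding of a downloaded csv is not obvious (as in case 2), a quick look at the raw bytes usually settles both before calling read_csv. A small sketch, with a placeholder URL that stands in for whatever file you are inspecting: ###Code
import requests

url = 'http://example.com/some_report.csv'   # placeholder URL, replace with the real file
raw = requests.get(url).content

print(raw[:300])                              # peek at the raw bytes to spot the delimiter (',', ';', '\t', ...)
for enc in ('utf-8', 'latin1'):               # try the most common encodings first
    try:
        print(enc, '->', raw[:120].decode(enc))
        break
    except UnicodeDecodeError:
        pass
###Output _____no_output_____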