# Softmax Regression with TensorFlow

TensorFlow has become an extremely popular distributed machine learning framework in recent years. I had long wanted to learn it but kept getting sidetracked. This semester I happened to take the course "Artificial Neural Networks", so I finally had to get hands-on with it. Following the textbook, I first set up the environment and installed TensorFlow through Anaconda. TensorFlow did install, but it was a real pain to use that way. In the end I uninstalled Anaconda and installed it directly with `pip install tensorflow`, only to find that my system Python was version 3.6.3, which did not seem to be supported by TensorFlow at the time, so I had to install another Python version to get things working. What a saga. Anyway, let's officially start the TensorFlow journey with an example that uses softmax regression to recognize handwritten digits.

Softmax is a very common way of post-processing a model's outputs: it normalizes them so that they can be read as probabilities, which is extremely useful. For more details, see this Zhihu question: [What are the characteristics and uses of the softmax function?](https://www.zhihu.com/question/23765351)

## The dataset

This example uses the `mnist` dataset, which is famous in the machine learning community. Let's first get to know it; TensorFlow can download and load it automatically. After loading it, we print the size of the training set. Since this softmax regression uses the handwritten images in MNIST as training data, it also helps to look at a few of the images to get an intuitive feel for the data; that is where the matplotlib plotting library comes in.

```
from tensorflow.examples.tutorials.mnist import input_data
import os
import matplotlib.pyplot as plt
import numpy as np

os.environ["TF_CPP_MIN_LOG_LEVEL"] = '3'  # suppress warning output

# Load the MNIST dataset. one_hot=True encodes the class labels as one-hot vectors,
# which is the form softmax regression expects.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print("Size of the training set: {0}".format(mnist.train.images.shape))

fig = plt.figure("Data preview")
for k in range(3):
    result = []
    temp = []
    img = mnist.train.images[k]  # the k-th image: a 28*28 picture flattened into a 784-dim vector
    for i in range(img.shape[0]):
        temp.append(img[i])
        if (i + 1) % 28 == 0:
            result.append(temp)
            temp = []
    img = np.matrix(result, dtype=np.float)  # matrix form of the image
    ax = fig.add_subplot(130 + k + 1)
    ax.imshow(img)
plt.show()
```

From the output of the code above we can see that the training set has shape 55000x784. Each row vector is the one-dimensional flattening of a 28x28 image. In image recognition the spatial position of each pixel also carries a lot of information, but we will not worry about that here and simply work with the flattened vectors. The first three images of the dataset, drawn with matplotlib, are shown above. Next we use softmax regression to build a basic handwritten digit recognizer.

## Softmax regression

Here is a brief review of softmax regression. It is still a linear model; the only difference is that the output layer is passed through the softmax function. Training uses the classic stochastic gradient descent method. The decision function has the form:

$$f(x)=wx+b$$

**Note: softmax regression handles multi-class problems, so all variables above are matrices or vectors.**

The model output f(x) is not the final output yet; it still has to be processed by the softmax function, which has the form:

$$softmax(x)_{i}=\frac{exp(x_{i})}{\sum_{j}^{n}exp(x_{j})}$$

The processed output can then be interpreted as the probability of the input belonging to each class. Using softmax has several other advantages as well, the most important being the convenient form of the loss function; I will not list them all here.

The loss function of softmax regression is the cross entropy:

$$H_{y^{'}}(y)=-\sum y_{i}^{'}ln(y_{i})$$

Finally, I want to derive the stochastic gradient descent update for this loss under the softmax function. TensorFlow does this for us, but as the people writing the algorithm we should still understand the details. First we need the gradient of the loss with respect to `w`:

$$\frac{\partial H_{y^{'}}(y)}{\partial w}=\frac{\partial \left(-\sum y_{i}^{'}ln(y_{i})\right)}{\partial w}=\frac{\partial \left(-y\,ln(softmax(xw + b))\right)}{\partial w}$$

This derivative is a bit involved, so we use the chain rule:

$$\frac{\partial Loss}{\partial w}=\frac{\partial Loss}{\partial\, softmax(xw + b)}\cdot\frac{\partial\, softmax(xw + b)}{\partial (xw + b)}\cdot\frac{\partial (xw + b)}{\partial w}$$

Each factor of this chain is fairly simple; the first and last are easy to obtain, and the key is the second one. Here we directly give the derivative of the softmax function:

$$\frac{\partial\, softmax(xw + b)}{\partial (xw + b)}=softmax(xw + b)\,(1 - softmax(xw + b))$$

The first and third factors are:

$$\frac{\partial Loss}{\partial\, softmax(xw + b)}=\frac{\partial \left(-y\,ln(softmax(xw + b))\right)}{\partial\, softmax(xw + b)}=\frac{-y}{softmax(xw + b)}$$

$$\frac{\partial (xw + b)}{\partial w}=x$$

Therefore:

$$\frac{\partial \left(-y\,ln(softmax(xw + b))\right)}{\partial w}=y\,(softmax(xw + b) - 1)\,x$$

We can now iterate on `w` with the stochastic gradient descent update (note the minus sign: we move against the gradient to minimize the loss):

$$w=w-\alpha \frac{\partial \left(-y\,ln(softmax(xw + b))\right)}{\partial w}$$

(A small NumPy sketch of this update rule appears at the end of this article.)

## Implementing softmax regression in TensorFlow

First we define the parameters of the model. The input x and the true labels y_ are not fixed; they occupy nodes of the computation graph as `placeholder`s. The code is as follows:

```
import tensorflow as tf

session = tf.InteractiveSession()  # create an interactive session
x = tf.placeholder(tf.float32, [None, 784])
w = tf.Variable(tf.zeros([784, 10]))  # weights w initialized to 0
b = tf.Variable(tf.zeros([1, 10]))    # bias initialized to 0
y = tf.nn.softmax(tf.matmul(x, w) + b)
y_ = tf.placeholder(tf.float32, [None, 10])
```

Next we define the loss `cross_entry`, specify the optimization target, and initialize the global variables:

```
cross_entry = -tf.reduce_mean(tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_set = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entry)
tf.global_variables_initializer().run()
```

With the preparation done, it is time to train the model. In this example we iterate one thousand times, each time drawing a random batch of 100 training samples:

```
for i in range(1000):
    trainSet, trainLabel = mnist.train.next_batch(100)
    train_set.run(feed_dict={x: trainSet, y_: trainLabel})
```

With that, the model has been trained, and we should check how well it does. We use the MNIST test data to evaluate the classification performance:

```
accuracy = tf.cast(tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)), dtype=tf.float32)
accuracy = tf.reduce_mean(accuracy)
print("Classification accuracy of the model: {0:.3f}".format(
    session.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})))
```

From the output of the code above we can see that the accuracy of this little example is quite high, close to 92%. The softmax regression used in this introduction is essentially a neural network with no hidden layer: it has only an input layer and an output layer, connected by a simple linear map. Real handwritten image recognition would rarely use such a linear model; convolutional neural networks (CNNs) are far more common.

## Afterword

This article is a small application marking my first steps with TensorFlow. Let me share my views on neural networks, machine learning and TensorFlow. The hottest buzzwords of the last few years are probably "artificial intelligence" and "deep learning". I was no exception and joined the huge crowd marching into machine learning. From the simplest stochastic gradient descent and linear regression to the harder sequential minimal optimization and support vector machines, step by step I found this branch of computer science genuinely interesting: seemingly rigorous, dry yet elegant mathematical models can actually exhibit a hint of "intelligence", and sometimes that really amazes me.

Since 2006, with the rapid growth of computing power, the once-neglected neural networks have become hot again. Personally I am not so sure about their future. First, the history of neural networks has been a roller coaster: the perceptron and backpropagation were each all the rage and then fizzled out; who knows whether this wave of AI enthusiasm will last, or whether it will hit some ceiling and go quiet again. After all, the "weak AI" the industry studies today is still a long way from real AI, and it is hard to say whether the gap is a matter of technology or of philosophy. Second, the interpretability of today's neural network models is poor; compared with models like SVMs, we can hardly explain why neural networks classify as well as they do. Third, in exploring AI, humans keep deliberately imitating their own brains, building many brain-like mechanisms into network designs, but is that really the right path? Humans took to the sky not with flapping wings but with the fixed wings of the airplane.

TensorFlow is a very popular machine learning framework from Google and by now looks like the dominant player among ML frameworks. My feeling about such frameworks is that one should neither ignore them and reinvent every wheel, nor depend on them too much. I used to reject all frameworks and implemented many classic machine learning algorithms myself, but for neural networks the cost of hand-coding everything is simply too high, so I finally fell into the framework rabbit hole. My view: for any algorithm, first understand it yourself and work through the math, then read the code and use the framework. Never treat a model as a black-box tool; that may look efficient in the short term, but in the long run it does far more harm than good. Along the way, building some small demos of your own adds to the fun of programming and is well worth doing.
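To make the gradient derivation above concrete, here is a minimal NumPy sketch of one SGD step for softmax regression (an addition, not the author's code). It uses the standard end result that, for one-hot labels, the gradient of the cross entropy with respect to the logits is simply `softmax(xw + b) - y`; the full softmax Jacobian also has off-diagonal terms, which the diagonal-only derivation above glosses over. Shapes follow the MNIST example (784 inputs, 10 classes), and the data here is random.

```
import numpy as np

def softmax(z):
    # subtract the row-wise max for numerical stability
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def sgd_step(w, b, x, y, lr=0.5):
    """One SGD step for softmax regression on a mini-batch.
    x: (n, 784) inputs, y: (n, 10) one-hot labels."""
    p = softmax(x @ w + b)          # predicted class probabilities
    grad_z = (p - y) / x.shape[0]   # gradient of the mean cross entropy w.r.t. the logits
    w -= lr * x.T @ grad_z          # dL/dw = x^T (p - y)
    b -= lr * grad_z.sum(axis=0)    # dL/db = column sums of (p - y)
    return w, b

# toy check on random data (shapes match the MNIST example above)
rng = np.random.default_rng(0)
x = rng.random((100, 784))
y = np.eye(10)[rng.integers(0, 10, 100)]
w, b = np.zeros((784, 10)), np.zeros(10)
w, b = sgd_step(w, b, x, y)
```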
# Functional Python

## BProf Python course

### June 25-29, 2018

#### Judit Ács

Python has three classic functions that originate from functional programming: `map`, `filter`, and `reduce` (the last of which now lives in `functools`).

## Map

- `map` applies a function to each element of a sequence

```
def double(e):
    return e * 2

l = [2, 3, "abc"]
list(map(double, l))
map(double, l)

%%python2
def double(e):
    return e * 2

l = [2, 3, "abc"]
print(map(double, l))

list(map(lambda x: x * 2, [2, 3, "abc"]))

class Doubler:
    def __call__(self, arg):
        return arg * 2

list(map(Doubler(), l))

[x * 2 for x in l]
```

## Filter

- `filter` keeps the elements of a sequence for which a function returns true (in Python 3 it returns a lazy iterator, hence the `list(...)` wrappers)

```
def is_even(n):
    return n % 2 == 0

l = [2, 3, -1, 0, 2]
list(filter(is_even, l))

list(filter(lambda x: x % 2 == 0, range(8)))

[e for e in l if e % 2 == 0]
```

### Most comprehensions can be rewritten using map and filter

```
l = [2, 3, 0, -1, 2, 0, 1]
signum = [x / abs(x) if x != 0 else x for x in l]
print(signum)

list(map(lambda x: x / abs(x) if x != 0 else 0, l))

even = [x for x in l if x % 2 == 0]
print(even)
print(list(filter(lambda x: x % 2 == 0, l)))
```

## Reduce

- reduce applies a rolling computation on a sequence
- the first argument of `reduce` is a two-argument function
- the second argument is the sequence
- the result is accumulated in an accumulator

```
from functools import reduce

l = [1, 2, -1, 4]
reduce(lambda x, y: x*y, l)
```

An initial value for the accumulator may be supplied:

```
reduce(lambda x, y: x*y, l, 10)
reduce(lambda x, y: max(x, y), l)
reduce(max, l)
reduce(max, map(lambda n: n*n, l))
reduce(lambda x, y: x + int(y % 2 == 0), l, 0)
```

# `any` and `all`

Check whether any element, or every element, of an iterable evaluates to `True` in a boolean context.

```
def is_even(num):
    if num % 2 == 0:
        print("{} is even".format(num))
        return True
    print("{} is odd".format(num))
    return False

l = [2, 4, 0, -1, 6, 8, 1]
# all(map(is_even, l))
all(is_even(i) for i in l)

l = [3, 1, 5, 0, 7, 0, 0]
# any(map(is_even, l))
any(is_even(i) for i in l)
```

## `zip`

```
x = [1, 2, 0]
y = [-2, 6, 0, 2]

for pair in zip(x, y):
    print(type(pair), pair)

for pair in zip(x, y, x, y):
    print(type(pair), pair)
```

## for and while loops do not create a new scope but functions do

```
y = "outside foo"

def foo():
    i = 2
    for _ in range(4):
        y = 3
        print(y)

print("Calling foo")
foo()
print("Global y unchanged: {}".format(y))
```

# Global Interpreter Lock (GIL)

- CPython, the reference implementation, has a reference-counting garbage collector
- reference counting GC is **not** thread-safe :(
- "GIL, is a mutex that protects access to Python objects, preventing multiple threads from executing Python bytecodes at once"
- IO, image processing and Numpy (numerical computation and matrix library) heavy lifting happens outside the GIL
- other computations cannot fully take advantage of multithreading :(
- Jython and IronPython do not have a GIL

## See also

[Python wiki page on the GIL](https://wiki.python.org/moin/GlobalInterpreterLock)

[Live GIL removal (advanced)](https://www.youtube.com/watch?v=pLqv11ScGsQ)
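As a complement to the cells above, here is a small sketch (not part of the original notebook) of what `reduce` does under the hood: a rolling accumulator folded over the sequence. The helper name `my_reduce` is made up for illustration.

```
from functools import reduce

def my_reduce(fn, seq, initial=None):
    """Minimal reimplementation of functools.reduce to show the rolling accumulator."""
    it = iter(seq)
    acc = next(it) if initial is None else initial
    for item in it:
        acc = fn(acc, item)   # fold the next element into the accumulator
    return acc

l = [1, 2, -1, 4]
assert my_reduce(lambda x, y: x * y, l) == reduce(lambda x, y: x * y, l)
assert my_reduce(lambda x, y: x * y, l, 10) == reduce(lambda x, y: x * y, l, 10)
```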
``` #r "nuget:Microsoft.ML,1.4.0" #r "nuget:Microsoft.ML.AutoML,0.16.0" #r "nuget:Microsoft.Data.Analysis,0.1.0" using Microsoft.Data.Analysis; using XPlot.Plotly; using Microsoft.AspNetCore.Html; Formatter<DataFrame>.Register((df, writer) => { var headers = new List<IHtmlContent>(); headers.Add(th(i("index"))); headers.AddRange(df.Columns.Select(c => (IHtmlContent) th(c.Name))); var rows = new List<List<IHtmlContent>>(); var take = 20; for (var i = 0; i < Math.Min(take, df.RowCount); i++) { var cells = new List<IHtmlContent>(); cells.Add(td(i)); foreach (var obj in df[i]) { cells.Add(td(obj)); } rows.Add(cells); } var t = table( thead( headers), tbody( rows.Select( r => tr(r)))); writer.Write(t); }, "text/html"); using System.IO; using System.Net.Http; string housingPath = "housing.csv"; if (!File.Exists(housingPath)) { var contents = new HttpClient() .GetStringAsync("https://raw.githubusercontent.com/ageron/handson-ml2/master/datasets/housing/housing.csv").Result; File.WriteAllText("housing.csv", contents); } var housingData = DataFrame.LoadCsv(housingPath); housingData housingData.Description() Chart.Plot( new Graph.Histogram() { x = housingData["median_house_value"], nbinsx = 20 } ) var chart = Chart.Plot( new Graph.Scattergl() { x = housingData["longitude"], y = housingData["latitude"], mode = "markers", marker = new Graph.Marker() { color = housingData["median_house_value"], colorscale = "Jet" } } ); chart.Width = 600; chart.Height = 600; display(chart); static T[] Shuffle<T>(T[] array) { Random rand = new Random(); for (int i = 0; i < array.Length; i++) { int r = i + rand.Next(array.Length - i); T temp = array[r]; array[r] = array[i]; array[i] = temp; } return array; } int[] randomIndices = Shuffle(Enumerable.Range(0, (int)housingData.RowCount).ToArray()); int testSize = (int)(housingData.RowCount * .1); int[] trainRows = randomIndices[testSize..]; int[] testRows = randomIndices[..testSize]; DataFrame housing_train = housingData[trainRows]; DataFrame housing_test = housingData[testRows]; display(housing_train.RowCount); display(housing_test.RowCount); using Microsoft.ML; using Microsoft.ML.Data; using Microsoft.ML.AutoML; #!time var mlContext = new MLContext(); var experiment = mlContext.Auto().CreateRegressionExperiment(maxExperimentTimeInSeconds: 15); var result = experiment.Execute(housing_train, labelColumnName:"median_house_value"); var scatters = result.RunDetails.Where(d => d.ValidationMetrics != null).GroupBy( r => r.TrainerName, (name, details) => new Graph.Scattergl() { name = name, x = details.Select(r => r.RuntimeInSeconds), y = details.Select(r => r.ValidationMetrics.MeanAbsoluteError), mode = "markers", marker = new Graph.Marker() { size = 12 } }); var chart = Chart.Plot(scatters); chart.WithXTitle("Training Time"); chart.WithYTitle("Error"); display(chart); Console.WriteLine($"Best Trainer:{result.BestRun.TrainerName}"); var testResults = result.BestRun.Model.Transform(housing_test); var trueValues = testResults.GetColumn<float>("median_house_value"); var predictedValues = testResults.GetColumn<float>("Score"); var predictedVsTrue = new Graph.Scattergl() { x = trueValues, y = predictedValues, mode = "markers", }; var maximumValue = Math.Max(trueValues.Max(), predictedValues.Max()); var perfectLine = new Graph.Scattergl() { x = new[] {0, maximumValue}, y = new[] {0, maximumValue}, mode = "lines", }; var chart = Chart.Plot(new[] {predictedVsTrue, perfectLine }); chart.WithXTitle("True Values"); chart.WithYTitle("Predicted Values"); chart.WithLegend(false); chart.Width = 
600; chart.Height = 600; display(chart); #!lsmagic new [] { 1,2,3 } new { foo ="123" } #!fsharp [1;2;3] b("hello").ToString() ```
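For readers more at home in Python, here is a rough pandas equivalent (an addition, not part of the notebook) of the shuffle-and-slice train/test split done in C# above. It assumes `housing.csv` has already been downloaded as in the earlier cell and that a 10% test fraction is wanted.

```
import numpy as np
import pandas as pd

# Assumes housing.csv exists locally, as downloaded by the notebook above.
housing = pd.read_csv("housing.csv")

rng = np.random.default_rng(42)
indices = rng.permutation(len(housing))   # shuffled row indices, like Shuffle(...) above
test_size = int(len(housing) * 0.1)       # hold out 10% for testing

test_rows, train_rows = indices[:test_size], indices[test_size:]
housing_train = housing.iloc[train_rows]
housing_test = housing.iloc[test_rows]

print(len(housing_train), len(housing_test))
```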
# Matching Registry and PA State Business License Data ``` import pandas as pd import mwdsbe import mwdsbe.datasets.licenses as licenses import schuylkill as skool import time def drop_duplicates_by_date(df, date_column): df.sort_values(by=date_column, ascending=False, inplace=True) df = df.loc[~df.index.duplicated(keep="first")] df.sort_index(inplace=True) return df ``` ## Data ``` registry = mwdsbe.load_registry() # geopandas df license = licenses.CommercialActivityLicenses().get() registry.head() state_license = pd.read_csv('./data/PAStateBusinessLicense/Sales_Tax_Licenses_and_Certificates_Current_Monthly_County_Revenue.csv') print('Size of state_license data:', len(state_license)) # convert state_license column names from titlecase to snakecase def to_snake_case(aList): res = [] for item in aList: words = item.strip().lower().split(' ') item = '_'.join(words) res.append(item) return res state_license.columns = to_snake_case(state_license.columns.tolist()) # clean data ignore_words = ['inc', 'group', 'llc', 'corp', 'pc', 'incorporated', 'ltd', 'co', 'associates', 'services', 'company', 'enterprises', 'enterprise', 'service', 'corporation'] cleaned_registry = skool.clean_strings(registry, ['company_name', 'dba_name'], True, ignore_words) cleaned_license = skool.clean_strings(license, ['company_name'], True, ignore_words) cleaned_state_license = skool.clean_strings(state_license, ['legal_name', 'trade_name'], True, ignore_words) cleaned_registry = cleaned_registry.dropna(subset=['company_name']) cleaned_license = cleaned_license.dropna(subset=['company_name']) cleaned_state_license = cleaned_state_license.dropna(subset=['legal_name']) len(cleaned_license) cleaned_state_license.head() # just getting PA state in registry pa_registry = cleaned_registry[cleaned_registry.location_state == 'PA'] len(pa_registry) ``` ## Merge registry and state_license by company_name and legal_name / trade name ``` # t1 = time.time() # merged = ( # skool.tf_idf_merge(pa_registry, cleaned_state_license, left_on="company_name", right_on="legal_name", score_cutoff=85) # .pipe(skool.tf_idf_merge, pa_registry, cleaned_state_license, left_on="company_name", right_on="trade_name", score_cutoff=85) # .pipe(skool.tf_idf_merge, pa_registry, cleaned_state_license, left_on="dba_name", right_on="legal_name", score_cutoff=85) # .pipe(skool.tf_idf_merge, pa_registry, cleaned_state_license, left_on="dba_name", right_on="trade_name", score_cutoff=85) # ) # t = time.time() - t1 # print('Execution time:', t/60, 'min') # matched = merged.dropna(subset=['legal_name']) # matched.to_excel (r'C:\Users\dabinlee\Desktop\mwdsbe\data\state_license\pa-registry-full-state-license\tf-idf-85.xlsx', header=True) matched_state = pd.read_excel(r'C:\Users\dabinlee\Desktop\mwdsbe\data\state_license\pa-registry-full-state-license\tf-idf-85.xlsx') len(matched_state) exact_matches = matched_state[matched_state.match_probability == 1] len(exact_matches) ``` ##### Eliminate companies with different zip code ``` matched_state['postal_code_clean'] = matched_state.postal_code.astype(str).apply(lambda x : x.split("-")[0]).astype(float) matched_state = matched_state.set_index('left_index') matched_state_zip = matched_state[matched_state.zip_code == matched_state.postal_code_clean] len(matched_state_zip) matched_state_zip['expiration_date'] = pd.to_datetime(matched_state_zip['expiration_date'], errors='coerce') matched_state_zip = drop_duplicates_by_date(matched_state_zip, 'expiration_date') len(matched_state_zip) # state_license, same zip code, without 
duplicates matched_state_zip ``` ## Comparing between match between "registry-opendata license" and "registry-state license" How many more new companies do we get from registry-opendata license matching? ``` # t1 = time.time() # merged = ( # skool.tf_idf_merge(cleaned_registry, cleaned_license, on="company_name", score_cutoff=85) # .pipe(skool.tf_idf_merge, cleaned_registry, cleaned_license, left_on="dba_name", right_on="company_name", score_cutoff=85) # ) # t = time.time() - t1 # print('Execution time:', t/60, 'min') # matched_openphilly_license = merged.dropna(subset=['company_name_y']) # len(matched_openphilly_license) # matched_openphilly_license.issue_date = matched_openphilly_license.issue_date.astype(str) # matched_openphilly_license.to_excel (r'C:\Users\dabinlee\Desktop\mwdsbe\data\license-opendataphilly\tf-idf\tf-idf-85.xlsx', header=True) ``` ##### Loading matched of registry and opendataphilly_license data ``` matched_opendataphilly_license = pd.read_excel(r'C:\Users\dabinlee\Desktop\mwdsbe\data\license-opendataphilly\tf-idf\tf-idf-85.xlsx') matched_opendataphilly_license = matched_opendataphilly_license.set_index('left_index') len(matched_opendataphilly_license) matched_opendataphilly_license = drop_duplicates_by_date(matched_opendataphilly_license, "issue_date") # without duplicates len(matched_opendataphilly_license) matched_opendataphilly_license.tail() # unique company? len(matched_opendataphilly_license.index.unique()) # yes diff = matched_state_zip.index.difference(matched_opendataphilly_license.index).tolist() len(diff) matched_state_zip.loc[diff][['company_name', 'dba_name', 'legal_name', 'trade_name']] # newly matched: matching with state_license data difference = matched_state_zip.loc[diff] difference # difference.to_excel (r'C:\Users\dabinlee\Desktop\mwdsbe\data\difference.xlsx', header=True) ``` ## Investigate missing companies in opendataphilly license data We found 94 newly matched companies from matching between registry and state_license, why these are not appeared in matching between registry and opendataphilly license data? 
``` t1 = time.time() merged = ( skool.tf_idf_merge(cleaned_registry, cleaned_license, on="company_name", score_cutoff=0) .pipe(skool.tf_idf_merge, cleaned_registry, cleaned_license, left_on="dba_name", right_on="company_name", score_cutoff=0) ) t = time.time() - t1 print('Execution time:', t/60, 'min') matched = merged.dropna(subset=['company_name_y']) matched = drop_duplicates_by_date(matched, 'issue_date') matched cleaned_registry.loc[cleaned_registry.index.difference(matched.index)] matched = matched.loc[difference.index] matched.issue_date = matched.issue_date.astype(str) matched.to_excel (r'C:\Users\dabinlee\Desktop\mwdsbe\data\missing94.xlsx', header=True) matched.match_probability.median() difference.match_probability.median() ``` ### Compare intersection between matched_opendataphilly_license and matched_state_zip * matched_opendataphilly_license: matched data between pa_registry and license data from opendataphilly * matched_state_zip: matched data between pa_registry and license data from state_registry and filter matches which do not match zipcodes ``` intersection = matched_state_zip.index.intersection(matched_opendataphilly_license.index).tolist() len(intersection) # 246 - 94 intersection1 = matched_opendataphilly_license.loc[intersection] len(intersection1) intersection2 = matched_state_zip.loc[intersection] len(intersection2) intersection1 = intersection1[['company_name_x', 'dba_name', 'match_probability', 'company_name_y']] intersection2 = intersection2[['match_probability', 'legal_name', 'trade_name']] intersection = intersection1.merge(intersection2, left_index=True, right_index=True) intersection # intersection.to_excel (r'C:\Users\dabinlee\Desktop\mwdsbe\data\intersection.xlsx', header=True) ``` ## Merge by address - Not matching well ``` cleaned_state_license.head() # split street address from address_with_lat/long cleaned_state_license['street_address'] = cleaned_state_license['address_with_lat/long'].astype(str).apply(lambda x : x.split("\n")[0]) cleaned_state_license.head() pa_registry.head() # clean street information cleaned_pa_registry = skool.clean_strings(pa_registry, ['location'], True) cleaned_state_license = skool.clean_strings(cleaned_state_license, ['street_address'], True) cleaned_pa_registry = cleaned_pa_registry.dropna(subset=['location']) cleaned_state_license = cleaned_state_license.dropna(subset=['street_address']) t1 = time.time() merged_by_street = skool.tf_idf_merge(cleaned_pa_registry, cleaned_state_license, left_on='location', right_on='street_address', score_cutoff=95) t = time.time() - t1 print('Execution time:', t/60, 'min') matched_by_street = merged_by_street.dropna(subset=['street_address']) len(matched_by_street) len(matched_by_street.index.unique()) # bug in tf-idf merge: not doing best match matched_by_street[['company_name', 'location', 'match_probability', 'legal_name', 'trade_name', 'street_address']] # matched_by_street.to_excel (r'C:\Users\dabinlee\Desktop\mwdsbe\data\state_license\by_street\tf-idf-95.xlsx', header=True) diff = matched_by_street.index.difference(matched_opendataphilly_license.index) # newly catched matches len(diff) newly_matched_by_street = matched_by_street.loc[diff][['company_name', 'dba_name', 'location', 'legal_name', 'trade_name', 'street_address']] # newly_matched_by_street.to_excel (r'C:\Users\dabinlee\Desktop\mwdsbe\data\state_license\by_street\tf-idf-95-diff.xlsx', header=True) ```
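The cleaning itself is delegated to `skool.clean_strings` above. As a rough illustration of the same idea (lowercasing, stripping punctuation, and dropping the `ignore_words` suffixes before fuzzy matching), here is a plain-pandas sketch; the helper `clean_company_name` and the sample names are made up.

```
import re
import pandas as pd

IGNORE_WORDS = {
    "inc", "group", "llc", "corp", "pc", "incorporated", "ltd", "co",
    "associates", "services", "company", "enterprises", "enterprise",
    "service", "corporation",
}

def clean_company_name(name):
    """Lowercase, strip punctuation, and drop common suffix words before fuzzy matching."""
    if pd.isna(name):
        return name
    tokens = re.sub(r"[^a-z0-9 ]", " ", str(name).lower()).split()
    return " ".join(t for t in tokens if t not in IGNORE_WORDS)

names = pd.Series(["ACME Services, LLC", "Acme Service Co.", None])
print(names.apply(clean_company_name))   # both ACME variants reduce to "acme"
```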
``` import pandas as pd from datetime import datetime, timedelta import pickle start_date = datetime(2018, 10, 16, 0, 0, 0) end_date = datetime(2018, 12, 31, 0, 0, 0) num_days = (end_date - start_date).days + 1 dfs = pd.DataFrame(index=range(num_days)) entries = [] for d in range(num_days): day = start_date + timedelta(days=d) dstr = day.strftime('%Y%m%d') url = 'http://www.espn.com/nba/schedule/_/date/{0}'.format(dstr) x = pd.read_html(url) df = x[0] if (len(df) > 1): for j in range(len(df)): t1 = df['matchup'].iloc[j] t2 = df['Unnamed: 1'].iloc[j] t1s = t1.split(' ') home = t1s[-1:][0] t2s = t2.split(' ') away = t2s[-1:][0] entries.append((day, home, away)) print(dstr, home, away) dfs = pd.DataFrame(entries, columns=['day', 'home', 'away']) dfs.home.unique() mydf = dfs.copy() atlantic = ['BOS', 'BRK', 'NYK', 'PHI', 'TOR'] central = ['CHI', 'CLE', 'DET', 'IND', 'MIL'] southeast = ['ATL', 'CHA', 'MIA', 'ORL', 'WAS'] southwest = ['DAL', 'HOU', 'MEM', 'NOP', 'SAS'] northwest = ['DEN', 'MIN', 'OKC', 'POR', 'UTA'] pacific = ['GSW', 'LAC', 'LAL', 'PHX', 'SAC'] mydf.home.replace({'NO': 'NOP', 'BKN': 'BRK', 'NY': 'NYK', 'UTAH': 'UTA', 'GS': 'GSW', 'SA': 'SAS', 'WSH': 'WAS'}, inplace=True) mydf.away.replace({'NO': 'NOP', 'BKN': 'BRK', 'NY': 'NYK', 'UTAH': 'UTA', 'GS': 'GSW', 'SA': 'SAS', 'WSH': 'WAS'}, inplace=True) mydf.head() mydf['month'] = mydf['day'].dt.month mydf.head() mydf.shape nbs_df = pd.read_csv('social_nba_2.csv') nbs_df2 = nbs_df[(nbs_df.year==2018) & (nbs_df.month==8)] nbs_df2.team.unique() dx = nbs_df2.copy() dx.columns dx[(dx.team=="MIA") | (dx.team=="BOS") | (dx.team=="OKC") | (dx.team=="NYK") | (dx.team=="CHI") ] dx[(dx.team=="SAS")] for j in range(len(mydf)): mydf.loc[mydf.index==j, 'gts'] = dx[dx.team==mydf.iloc[j].home]['gts'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['gts'].iloc[0] mydf.loc[mydf.index==j, 'wp'] = dx[dx.team==mydf.iloc[j].home]['wp_pageviews'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['wp_pageviews'].iloc[0] mydf.loc[mydf.index==j, 'tts'] = dx[dx.team==mydf.iloc[j].home]['TTS'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['TTS'].iloc[0] mydf.loc[mydf.index==j, 'unq'] = dx[dx.team==mydf.iloc[j].home]['UNQ'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['UNQ'].iloc[0] mydf.loc[mydf.index==j, 'fb_foll'] = dx[dx.team==mydf.iloc[j].home]['Followers_Facebook'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Followers_Facebook'].iloc[0] mydf.loc[mydf.index==j, 'tw_foll'] = dx[dx.team==mydf.iloc[j].home]['Followers_Twitter'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Followers_Twitter'].iloc[0] mydf.loc[mydf.index==j, 'inst_foll'] = dx[dx.team==mydf.iloc[j].home]['Followers_Instagram'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Followers_Instagram'].iloc[0] mydf.loc[mydf.index==j, 'snap_foll'] = dx[dx.team==mydf.iloc[j].home]['Followers_Snapchat'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Followers_Snapchat'].iloc[0] mydf.loc[mydf.index==j, 'wb_foll'] = dx[dx.team==mydf.iloc[j].home]['Followers_Weibo'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Followers_Weibo'].iloc[0] mydf.loc[mydf.index==j, 'fb_eng'] = dx[dx.team==mydf.iloc[j].home]['Engagements_Facebook'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Engagements_Facebook'].iloc[0] mydf.loc[mydf.index==j, 'tw_eng'] = dx[dx.team==mydf.iloc[j].home]['Engagements_Twitter'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Engagements_Twitter'].iloc[0] mydf.loc[mydf.index==j, 'inst_eng'] = dx[dx.team==mydf.iloc[j].home]['Engagements_Instagram'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Engagements_Instagram'].iloc[0] mydf.loc[mydf.index==j, 
'fb_imps'] = dx[dx.team==mydf.iloc[j].home]['Impressions_Facebook'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Impressions_Facebook'].iloc[0] mydf.loc[mydf.index==j, 'tw_imps'] = dx[dx.team==mydf.iloc[j].home]['Impressions_Twitter'].iloc[0] + dx[dx.team==mydf.iloc[j].away]['Impressions_Twitter'].iloc[0] mydf.head() mydf.shape social_feats = [None]*4 for i in range(4): j = str(i+1) social_feats[i] = [None]*14 mydf['gts_'+j] = mydf['gts'] mydf['wp_'+j] = mydf['wp'] mydf['tts_'+j] = mydf['tts'] mydf['unq_'+j] = mydf['unq'] mydf['fb_foll_'+j] = mydf['fb_foll'] mydf['inst_foll_'+j] = mydf['inst_foll'] mydf['tw_foll_'+j] = mydf['tw_foll'] mydf['snap_foll_'+j] = mydf['snap_foll'] mydf['wb_foll_'+j] = mydf['wb_foll'] mydf['fb_eng_'+j] = mydf['fb_eng'] mydf['inst_eng_'+j] = mydf['inst_eng'] mydf['tw_eng_'+j] = mydf['tw_eng'] mydf['fb_imps_'+j] = mydf['fb_imps'] mydf['tw_imps_'+j] = mydf['tw_imps'] social_feats[i] = ['gts_'+j, 'wp_'+j, 'tts_'+j, 'unq_'+j, 'fb_foll_'+j, 'inst_foll_'+j, 'tw_foll_'+j, 'snap_foll_'+j, 'wb_foll_'+j, 'fb_eng_'+j, 'inst_eng_'+j, 'tw_eng_'+j, 'fb_imps_'+j, 'tw_imps_'+j] mydf.shape mydf.head() !ls *.pkl modelfile = 'RF_model_Unique_Viewers_2.pkl' with open(modelfile, 'rb') as fd: model = pickle.load(fd) mydf1 = mydf[mydf.month==10] mydf2 = mydf[mydf.month==11] mydf3 = mydf[mydf.month==12] j=1 tgt_features = ['Unique_Viewers', 'norm_minutes', 'avg_markup_norm'] for f in tgt_features: modelfile = 'RF_model_'+f+'_'+str(j+1)+'.pkl' with open(modelfile, 'rb') as fd: model = pickle.load(fd) mydf1.loc[:, f] = model.predict(mydf1[social_feats[j]]) j=2 tgt_features = ['Unique_Viewers', 'norm_minutes', 'avg_markup_norm'] for f in tgt_features: modelfile = 'RF_model_'+f+'_'+str(j+1)+'.pkl' with open(modelfile, 'rb') as fd: model = pickle.load(fd) mydf2.loc[:, f] = model.predict(mydf2[social_feats[j]]) j=3 tgt_features = ['Unique_Viewers', 'norm_minutes', 'avg_markup_norm'] for f in tgt_features: modelfile = 'RF_model_'+f+'_'+str(j+1)+'.pkl' with open(modelfile, 'rb') as fd: model = pickle.load(fd) mydf3.loc[:, f] = model.predict(mydf3[social_feats[j]]) mydf_final = pd.concat([mydf1, mydf2, mydf3]) mydf_final.shape mydf_final.columns mydf_save = mydf_final[['day', 'home', 'away', 'gts', 'wp', 'tts', 'unq', 'fb_foll', 'tw_foll', 'inst_foll', 'snap_foll', 'wb_foll', 'fb_eng', 'tw_eng', 'inst_eng', 'fb_imps', 'tw_imps', 'Unique_Viewers', 'norm_minutes', 'avg_markup_norm']] mydf_save.to_csv('final_predictions.csv', index=None) def get_games_for_day(df, day): dfg = df[df.day==day][['home', 'away', 'Unique_Viewers', 'norm_minutes', 'avg_markup_norm']] print(dfg.head()) get_games_for_day(mydf_save, '2018-10-16') ```
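The per-game feature construction above looks each team up row by row. A vectorized alternative (a sketch, not the notebook's code) is to merge the team-level stats onto the schedule twice — once on `home`, once on `away` — and add the two, which scales better as more columns like `gts` and `wp_pageviews` are involved. The tiny frames below are made up but mirror the shapes of `mydf` and `dx`.

```
import pandas as pd

# Hypothetical small frames shaped like `mydf` (games) and `dx` (team stats) above.
games = pd.DataFrame({"home": ["BOS", "MIA"], "away": ["NYK", "CHI"]})
stats = pd.DataFrame({"team": ["BOS", "MIA", "NYK", "CHI"],
                      "gts": [10, 20, 30, 40],
                      "wp_pageviews": [1, 2, 3, 4]})

# Merge the team-level stats twice (home team, then away team) and add them,
# instead of looking rows up one by one in a Python loop.
home = games.merge(stats, left_on="home", right_on="team", how="left")
away = games.merge(stats, left_on="away", right_on="team", how="left")
for col in ["gts", "wp_pageviews"]:
    games[col] = home[col].values + away[col].values

print(games)
```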
``` import sys import gym import numpy as np import random import math from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values #create an instance of CliffWalking environment env = gym.make('CliffWalking-v0') print("Actions_space: {0}".format(env.action_space)) print("State Space: {0}".format(env.observation_space)) print("Action Space (env.action_space.n) {0}: ".format(env.action_space.n)) def epsilon_greedy(Q, state, nA, eps): if random.random() > eps: return np.argmax(Q[state]) else: return random.choice(np.arange(nA)) """ def update_Q_expected_sarsa(alpha, gamma, Q, \ eps, nA,\ state, action, reward, next_state=None): current = Q[state][action] #construct an epsilon-greedy policy policy_s = np.ones(nA) * ( eps / nA) #epsilon-greedy strategy for equiprobable selection policy_s[np.argmax(Q[state])] = 1 - eps + ( eps /nA) # epsilon-greedy strategy for greedy selection #In case of Expected SARSA , each state_action is multiplied with the probability next_reward = np.dot(Q[next_state] , policy_s) target = reward + gamma * next_reward new_current_reward = current + (alpha * ( target - current)) return new_current_reward """ def update_Q_expected_sarsa(alpha, gamma, Q, nA, eps, state,action, reward, next_state=None): #print("The state is : {0}".format(state)) #print("The action is : {0}".format(action)) current = Q[state][action] policy_s = get_probs(Q, eps, nA) Qsa_next = np.dot(Q[next_state] , policy_s) target = reward + gamma * Qsa_next new_value = current + (alpha * (target - current)) return new_value def get_probs(Q, epsilon, nA): """ obtains the action probabilities corresponding to epsilon-greedy policy """ policy_s = np.ones(nA) * (epsilon / nA) greedy_action = np.argmax(Q) policy_s[greedy_action] = 1 - epsilon + (epsilon / nA) return policy_s def expected_sarsa(env, num_episodes, alpha, gamma=1.0,plot_every=100): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) tmp_scores = deque(maxlen=plot_every) avg_scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() eps = 0.05 state = env.reset() score = 0 while True: action = epsilon_greedy(Q, state, env.nA, eps ) next_state, reward, done, info = env.step(action) score += reward Q[state][action] = update_Q_expected_sarsa(alpha, gamma, Q, env.nA, eps, state,action, reward, next_state) state = next_state if done: tmp_scores.append(score) # append score break if (i_episode % plot_every == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores)) return Q def expected_sarsa(env, num_episodes, alpha, gamma=1.0,max_steps_per_episode=100): # initialize empty dictionary of arrays Q = defaultdict(lambda: np.zeros(env.nA)) tmp_scores = deque(maxlen=max_steps_per_episode) avg_scores = deque(maxlen=num_episodes) # loop over episodes for i_episode in range(1, num_episodes+1): # monitor progress if i_episode % 100 == 0: print("\rEpisode {}/{}".format(i_episode, num_episodes), end="") sys.stdout.flush() eps = 0.05 state = env.reset() score = 0 for 
step in range(max_steps_per_episode): action = epsilon_greedy(Q, state, env.nA, eps ) next_state, reward, done, info = env.step(action) score += reward Q[state][action] = update_Q_expected_sarsa(alpha, gamma, Q, env.nA, eps, state,action, reward, next_state) state = next_state if done: tmp_scores.append(score) # append score break if (i_episode % max_steps_per_episode == 0): avg_scores.append(np.mean(tmp_scores)) # plot performance plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores)) plt.xlabel('Episode Number') plt.ylabel('Average Reward (Over Next %d Episodes)' % max_steps_per_episode) plt.show() # print best 100-episode performance print(('Best Average Reward over %d Episodes: ' % max_steps_per_episode), np.max(avg_scores)) return Q # obtain the estimated optimal policy and corresponding action-value function Q_expsarsa = expected_sarsa(env, 10000, 1) # print the estimated optimal policy policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12) check_test.run_check('td_control_check', policy_expsarsa) print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):") print(policy_expsarsa) # plot the estimated optimal state-value function plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)]) ```
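To make the Expected SARSA target used in `update_Q_expected_sarsa` easier to see in isolation, here is a standalone sketch (the numbers are made up) that computes the expectation of the next-state action values under the epsilon-greedy policy, checks it against an explicit sum, and contrasts it with the Q-learning target, which takes the greedy max instead.

```
import numpy as np

def epsilon_greedy_probs(q_values, eps):
    """Action probabilities of an epsilon-greedy policy for one state."""
    n_actions = len(q_values)
    probs = np.ones(n_actions) * eps / n_actions
    probs[np.argmax(q_values)] += 1.0 - eps
    return probs

q_next = np.array([0.5, -1.0, 2.0, 0.0])   # Q[next_state] for a 4-action problem
eps, gamma, reward = 0.05, 1.0, -1.0

probs = epsilon_greedy_probs(q_next, eps)
expected_q = np.dot(probs, q_next)          # E_pi[Q(s', a')] used by Expected SARSA
assert np.isclose(expected_q, sum(p * q for p, q in zip(probs, q_next)))

expected_sarsa_target = reward + gamma * expected_q
q_learning_target = reward + gamma * q_next.max()   # Q-learning uses the greedy max instead
print(expected_sarsa_target, q_learning_target)
```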
# *Welcome to our page presenting the "process" function of the various Mindstorm robotics platforms*

![lego1](img\lego1.jpg "lego mindstorm")

### Here you will find all the information about the control bricks of the Lego Mindstorm EV3.

The EV3 control brick is a small Linux computer with a small screen, able to carry out simple actions with a minimal configuration:

- Operating system – Linux
- ARM9 processor, 300 MHz
- Flash memory – 16 MB
- RAM – 64 MB
- Screen resolution – 178x128 / black & white
- USB 2.0 communication to a PC – up to 480 Mbit/s
- USB 1.1 communication – up to 12 Mbit/s
- MicroSD card – SDHC compatible, version 2.0, max. 32 GB
- Motor and sensor ports
- Connectors – RJ12, Auto ID compatible
- Power – 6 AA batteries (rechargeable)

![fdfsdf](img/lego2.jpg "control brick")

### How the control brick works:

The brick is programmed from a computer and handles all the information itself thanks to its processor. Several different sensors can be mounted on the robot, which opens up an endless number of building possibilities. With all these sensors and the right program, the robot can carry out several actions autonomously.

# Programming the brick

### The software:

The Lego Mindstorm is programmed with a dedicated piece of software provided free of charge.

![fdfsdf](img/lego5.png "control brick")

It is a graphical programming environment. There is also a third-party tool called [enchanting](http://enchanting.robotclub.ab.ca/tiki-index.php "official site").

![fdfsdf](img/System-1.png "control brick")

## The previous-generation brick, the NXT

![fdfsdf](img/lego6.jpg "control brick")

It is the little sister of the EV3 brick and offers fewer features, such as the Bluetooth connection that lets the EV3 be programmed from a tablet or smartphone.

![fdfsdf](img/lego7.jpg "control brick")

Its age also shows in its programming software, which is less polished and more technical. The two programs do have one thing in common: programming is done piece of equipment by piece of equipment, with blocks, as in Scratch.

The NXT brick can also be programmed with [enchanting](http://enchanting.robotclub.ab.ca/tiki-index.php "official site").

## _The goal of the project_

The goal of our project is to take part in Robofesta, a competition with two events: a rescue event and a choreography event.
``` import cv2 import numpy as np import os import math from scipy.spatial import distance as dist from collections import OrderedDict from scipy.optimize import linear_sum_assignment from kalman_utils.KFilter import * from filterpy.kalman import KalmanFilter, UnscentedKalmanFilter, MerweScaledSigmaPoints from filterpy.common import Q_discrete_white_noise a = [[170, 175, 196, 209], [150, 557, 174, 577], [625, 194, 640, 209], [170, 175, 196, 209], [173, 225, 202, 253], [435, 526, 476, 568], [435, 576, 476, 603]] b = np.array([[170, 175, 196, 209], [150, 557, 174, 577], [625, 194, 640, 209], [170, 175, 196, 209], [173, 225, 202, 253], [435, 526, 476, 568], [435, 576, 476, 603]]) no_ball_box = list(set([tuple(set(i)) for i in a])) no_ball_box for i in a: print(set(i)) for i in a: print(set(i)) new_array = [tuple(row) for row in b] c = np.unique(new_array) c new_array ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'].index("tennis racket") for i in range(640, 1080): a = i if ((9 * a)/16).is_integer(): print("x : ",i) print("y : ",(9 * a)/16) print("total_pixel : ", a * (9 * a)/16 *2 ) 1276,600 x_meter2pix = 23.77 / x_pix_length y_meter2pix = 10.97 / y_pix_length y_pix_length, x_pix_length = 600, 1276 x_meter2pix * 1500 810 * y_meter2pix (1500 - 1276)/3 def trans_xy(img_ori, point_list): for i in range(len(point_list[1])): x_cen, y_cen = point_list[1][i] point_list[1][i][1] = y_cen - (img_ori.shape[0] / 2) return point_list img = np.zeros([752, 423 * 2, 3]) trans_xy(img,a) def cal_ball_position(ball_stats_list): net_length = 13.11 post_hegith_avg = 1.125 for i in range(len(ball_stats_list)): ball_distance_list = ball_stats_list[i][0] ball_height_list = ball_stats_list[i][1] height = sum(ball_height_list) / 2 - post_hegith_avg if sum(ball_distance_list) < 13: return [np.nan, np.nan, np.nan] ball2net_length_x_L = ball_distance_list[0] * np.sin(theta_L) ball_position_y_L = ball_distance_list[0] * np.cos(theta_L) ball_plate_angle_L = np.arcsin(height / ball2net_length_x_L) ball_position_x_L = ball2net_length_x_L * np.cos(ball_plate_angle_L) ball2net_length_x_R = ball_distance_list[1] * np.sin(theta_R) ball_position_y_R = ball_distance_list[1] * np.cos(theta_R) ball_plate_angle_R = np.arcsin(height / ball2net_length_x_R) ball_position_x_R = ball2net_length_x_R * np.cos(ball_plate_angle_R) """print("theta_L, theta_R : ", np.rad2deg(self.theta_L), np.rad2deg(self.theta_R)) print("ball_plate_angle_L, ball_plate_angle_R : ", np.rad2deg(ball_plate_angle_L), np.rad2deg(ball_plate_angle_R)) print([-ball_position_x_L, ball_position_y_L - 6.4, height + 1]) print([-ball_position_x_R, 6.4 - ball_position_y_R, height + 1])""" if theta_L > theta_R: ball_position_y = ball_position_y_L - (net_length / 2) else : 
ball_position_y = (net_length / 2) - ball_position_y_R return [-ball_position_x_L, ball_position_y, height + post_hegith_avg] def get_depth_height(L_pos, R_pos): depth_height = [] cx = 360 cy = 204 focal_length = 320.754 net_length = 13.11 post_hegith_left = 1.13 post_hegith_right = 1.12 for i in range(len(L_pos)): x_L, y_L = L_pos[i][0] - cx, L_pos[i][1] - cy for j in range(len(R_pos)): x_R, y_R = R_pos[j][0] - cx, R_pos[j][1] - cy c_L = np.sqrt(focal_length ** 2 + x_L ** 2 + y_L ** 2) a_L = np.sqrt(focal_length ** 2 + x_L ** 2) if x_L < 0: th_L = 0.785398 + np.arccos(focal_length / a_L) else : th_L = 0.785398 - np.arccos(focal_length / a_L) b_L = a_L * np.cos(th_L) c_R = np.sqrt(focal_length ** 2 + x_R ** 2 + y_R ** 2) a_R = np.sqrt(focal_length ** 2 + x_R ** 2) if x_R > 0: th_R = 0.785398 + np.arccos(focal_length / a_R) else : th_R = 0.785398 - np.arccos(focal_length / a_R) b_R = a_R * np.cos(th_R) theta_L = np.arccos(b_L/c_L) theta_R = np.arccos(b_R/c_R) D_L = net_length * np.sin(theta_R) / np.sin(3.14 - (theta_L + theta_R)) D_R = net_length * np.sin(theta_L) / np.sin(3.14 - (theta_L + theta_R)) height_L = abs(D_L * np.sin(np.arcsin(y_L/c_L))) height_R = abs(D_R * np.sin(np.arcsin(y_R/c_R))) #height_L = abs(D_L * np.sin(np.arctan(y_L/a_L))) #height_R = abs(D_R * np.sin(np.arctan(y_R/a_R))) if y_L < 0: height_L += post_hegith_left else: height_L -= post_hegith_left if y_R < 0: height_R += post_hegith_right else: height_R -= post_hegith_right print(L_pos[i],R_pos[j]) print([D_L, D_R, height_L, height_R]) depth_height.append([[D_L, D_R], [height_L, height_R]]) return depth_height ball_cen_left = [[260, 162]] ball_cen_right = [[351, 167]] ball_stats_list = get_depth_height(ball_cen_left,ball_cen_right) cal_ball_position(ball_stats_list) def get_ball_pos(L_pos, R_pos): depth_height = [] cx = 360 cy = 204 focal_length = 320.754 net_length = 13.11 post_hegith_left = 1.13 post_hegith_right = 1.12 post_hegith_avg = (post_hegith_left + post_hegith_right) / 2 for i in range(len(L_pos)): x_L, y_L = L_pos[i][0] - cx, L_pos[i][1] - cy for j in range(len(R_pos)): x_R, y_R = R_pos[j][0] - cx, R_pos[j][1] - cy c_L = np.sqrt(focal_length ** 2 + x_L ** 2 + y_L ** 2) a_L = np.sqrt(focal_length ** 2 + x_L ** 2) if x_L < 0: th_L = 0.785398 + np.arccos(focal_length / a_L) else : th_L = 0.785398 - np.arccos(focal_length / a_L) b_L = a_L * np.cos(th_L) c_R = np.sqrt(focal_length ** 2 + x_R ** 2 + y_R ** 2) a_R = np.sqrt(focal_length ** 2 + x_R ** 2) if x_R > 0: th_R = 0.785398 + np.arccos(focal_length / a_R) else : th_R = 0.785398 - np.arccos(focal_length / a_R) b_R = a_R * np.cos(th_R) theta_L = np.arccos(b_L/c_L) theta_R = np.arccos(b_R/c_R) D_L = net_length * np.sin(theta_R) / np.sin(3.14 - (theta_L + theta_R)) D_R = net_length * np.sin(theta_L) / np.sin(3.14 - (theta_L + theta_R)) height_L = abs(D_L * np.sin(np.arcsin(y_L/c_L))) height_R = abs(D_R * np.sin(np.arcsin(y_R/c_R))) #height_L = abs(D_L * np.sin(np.arctan(y_L/a_L))) #height_R = abs(D_R * np.sin(np.arctan(y_R/a_R))) if y_L < 0: height_L += post_hegith_left else: height_L -= post_hegith_left if y_R < 0: height_R += post_hegith_right else: height_R -= post_hegith_right ball_height_list = [height_L, height_R] ball_distance_list = [D_L, D_R] height = sum(ball_height_list) / 2 - post_hegith_avg ball2net_length_x_L = ball_distance_list[0] * np.sin(theta_L) ball_position_y_L = ball_distance_list[0] * np.cos(theta_L) ball_plate_angle_L = np.arcsin(height / ball2net_length_x_L) ball_position_x_L = ball2net_length_x_L * np.cos(ball_plate_angle_L) 
ball2net_length_x_R = ball_distance_list[1] * np.sin(theta_R) ball_position_y_R = ball_distance_list[1] * np.cos(theta_R) ball_plate_angle_R = np.arcsin(height / ball2net_length_x_R) ball_position_x_R = ball2net_length_x_R * np.cos(ball_plate_angle_R) if theta_L > theta_R: ball_position_y = ball_position_y_L - (net_length / 2) else : ball_position_y = (net_length / 2) - ball_position_y_R print(L_pos[i],R_pos[j]) #print([D_L, D_R, height_L, height_R]) print([-ball_position_x_L, ball_position_y, height + post_hegith_avg]) depth_height.append([[D_L, D_R], [height_L, height_R]]) return [-ball_position_x_L, ball_position_y, height + post_hegith_avg] ball_cen_left = [[298, 153]] ball_cen_right = [[319, 160]] ball_stats_list = get_ball_pos(ball_cen_left,ball_cen_right) def check_vel_noise(): y_vel_list = np.array(esti_ball_val_list)[:,1] if len(y_vel_list) > 3 : vel_mean = np.mean(y_vel_list) if abs(abs(vel_mean) - abs(y_vel_list[-1])) > 2: vel_mean = np.mean(y_vel_list[:-1]) esti_ball_val_list[-1][1] = vel_mean return esti_ball_val_list[-1] else: return esti_ball_val_list[-1] def cal_landing_point(pos): t_list = [] #vel = self.check_vel_noise() x0, y0, z0 = pos[0], pos[1], pos[2] vx, vy, vz = vel[0], vel[1], vel[2] a = -((0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vz ** 2 ) / 0.057 + 9.8 / 2 ) b = vz c = z0 t_list.append((-b + np.sqrt(b ** 2 - 4 * a * c))/(2 * a)) t_list.append((-b - np.sqrt(b ** 2 - 4 * a * c))/(2 * a)) t = max(t_list) x = np.array(x0 + vx * t - (0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vx ** 2 ) * (t ** 2) / 0.057,float) y = np.array(y0 + vy * t - (0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vy ** 2 ) * (t ** 2) / 0.057,float) z = np.array(z0 + vz * t - ((0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vz ** 2 ) / 0.057 + 9.8 / 2) * (t ** 2),float) return [np.round(x,3), np.round(y,3), np.round(z,3)] cal_landing_point(cal_landing_point) for i in range(3): for j in range(4): print(j) if j == 3: break class Ball_Pos_Estimation(): def __init__(self): self.pre_ball_cen_left_list = [] self.pre_ball_cen_right_list = [] def check_ball_move_update(self, ball_cen_left_list, ball_cen_right_list): self.swing_check = True left_flag = False right_flag = False self.ball_cen_left_list = ball_cen_left_list self.ball_cen_right_list = ball_cen_right_list if len(self.pre_ball_cen_left_list) or len(self.pre_ball_cen_right_list): for i in range(len(self.ball_cen_left_list)): if left_flag: break x_cen = self.ball_cen_left_list[i][0] for j in range(len(self.pre_ball_cen_left_list)): pre_x_cen = self.pre_ball_cen_left_list[j][0] if x_cen > pre_x_cen: self.pre_ball_cen_left_list = self.ball_cen_left_list left_flag = True break for i in range(len(self.ball_cen_right_list)): if right_flag: break x_cen = self.ball_cen_right_list[i][0] for j in range(len(self.pre_ball_cen_right_list)): pre_x_cen = self.pre_ball_cen_right_list[j][0] if x_cen < pre_x_cen: self.pre_ball_cen_right_list = self.ball_cen_right_list right_flag = True break if left_flag == False and right_flag == False : self.pre_ball_cen_left_list = [] self.pre_ball_cen_right_list = [] self.swing_check = False return False return True else: self.pre_ball_cen_left_list = self.ball_cen_left_list self.pre_ball_cen_right_list = self.ball_cen_right_list self.swing_check = False return True estimation_ball = Ball_Pos_Estimation() ball_cen_left = [[419, 151]] ball_cen_right = [[201, 153]] if estimation_ball.check_ball_move_update(ball_cen_left, ball_cen_right): pass print(estimation_ball.swing_check) ball_cen_left = [[392, 160]] ball_cen_right = 
[[223, 160]] if estimation_ball.check_ball_move_update(ball_cen_left, ball_cen_right): pass print(estimation_ball.swing_check) ball_cen_left = [[281, 194], [716, 230]] ball_cen_right = [] estimation_ball.check_ball_move_update(ball_cen_left, ball_cen_right) ball_pos_list = [np.nan,np.nan,np.nan] np.isnan(ball_pos_list[0]) a = 2 if a == 1: print(1) elif a == 2: print(2) def fx(x, dt): # state transition function - predict next state based # on constant velocity model x = vt + x_0 F = np.matrix([[1.0, 0.0, 0.0, dt, 0.0, 0.0, 1/2.0*dt**2, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, dt, 0.0, 0.0, 1/2.0*dt**2, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0, dt, 0.0, 0.0, 1/2.0*dt**2], [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, dt, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, dt, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, dt], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]]) return np.dot(F,x) def hx(x): # measurement function - convert state into a measurement # where measurements are [x_pos, y_pos] H = np.matrix([[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]) return np.array([x[0], x[3], x[6]]) class UK_filter(): def __init__(self, dt, std_acc, x_std_meas, y_std_meas,z_std_meas, init_x, init_y, init_z): self.init_x = init_x self.init_y = init_y self.init_z = init_z self.dt = dt self.z_std = 0.1 self.points = MerweScaledSigmaPoints(9, alpha=.1, beta=2., kappa=-1) self.f = UnscentedKalmanFilter(dim_x=9, dim_z=3, dt=self.dt, fx=fx, hx=hx, points=self.points) self.f.x = np.array([self.init_x,0,0,self.init_y,0,0, self.init_z, 0,0]) self.f.P = np.eye(9) self.f.Q = np.matrix([[(dt**6)/36, 0, 0, (dt**5)/12, 0, 0, (dt**4)/6, 0, 0], [0, (dt**6)/36, 0, 0, (dt**5)/12, 0, 0, (dt**4)/6, 0], [0, 0, (dt**6)/36, 0, 0, (dt**5)/12, 0, 0, (dt**4)/6], [(dt**5)/12, 0, 0, (dt**4)/4, 0, 0, (dt**3)/2, 0, 0], [0, (dt**5)/12, 0, 0, (dt**4)/4, 0, 0, (dt**3)/2, 0], [0, 0, (dt**5)/12, 0, 0, (dt**4)/4, 0, 0, (dt**3)/2], [(dt**4)/6, 0, 0, (dt**3)/2, 0, 0, (dt**2), 0, 0], [0, (dt**4)/6, 0, 0, (dt**3)/2, 0, 0, (dt**2), 0], [0, 0, (dt**4)/6, 0, 0, (dt**3)/2, 0, 0, (dt**2)]]) *std_acc**2 #self.f.Q = Q_discrete_white_noise(2, dt = self.dt, var = 0.01**2, block_size = 2) self.f.R = np.array([[x_std_meas**2, 0, 0], [0, y_std_meas**2, 0], [0, 0, z_std_meas**2]]) #self.f.predict() a = UK_filter(dt = 0.1, std_acc = 10, x_std_meas = 1, y_std_meas = 1,z_std_meas = 1, init_x = 0, init_y = 0, init_z = 0) a.f.predict() a.f.update([1,1,1]) a.f.x.reshape([3,3]) a.f.update([2,2,2]) a.f.update([10,10,10]) a.f.update([20,20,20]) a.f.update([30,30,30]) ball_cand_trajectory = [[], [[[-8.499031225886979, 0.311750401425118, 1.404671677765355]]], [[[-7.654383459951477, 0.37339492038427213, 1.528884790477008]]], [[[-6.812550514254101, 0.4358689710714039, 1.6515142171759307]]], [[[-6.005829995239098, 0.5240465335704965, 1.7282085158171043]]], [[[-5.174512338451282, 0.5984737933774342, 1.802634978706663]]], [[[-4.387167345972433, 0.6819632256971389, 1.8761140620728325]]], [[[-3.625711866264724, 0.7480947422039854, 1.9102820296223895]]], [[[-2.8421647085007566, 0.8618655406628255, 1.950691295218776]]], [[[-2.058537458662639, 1.064584863759384, 1.9592570246492662]]], [[]], [[]], [[]]] len(ball_cand_trajectory) ball_cand_trajectory[3][0][0] ball_pos_list = [[-1,0,0], [1,3,4],[2,4,5]] b = np.array(ball_pos_list) b[:,0].argmin() b[:][0] a = [0, 0, 0] b = [1, 2, 3] def get_distance(point_1, point_2): return 
(np.sqrt((point_2[0]-point_1[0])**2 + (point_2[1]-point_1[1])**2 + (point_2[2]-point_1[2])**2)) get_distance(a,b) ball_pos_list = [[-7.923465928004007, -0.6755867599611189, 2.580941671512611]] x_pos, y_pos, z_pos = ball_pos_list[np.array(ball_pos_list)[:,0].argmin()] np.array(ball_pos_list)[:,0].argmin() x_pos, y_pos, z_pos a = np.array([[1,2,3],[0,-5,0],[-8,0,0]]) a.append([[1,2],[3,4]]) a from kalman_utils.KFilter import * dT = 1 / 25 ball_pos = [-9.285799665284836, -1.5959832449913565, 2.874695965876609] kf = Kalman_filiter(ball_pos[0], ball_pos[1], ball_pos[2], dT) kf.get_predict() ball_pos_list = [[-8.3324296128703, -1.426689529754115, 2.8019436403665923],[-7.459506668999277, -1.286501720742379, 2.720148095378055],[-6.540641694266555, -1.135716227096026, 2.6019241593327513],[-5.68329300302514, -0.9730271651650142, 2.519130845990981]] kf.update(ball_pos[0], ball_pos[1], ball_pos[2], dT) ball_pos = ball_pos_list.pop(0) kf.update(ball_pos[0], ball_pos[1], ball_pos[2], dT) ball_pos = ball_pos_list.pop(0) kf.update(ball_pos[0], ball_pos[1], ball_pos[2], dT) ball_pos = ball_pos_list.pop(0) kf.update(ball_pos[0], ball_pos[1], ball_pos[2], dT) kf.predict(dT) kf.predict(dT) kf.get_predict() kf.KF.getPostState() np .array(ball_pos_list)[:,1][:-1] np.mean(np.array(ball_pos_list)[:,1][:-1]) a = [[[-9.039131345617234, -0.8137119203553631, 2.7444703070749017]], [[-7.900829857194978, -0.654493068471937, 2.6104207999239812]], [[-6.849655790366586, -0.5241522117656086, 2.4875451129360613]], [[-5.793390039377407, -0.39658750289812605, 2.3529254544909817]]] b = np.array(a).reshape([-1,3]) b sum(np.diff(b[:,1])) / 0.04 for i in []: print(1) ball_cand_pos = [[-9.03913135, -0.81371192, 2.74447031], [-7.90082986, -0.65449307, 2.6104208 ], [-6.84965579, -0.52415221, 2.48754511], [-5.79339004, -0.3965875 , 2.35292545]] del_list = [1,2,3] a= np.array(ball_cand_pos) np.delete(a,del_list,axis = 0) ball_cand_pos.pop(tuple(del_list)) tuple(del_list) if 3 == 3: print(1) estimation_ball_trajectory_list = np.array([[-8.075992233062022, -2.0712591029119727, 2.0143476038492176], [-7.041614424760605, -1.9654479963268496, 1.998508488845969], [-6.065044696824322, -1.879832764271466, 1.9635872373067076], [-5.107183755941063, -1.7968180783990304, 1.925614986159784], [-4.169832727705863, -1.6838107370497974, 1.884759164010521], [-3.2611958072772738, -1.5644043096331428, 1.81101188769961], [-2.368649316849956, -1.4805807124989778, 1.737028408823873], [-1.4834200661316506, -1.378965700223568, 1.6496367839315553], [-0.7006541039923906, -1.2862463955187806, 1.5580097756344515]]) estimation_ball_trajectory_list x_pos_list = estimation_ball_trajectory_list[:,0] y_pos_list = estimation_ball_trajectory_list[:,1] z_pos_list = estimation_ball_trajectory_list[:,2] np.diff(estimation_ball_trajectory_list) x_pos_list np.diff(x_pos_list)[-1] def cal_landing_point(pos_list): t_list = [] if len(pos_list) < 4 : return [np.nan, np.nan, np.nan] pos = pos_list[-1] x0, y0, z0 = pos[0], pos[1], pos[2] vx, vy, vz = get_velocity(pos_list) a = -((0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vz ** 2 ) / 0.057 + 9.8 / 2 ) b = vz c = z0 t_list.append((-b + np.sqrt(b ** 2 - 4 * a * c))/(2 * a)) t_list.append((-b - np.sqrt(b ** 2 - 4 * a * c))/(2 * a)) t = max(t_list) x = np.array(x0 + vx * t - (0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vx ** 2 ) * (t ** 2) / 0.057,float) y = np.array(y0 + vy * t - (0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vy ** 2 ) * (t ** 2) / 0.057,float) z = np.array(z0 + vz * t - ((0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 
2) * vz ** 2 ) / 0.057 + 9.8 / 2) * (t ** 2),float) return [np.round(x,3), np.round(y,3), np.round(z,3)] def get_velocity(pos_list): t = 1 / 30 np_pos_list = np.array(pos_list) x_pos_list = np_pos_list[:,0] y_pos_list = np_pos_list[:,1] z_pos_list = np_pos_list[:,2] vel_x_list = np.diff(x_pos_list) / t vel_y_list = np.diff(y_pos_list) / t vel_z_list = np.diff(z_pos_list) / t return vel_x_list[-1], vel_y_list[-1], vel_z_list[-1] ball_pos_jrajectory = [[-7.654350032583985, 0.37375046201544926, 1.5602039272816657], [-6.812516855329211, 0.4362023210314696, 1.6809758292885233], [-6.005632695613204, 0.5230876456730043, 1.7703496818844444], [-5.17434312120148, 0.5969135946437563, 1.842121410014573], [-4.40319049584631, 0.6610575247176333, 1.90162399321874], [-3.6256202508366666, 0.7485738124651196, 1.9551274411299535], [-2.8420151676001075, 0.8677493333923483, 1.980619130934988], [-2.058931199304543, 0.9710952377297057, 1.9892246369063118], [-1.3541772365570068, 1.0641107559204102, 1.9812870025634766], [-0.7198931574821472, 1.1478245258331299, 1.9584616422653198], [-0.14903749525547028, 1.223166823387146, 1.9222372770309448]] ball_pos_jrajectory cal_landing_point(ball_pos_jrajectory) a = np.array([ -3.1019, -2.3294, -1.513]) np.mean(np.diff(a))/(1/25) np.diff(a)/(1/25) x0, y0, z0 = -0.14778512716293335, -3.731870412826538, 2.6436607837677 vx, vy, vz = 13.437911868095398, 0.15355348587036133, 0.08598566055297852 t_list = [] a = -((0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vz ** 2 ) / 0.057 + 9.8 / 2 ) b = vz c = z0 t_list.append((-b + np.sqrt(b ** 2 - 4 * a * c))/(2 * a)) t_list.append((-b - np.sqrt(b ** 2 - 4 * a * c))/(2 * a)) t = max(t_list) drag_x = (0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vx ** 2 ) drag_y = (0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vy ** 2 ) drag_z = (0.5 * 0.507 * 1.2041 * np.pi * (0.033 ** 2) * vz ** 2 ) drag_x = 0 drag_y = 0 drag_z = 0 x = np.array(x0 + vx * t - drag_x * (t ** 2) / 0.057,float) y = np.array(y0 + vy * t - drag_y * (t ** 2) / 0.057,float) z = np.array(z0 + vz * t - (drag_z / 0.057 + 9.8 / 2) * (t ** 2),float) [np.round(x,3), np.round(y,3), np.round(z,3)] t_list ```
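The landing-point computation above solves a quadratic in t for the moment the ball reaches the ground and then propagates x and y to that time. Here is a cleaned-up sketch of that step, using the same simplified drag model and constants as the notebook (ball mass 0.057 kg, radius 0.033 m); the function name and the sample position/velocity are made up.

```
import numpy as np

G = 9.8            # gravity (m/s^2)
MASS = 0.057       # tennis ball mass (kg), as used above
RADIUS = 0.033     # ball radius (m)
DRAG_K = 0.5 * 0.507 * 1.2041 * np.pi * RADIUS ** 2   # same drag constant as the notebook

def landing_point(pos, vel):
    """Solve z(t) = z0 + vz*t - (drag_z/m + g/2)*t^2 = 0 for the positive root,
    then propagate x and y to that time (same simplified model as cal_landing_point)."""
    x0, y0, z0 = pos
    vx, vy, vz = vel
    a = -(DRAG_K * vz ** 2 / MASS + G / 2.0)
    b, c = vz, z0
    roots = np.roots([a, b, c])
    t = max(r.real for r in roots if np.isreal(r) and r.real > 0)  # time of impact
    x = x0 + vx * t - DRAG_K * vx ** 2 * t ** 2 / MASS
    y = y0 + vy * t - DRAG_K * vy ** 2 * t ** 2 / MASS
    return round(x, 3), round(y, 3), t

print(landing_point(pos=(-0.15, -3.73, 2.64), vel=(13.4, 0.15, 0.09)))
```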
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. ![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-publish-and-run-using-rest-endpoint.png) # How to Publish a Pipeline and Invoke the REST endpoint In this notebook, we will see how we can publish a pipeline and then invoke the REST endpoint. ## Prerequisites and Azure Machine Learning Basics If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration Notebook](https://aka.ms/pl-config) first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc. ### Initialization Steps ``` import azureml.core from azureml.core import Workspace, Datastore, Experiment, Dataset from azureml.core.compute import AmlCompute from azureml.core.compute import ComputeTarget # Check core SDK version number print("SDK version:", azureml.core.VERSION) from azureml.data.data_reference import DataReference from azureml.pipeline.core import Pipeline, PipelineData from azureml.pipeline.steps import PythonScriptStep from azureml.pipeline.core.graph import PipelineParameter print("Pipeline SDK-specific imports completed") ws = Workspace.from_config() print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n') # Default datastore (Azure blob storage) # def_blob_store = ws.get_default_datastore() def_blob_store = Datastore(ws, "workspaceblobstore") print("Blobstore's name: {}".format(def_blob_store.name)) ``` ### Compute Targets #### Retrieve an already attached Azure Machine Learning Compute ``` from azureml.core.compute_target import ComputeTargetException aml_compute_target = "cpu-cluster" try: aml_compute = AmlCompute(ws, aml_compute_target) print("found existing compute target.") except ComputeTargetException: print("creating new compute target") provisioning_config = AmlCompute.provisioning_configuration(vm_size = "STANDARD_D2_V2", min_nodes = 1, max_nodes = 4) aml_compute = ComputeTarget.create(ws, aml_compute_target, provisioning_config) aml_compute.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20) # For a more detailed view of current Azure Machine Learning Compute status, use get_status() # example: un-comment the following line. # print(aml_compute.get_status().serialize()) ``` ## Building Pipeline Steps with Inputs and Outputs A step in the pipeline can take [dataset](https://docs.microsoft.com/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) as input. This dataset can be a data source that lives in one of the accessible data locations, or intermediate data produced by a previous step in the pipeline. ``` # Uploading data to the datastore data_path = def_blob_store.upload_files(["./20news.pkl"], target_path="20newsgroups", overwrite=True) # Reference the data uploaded to blob storage using file dataset # Assign the datasource to blob_input_data variable blob_input_data = Dataset.File.from_files(data_path).as_named_input("test_data") print("Dataset created") # Define intermediate data using PipelineData processed_data1 = PipelineData("processed_data1",datastore=def_blob_store) print("PipelineData object created") ``` #### Define a Step that consumes a dataset and produces intermediate data. In this step, we define a step that consumes a dataset and produces intermediate data. 
**Open `train.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** The best practice is to use separate folders for scripts and its dependent files for each step and specify that folder as the `source_directory` for the step. This helps reduce the size of the snapshot created for the step (only the specific folder is snapshotted). Since changes in any files in the `source_directory` would trigger a re-upload of the snapshot, this helps keep the reuse of the step when there are no changes in the `source_directory` of the step. ``` # trainStep consumes the datasource (Datareference) in the previous step # and produces processed_data1 source_directory = "publish_run_train" trainStep = PythonScriptStep( script_name="train.py", arguments=["--input_data", blob_input_data, "--output_train", processed_data1], inputs=[blob_input_data], outputs=[processed_data1], compute_target=aml_compute, source_directory=source_directory ) print("trainStep created") ``` #### Define a Step that consumes intermediate data and produces intermediate data In this step, we define a step that consumes an intermediate data and produces intermediate data. **Open `extract.py` in the local machine and examine the arguments, inputs, and outputs for the script. That will give you a good sense of why the script argument names used below are important.** ``` # extractStep to use the intermediate data produced by step4 # This step also produces an output processed_data2 processed_data2 = PipelineData("processed_data2", datastore=def_blob_store) source_directory = "publish_run_extract" extractStep = PythonScriptStep( script_name="extract.py", arguments=["--input_extract", processed_data1, "--output_extract", processed_data2], inputs=[processed_data1], outputs=[processed_data2], compute_target=aml_compute, source_directory=source_directory) print("extractStep created") ``` #### Define a Step that consumes multiple intermediate data and produces intermediate data In this step, we define a step that consumes multiple intermediate data and produces intermediate data. ### PipelineParameter This step also has a [PipelineParameter](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.pipelineparameter?view=azure-ml-py) argument that help with calling the REST endpoint of the published pipeline. ``` # We will use this later in publishing pipeline pipeline_param = PipelineParameter(name="pipeline_arg", default_value=10) print("pipeline parameter created") ``` **Open `compare.py` in the local machine and examine the arguments, inputs, and outputs for the script. 
That will give you a good sense of why the script argument names used below are important.** ``` # Now define step6 that takes two inputs (both intermediate data), and produce an output processed_data3 = PipelineData("processed_data3", datastore=def_blob_store) source_directory = "publish_run_compare" compareStep = PythonScriptStep( script_name="compare.py", arguments=["--compare_data1", processed_data1, "--compare_data2", processed_data2, "--output_compare", processed_data3, "--pipeline_param", pipeline_param], inputs=[processed_data1, processed_data2], outputs=[processed_data3], compute_target=aml_compute, source_directory=source_directory) print("compareStep created") ``` #### Build the pipeline ``` pipeline1 = Pipeline(workspace=ws, steps=[compareStep]) print ("Pipeline is built") ``` ## Run published pipeline ### Publish the pipeline ``` published_pipeline1 = pipeline1.publish(name="My_New_Pipeline", description="My Published Pipeline Description", continue_on_step_failure=True) published_pipeline1 ``` Note: the continue_on_step_failure parameter specifies whether the execution of steps in the Pipeline will continue if one step fails. The default value is False, meaning when one step fails, the Pipeline execution will stop, canceling any running steps. ### Publish the pipeline from a submitted PipelineRun It is also possible to publish a pipeline from a submitted PipelineRun ``` # submit a pipeline run pipeline_run1 = Experiment(ws, 'Pipeline_experiment').submit(pipeline1) # publish a pipeline from the submitted pipeline run published_pipeline2 = pipeline_run1.publish_pipeline(name="My_New_Pipeline2", description="My Published Pipeline Description", version="0.1", continue_on_step_failure=True) published_pipeline2 ``` ### Get published pipeline You can get the published pipeline using **pipeline id**. To get all the published pipelines for a given workspace(ws): ```css all_pub_pipelines = PublishedPipeline.get_all(ws) ``` ``` from azureml.pipeline.core import PublishedPipeline pipeline_id = published_pipeline1.id # use your published pipeline id published_pipeline = PublishedPipeline.get(ws, pipeline_id) published_pipeline ``` ### Run published pipeline using its REST endpoint [This notebook](https://aka.ms/pl-restep-auth) shows how to authenticate to AML workspace. ``` from azureml.core.authentication import InteractiveLoginAuthentication import requests auth = InteractiveLoginAuthentication() aad_token = auth.get_authentication_header() rest_endpoint1 = published_pipeline.endpoint print("You can perform HTTP POST on URL {} to trigger this pipeline".format(rest_endpoint1)) # specify the param when running the pipeline response = requests.post(rest_endpoint1, headers=aad_token, json={"ExperimentName": "My_Pipeline1", "RunSource": "SDK", "ParameterAssignments": {"pipeline_arg": 45}}) try: response.raise_for_status() except Exception: raise Exception('Received bad response from the endpoint: {}\n' 'Response Code: {}\n' 'Headers: {}\n' 'Content: {}'.format(rest_endpoint, response.status_code, response.headers, response.content)) run_id = response.json().get('Id') print('Submitted pipeline run: ', run_id) ``` # Next: Data Transfer The next [notebook](https://aka.ms/pl-data-trans) will showcase data transfer steps between different types of data stores.
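As a possible follow-up (not part of the original notebook), the run id returned by the REST call can be wrapped in a `PipelineRun` object so the run can be monitored from the SDK. The experiment name below is the `ExperimentName` used in the POST request above, and `run_id` is the value parsed from the response.

```
from azureml.core import Experiment
from azureml.pipeline.core import PipelineRun

# run_id comes from the JSON response of the POST request above
experiment = Experiment(ws, "My_Pipeline1")
pipeline_run = PipelineRun(experiment, run_id)

# Block until the pipeline run finishes, streaming step output
pipeline_run.wait_for_completion(show_output=True)
```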
## LIDAR to 2D grid map example

This simple tutorial shows how to read LIDAR (range) measurements from a file and convert them to an occupancy grid.

Occupancy grid maps (_Hans Moravec, A.E. Elfes: High resolution maps from wide angle sonar, Proc. IEEE Int. Conf. Robotics Autom. (1985)_) are a popular, probabilistic approach to represent the environment. The grid is basically a discrete representation of the environment, which shows if a grid cell is occupied or not. Here the map is represented as a `numpy array`: numbers close to 1 mean the cell is occupied (_marked with red on the next image_), numbers close to 0 mean it is free (_marked with green_). The grid can also represent unknown (unobserved) areas, whose values are close to 0.5.

![Example](grid_map_example.png)

In order to construct the grid map from the measurements we need to discretise the values. But first, we need to `import` some necessary packages.

```
import math
import numpy as np
import matplotlib.pyplot as plt
from math import cos, sin, radians, pi
```

The measurement file contains the distances and the corresponding angles in a `csv` (comma separated values) format. Let's write the `file_read` method:

```
def file_read(f):
    """
    Reading LIDAR laser beams (angles and corresponding distance data)
    """
    measures = [line.split(",") for line in open(f)]
    angles = []
    distances = []
    for measure in measures:
        angles.append(float(measure[0]))
        distances.append(float(measure[1]))
    angles = np.array(angles)
    distances = np.array(distances)
    return angles, distances
```

From the distances and the angles it is easy to determine the `x` and `y` coordinates with `sin` and `cos`. In order to display them, `matplotlib.pyplot` (`plt`) is used.

```
ang, dist = file_read("lidar01.csv")
ox = np.sin(ang) * dist
oy = np.cos(ang) * dist
plt.figure(figsize=(6,10))
plt.plot([oy, np.zeros(np.size(oy))], [ox, np.zeros(np.size(oy))], "ro-") # lines from 0,0 to the measurement points
plt.axis("equal")
bottom, top = plt.ylim() # return the current ylim
plt.ylim((top, bottom)) # rescale y axis, to match the grid orientation
plt.grid(True)
plt.show()
```

The `lidar_to_grid_map.py` module contains handy functions which can be used to convert a 2D range measurement to a grid map. For example, `bresenham` gives the straight line between two points in a grid map. Let's see how this works.

```
import lidar_to_grid_map as lg

map1 = np.ones((50, 50)) * 0.5
line = lg.bresenham((2, 2), (40, 30))
for l in line:
    map1[l[0]][l[1]] = 1
plt.imshow(map1)
plt.colorbar()
plt.show()

line = lg.bresenham((2, 30), (40, 30))
for l in line:
    map1[l[0]][l[1]] = 1
line = lg.bresenham((2, 30), (2, 2))
for l in line:
    map1[l[0]][l[1]] = 1
plt.imshow(map1)
plt.colorbar()
plt.show()
```

To fill empty areas, a queue-based algorithm can be run on an initialized occupancy map. Given a center point, the algorithm checks the neighbouring cells in each iteration and stops expanding at obstacles and free boundaries.
``` from collections import deque def flood_fill(cpoint, pmap): """ cpoint: starting point (x,y) of fill pmap: occupancy map generated from Bresenham ray-tracing """ # Fill empty areas with queue method sx, sy = pmap.shape fringe = deque() fringe.appendleft(cpoint) while fringe: n = fringe.pop() nx, ny = n # West if nx > 0: if pmap[nx - 1, ny] == 0.5: pmap[nx - 1, ny] = 0.0 fringe.appendleft((nx - 1, ny)) # East if nx < sx - 1: if pmap[nx + 1, ny] == 0.5: pmap[nx + 1, ny] = 0.0 fringe.appendleft((nx + 1, ny)) # North if ny > 0: if pmap[nx, ny - 1] == 0.5: pmap[nx, ny - 1] = 0.0 fringe.appendleft((nx, ny - 1)) # South if ny < sy - 1: if pmap[nx, ny + 1] == 0.5: pmap[nx, ny + 1] = 0.0 fringe.appendleft((nx, ny + 1)) ``` This algotihm will fill the area bounded by the yellow lines starting from a center point (e.g. (10, 20)) with zeros: ``` flood_fill((10, 20), map1) map_float = np.array(map1)/10.0 plt.imshow(map1) plt.colorbar() plt.show() ``` Let's use this flood fill on real data: ``` xyreso = 0.02 # x-y grid resolution yawreso = math.radians(3.1) # yaw angle resolution [rad] ang, dist = file_read("lidar01.csv") ox = np.sin(ang) * dist oy = np.cos(ang) * dist pmap, minx, maxx, miny, maxy, xyreso = lg.generate_ray_casting_grid_map(ox, oy, xyreso, False) xyres = np.array(pmap).shape plt.figure(figsize=(20,8)) plt.subplot(122) plt.imshow(pmap, cmap = "PiYG_r") plt.clim(-0.4, 1.4) plt.gca().set_xticks(np.arange(-.5, xyres[1], 1), minor = True) plt.gca().set_yticks(np.arange(-.5, xyres[0], 1), minor = True) plt.grid(True, which="minor", color="w", linewidth = .6, alpha = 0.5) plt.colorbar() plt.show() ```
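For reference, the `bresenham` helper imported from `lidar_to_grid_map` above implements the classic integer line-drawing algorithm. Below is a minimal illustrative sketch (a generic textbook version, not the module's actual implementation):

```
def bresenham_sketch(start, end):
    """Illustrative Bresenham line: returns the grid cells between start and end.
    Generic textbook version, not the implementation in lidar_to_grid_map.py."""
    x0, y0 = start
    x1, y1 = end
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    points = []
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:   # step in x
            err -= dy
            x0 += sx
        if e2 < dx:    # step in y
            err += dx
            y0 += sy
    return points

# Same usage pattern as lg.bresenham above:
# for l in bresenham_sketch((2, 2), (40, 30)):
#     map1[l[0]][l[1]] = 1
```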
<a href="https://colab.research.google.com/github/DebjitHore/Complete-Data-Structures-and-Algorithms-in-Python/blob/main/Recursion_Udemy.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

Recursion is a way of solving a problem by having a function call itself: the same operation is performed on progressively smaller inputs, with a base condition to prevent an infinite loop. A real-life analogy is a Russian doll.

# Pseudocode

```
def openRussianDoll(doll):
  if doll==1: #Smallest Doll
    print('All Dolls opened')
  else:
    openRussianDoll(doll-1)
```

# Recursion logic

Stack memory is used, and calls are resolved last in, first out. Recursion is usually less time- and space-efficient than iteration, but it is easier to code.

# How to write Recursion

## Factorial

```
def factorial(n):
  assert n>=0 and int(n)==n, 'The number must be a positive integer only'
  if n in [0, 1]:
    return 1
  else:
    return n*factorial(n-1)

factorial(5)
```

## Fibonacci

n-th term of a Fibonacci series

```
def fibonacci(n):
  assert n>=0 and int(n)==n, 'The number must be a positive integer'
  if n in [0, 1]:
    return n
  else:
    return fibonacci(n-1)+fibonacci(n-2)

fibonacci(5)
```

Fibonacci series consisting of n terms

```
a = 0
b = 1
fib_list = [0, 1]
n = int(input('No of terms you want'))
assert int(n)==n and n>0
i = 0
if n==1:
  print(0)
else:
  while i<n-2:
    temp = a+b
    fib_list.append(temp)
    a = b
    b = temp
    i += 1
  print(fib_list)
```

## Sum of digits of an integer number

```
def sumOfDigits(n):
  assert int(n)==n and n>=0, 'Must be a positive integer'
  if n>0:
    return n%10 + sumOfDigits(int(n/10))
  else:
    return 0

sumOfDigits(546)
#sumOfDigits(-87)
```

## Power of a number

```
def powerOfNumber(x, n):
  assert int(n)==n and n>=0
  if n==0:
    return 1
  if n==1:
    return x
  else:
    return x*powerOfNumber(x, n-1)

powerOfNumber(-2, 5)

def gcd(a, b):
  assert int(a)==a and int(b)==b, 'Integer numbers only'
  if a%b==0:
    return b
  else:
    return gcd(b, a%b)

print(gcd(48, 18))
```

## Decimal to Binary

```
def decToBinary(n):
  assert int(n)==n
  if n == 0:
    return 0
  if n == 1:
    return 1
  else:
    return int(n%2) + 10*decToBinary(int(n/2))

decToBinary(13)
```

## Reverse of a number using Recursion

```
def reverseNumber(n, r):
  #assert int(n)==n and n>0
  if n == 0:
    return r
  else:
    return reverseNumber(int(n/10), r*10 + n%10)

reverseNumber(647, 0)
```

# Largest number in an array

```
def findMaxNumRec(sampleArray, n): # n is the length of the array
  if n==1:
    return sampleArray[0]
  else:
    return max(sampleArray[n-1], findMaxNumRec(sampleArray, n-1))

findMaxNumRec([11, 14, 7, 9, 3, 12], 6)
```
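To make the iteration-vs-recursion trade-off mentioned above concrete, here is a small side-by-side sketch (not from the original course material; timings will vary by machine) of the same factorial computed recursively and iteratively:

```
import timeit

def factorial_recursive(n):
    return 1 if n in (0, 1) else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# Both give the same answer; the iterative version avoids the call-stack overhead.
print(factorial_recursive(10), factorial_iterative(10))
print("recursive:", timeit.timeit(lambda: factorial_recursive(300), number=1000))
print("iterative:", timeit.timeit(lambda: factorial_iterative(300), number=1000))
```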
# Settings/scenario management

Capacity expansion modeling is often an exercise in exploring the differences between technical, cost, or policy scenarios across a range of planning years, so PowerGenome has a built-in method for creating modified versions of a single baseline scenario. Within the settings file this shows up in how planning periods are defined and in a nested dictionary that allows any "normal" parameter to be modified for different scenarios.

## Scenario management files

Scenario management is deeply built into the input file structure. So much so, in fact, that it might be difficult to create inputs for a single scenario without following the layout designed for multiple scenarios.

### Scenario names

Each scenario has a long name and a short identifier, defined in the `case_id_description_fn` file (`test_case_id_description.csv` in the example). These cases are assumed to be the same across planning periods. When using the command line interface, case folders are created using the format `<case_id>_<model_year>_<case_description>`, so they look something like `p1_2030_Tech_CES_with_RPS`.

Case IDs are used in the `scenario_definitions_fn` file (it's `test_scenario_inputs.csv` or `test_scenario_inputs_short.csv` in the example), and the `emission_policies_fn` (`test_rps_ces_emission_limits.csv`).

## Planning periods

When running a single planning period, many functions expect the parameters `model_year` and `model_first_planning_year` to be integers (a single year). In a multi-planning period settings file, each of these parameters should be a list of integers and they should be the same length. They now represent a paired series of the first and last years in each of the planning periods to be investigated.

```
model_year: [2030, 2045]
model_first_planning_year: [2020, 2031]
```

In this case, planning years of 2030 and 2045 will be investigated. Hourly demand is calculated for planning years. The first year in a planning period is needed because technology costs are calculated as the average of all costs over a planning period. So for the first planning period of 2020-2030, load/demand will be calculated for 2030 and the cost of building a new generator will be the average of all values from 2020-2030.

## Settings management

The parameter `settings_management` is a nested dictionary with alternative values for any parameters that will be modified as part of a sensitivity, or that might have different values across planning periods. The structure of this dictionary is:

```
settings_management:
  <model year>:
    <sensitivity column name>:
      <sensitivity value name>:
        <settings parameter name>: <settings parameter value>
```

`<sensitivity column name>` is the name of a column in the `scenario_definitions_fn` parameter (it's `test_scenario_inputs.csv` in the example). The first columns of this file have a `case_id` and `year` that uniquely define each model run. Model runs might test the effect of different natural gas prices (`ng_price` in the example file), with values of `reference` and `low`.
The corresponding section of the `settings_management` parameter for the planning year 2030 will look like: ``` settings_management: 2030: ng_price: # <sensitivity column name> reference: # <sensitivity value name> aeo_fuel_scenarios: # <settings parameter name> naturalgas: reference # <settings parameter value> low: aeo_fuel_scenarios: naturalgas: high_resource ``` So in this case we're modifying the settings parameter `aeo_fuel_scenarios` by defining different AEO scenario names for the `naturalgas` fuel type. By default, this section of the settings file looks like: ``` eia_series_scenario_names: reference: REF2020 low_price: LOWPRICE high_price: HIGHPRICE high_resource: HIGHOGS low_resource: LOWOGS aeo_fuel_scenarios: coal: reference naturalgas: reference distillate: reference uranium: reference ``` So we're changing the AEO case from `reference` to `high_resource` (which correspond to `REF2020` and `HIGHOGS` in the EIA open data API). It's important to understand that parameter values are updated by searching for `key:value` pairs in a dictionary and updating them. This means that in the example above I was able to change the AEO scenario for just natural gas prices, and I didn't have to list the other fuel types. But if the `value` is a list and only one item should be changed, then the entire list must be included in `settings_management`. As an example, cost scenarios for new-build generators are usually defined like: ``` # Format for each list item is <technology>, <tech_detail>, <cost_case>, <size> atb_new_gen: - [NaturalGas, CCCCSAvgCF, Mid, 500] - [NaturalGas, CCAvgCF, Mid, 500] - [NaturalGas, CTAvgCF, Mid, 100] - [LandbasedWind, LTRG1, Mid, 1] - [OffShoreWind, OTRG10, Mid, 1] - [UtilityPV, LosAngeles, Mid, 1] - [Battery, "*", Mid, 1] ``` If I want to have low cost renewables capex in a scenario, the corresponding section of `settings_management` should include all technologies, even if they don't change. This is because the ATB technologies are defined in a list of lists. ``` settings_management: 2030: renewable_capex: low: atb_new_gen: - [NaturalGas, CCCCSAvgCF, Mid, 500] - [NaturalGas, CCAvgCF, Mid, 500] - [NaturalGas, CTAvgCF, Mid, 100] - [LandbasedWind, LTRG1, Low, 1] - [OffShoreWind, OTRG10, Low, 1] - [UtilityPV, LosAngeles, Low, 1] - [Battery, "*", Low, 1] ```` ``` %load_ext autoreload %autoreload 2 from pathlib import Path import pandas as pd from powergenome.util import ( build_scenario_settings, init_pudl_connection, load_settings, check_settings ) ``` ## Import settings Settings are imported by reading the YAML file and converting it to a Python dictionary. In the code below I'm loading the settings and creating a nested dictionary `scenario_settings` that has all of the modified parameters for each case. Settings can also be checked for some common errors using the `check_settings` function. 
``` cwd = Path.cwd() settings_path = ( cwd.parent / "example_systems" / "CA_AZ" / "test_settings.yml" ) settings = load_settings(settings_path) settings["input_folder"] = settings_path.parent / settings["input_folder"] scenario_definitions = pd.read_csv( settings["input_folder"] / settings["scenario_definitions_fn"] ) scenario_settings = build_scenario_settings(settings, scenario_definitions) pudl_engine, pudl_out, pg_engine = init_pudl_connection( freq="AS", start_year=min(settings.get("data_years")), end_year=max(settings.get("data_years")), ) check_settings(settings, pg_engine) ``` We can check to see if the natural gas price has changed from case `p1` to `s1`, and confirm that they are different. ``` scenario_settings[2030]["p1"]["aeo_fuel_scenarios"] scenario_settings[2030]["s1"]["aeo_fuel_scenarios"] ``` The values of `model_year` and `model_first_planning_year` have also changed from lists to integers. ``` settings["model_year"], settings["model_first_planning_year"] scenario_settings[2030]["p1"]["model_year"], scenario_settings[2030]["p1"]["model_first_planning_year"] scenario_settings[2045]["p1"]["model_year"], scenario_settings[2045]["p1"]["model_first_planning_year"] ``` ## Scenario data not defined in the settings file Some case/scenario data is defined in input CSV files rather that the settings YAML file. This is true for demand response (`demand_response_fn`, or the example file `test_ev_load_shifting.csv`). If you are supplying your own hourly demand profiles, it is also true for `regional_load_fn` (`test_regional_load_profiles.csv`). ### Demand response The demand response CSV file has 4 header rows, which correspond to the resource type, the model planning year, the scenario name, and the model region. The resource type should match a resource defined in the settings parameter `demand_response_resources`. ``` # Name of the DSM resource, fraction of load that can be shifted, and number of hours # that it can be shifted demand_response_resources: 2030: ev_load_shifting: fraction_shiftable: 0.8 parameter_values: Max_DSM_delay: 5 DR: 2 2045: ev_load_shifting: fraction_shiftable: 0.8 parameter_values: Max_DSM_delay: 5 DR: 2 demand_response: 'moderate' ``` The settings parameter `demand_response` - which can be changed via `settings_management` - is used to select the DR scenario in the CSV file. ### User-supplied load If you want to use your own load projections, define an input file with the parameter `regional_load_fn`. The first three rows are headers corresponding to the model year, electrification scenario, and model region. The electrification scenario names should match values in the column `electrification` of `scenario_definitions_fn`. This doesn't match with how demand response is handled and may be changed in the future.
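The multi-row headers described above can be read directly with pandas. This is only a sketch: the file comes from the `demand_response_fn` setting in the example, and the exact column labels depend on your own inputs.

```
import pandas as pd

# The demand response file has 4 header rows:
# resource type, model planning year, scenario name, and model region.
dr_profiles = pd.read_csv(
    settings["input_folder"] / settings["demand_response_fn"],
    header=[0, 1, 2, 3],
)

# Columns come back as a MultiIndex, so a single profile is selected with a tuple, e.g.
# dr_profiles[("ev_load_shifting", "2030", "moderate", "<your region>")]
```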
``` import pandas as pd import matplotlib.pyplot as plt import numpy as np import statistics import math from sklearn.linear_model import LinearRegression from scipy.optimize import curve_fit er_cas_100_data = pd.read_csv('proc_er_cas_100.csv') del er_cas_100_data['Unnamed: 0'] er_500_50_0012 = pd.read_csv('proc_er_500_50_0012.csv') del er_500_50_0012['Unnamed: 0'] er_1000_50_0006 = pd.read_csv('proc_er_1000_50_0006.csv') del er_1000_50_0006['Unnamed: 0'] er_1500_50_0004 = pd.read_csv('proc_er_1500_50_0004.csv') del er_1500_50_0004['Unnamed: 0'] er_cas_100_data er_500_50_0012 er_1000_50_0006 er_1500_50_0004 er_cas_100_dict = {} for i in range(100): target = list(range(i*30, (i+1)*30)) temp_er_cas_100 = er_cas_100_data[i*30 + 0 : (i+1)*30] alive = 0 for index in target: if (temp_er_cas_100['alive_nodes'][index] != 0) and (temp_er_cas_100['fin_larg_comp_a'][index] != 0): alive += 1 p_k = 0.8 * 499 * temp_er_cas_100['t'][index] if i == 0: er_cas_100_dict['attack_size'] = [statistics.mean(temp_er_cas_100['attack_size'].values.tolist())] er_cas_100_dict['t'] = [statistics.mean(temp_er_cas_100['t'].values.tolist())] er_cas_100_dict['init_intra_edge_a'] = [statistics.mean(temp_er_cas_100['init_intra_edge_a'].values.tolist())] er_cas_100_dict['alive ratio'] = [alive / 30] er_cas_100_dict['p<k>'] = [p_k] else: er_cas_100_dict['attack_size'].append(statistics.mean(temp_er_cas_100['attack_size'].values.tolist())) er_cas_100_dict['t'].append(statistics.mean(temp_er_cas_100['t'].values.tolist())) er_cas_100_dict['init_intra_edge_a'].append(statistics.mean(temp_er_cas_100['init_intra_edge_a'].values.tolist())) er_cas_100_dict['alive ratio'].append(alive / 30) er_cas_100_dict['p<k>'].append(p_k) plt.plot(er_cas_100_dict['p<k>'], er_cas_100_dict['alive ratio']) plt.title('The ratio that shows whether largest component is alive or not') plt.show() er_500_50_0012_dict = {} for i in range(100): target = list(range(i*50, (i+1)*50)) temp_er_500_50_0012 = er_500_50_0012[i*50 + 0 : (i+1)*50] alive = 0 for index in target: if (temp_er_500_50_0012['alive_nodes'][index] != 0) and (temp_er_500_50_0012['fin_larg_comp_a'][index] != 0): alive += 1 p_k = 0.8 * 499 * temp_er_500_50_0012['t'][index] if i == 0: er_500_50_0012_dict['attack_size'] = [statistics.mean(temp_er_500_50_0012['attack_size'].values.tolist())] er_500_50_0012_dict['t'] = [statistics.mean(temp_er_500_50_0012['t'].values.tolist())] er_500_50_0012_dict['init_intra_edge_a'] = [statistics.mean(temp_er_500_50_0012['init_intra_edge_a'].values.tolist())] er_500_50_0012_dict['alive ratio'] = [alive / 50] er_500_50_0012_dict['p<k>'] = [p_k] er_500_50_0012_dict['alive_nodes'] = [statistics.mean(temp_er_cas_100['alive_nodes'].values.tolist())] else: er_500_50_0012_dict['attack_size'].append(statistics.mean(temp_er_500_50_0012['attack_size'].values.tolist())) er_500_50_0012_dict['t'].append(statistics.mean(temp_er_500_50_0012['t'].values.tolist())) er_500_50_0012_dict['init_intra_edge_a'].append(statistics.mean(temp_er_500_50_0012['init_intra_edge_a'].values.tolist())) er_500_50_0012_dict['alive ratio'].append(alive / 50) er_500_50_0012_dict['p<k>'].append(p_k) er_500_50_0012_dict['alive_nodes'].append(statistics.mean(temp_er_cas_100['alive_nodes'].values.tolist())) plt.plot(er_500_50_0012_dict['p<k>'], er_500_50_0012_dict['alive ratio']) plt.axvline(x=2.4554, color='r', linestyle='--') plt.title('N=500, K=100') plt.xlabel("p<k>") plt.ylabel("proportion of survived largest component") plt.savefig("er_n500_k100") plt.show() X = er_500_50_0012_dict['p<k>'] Y = 
er_500_50_0012_dict['log_reg_p<k>'] def sigmoid(x, L ,x0, k, b): y = L / (1 + np.exp(-k*(x-x0)))+b return (y) p0 = [max(Y), np.median(X),1,min(Y)] # this is an mandatory initial guess popt, pcov = curve_fit(sigmoid, X, Y,p0, method='dogbox') plt.scatter(X, Y, marker='.') plt.plot(X, Y, linewidth=2) plt.plot(X, sigmoid(X, *popt), color='red', linewidth=2) plt.show() plt.plot(er_500_50_0012_dict['p<k>'], er_500_50_0012_dict['log_reg_p<k>']) plt.axvline(x=2.4554, color='r', linestyle='--') plt.title('N=500, K=100') plt.xlabel("p<k>") plt.ylabel("percentage of survived largest component") plt.savefig("er_n500_k100") plt.show() er_1000_50_0006_dict = {} for i in range(100): target = list(range(i*50, (i+1)*50)) temp_er_1000_50_0006 = er_1000_50_0006[i*50 + 0 : (i+1)*50] alive = 0 for index in target: if (temp_er_1000_50_0006['alive_nodes'][index] != 0) and (temp_er_1000_50_0006['fin_larg_comp_a'][index] != 0): alive += 1 p_k = 0.8 * 999 * temp_er_1000_50_0006['t'][index] if i == 0: er_1000_50_0006_dict['attack_size'] = [statistics.mean(temp_er_1000_50_0006['attack_size'].values.tolist())] er_1000_50_0006_dict['t'] = [statistics.mean(temp_er_1000_50_0006['t'].values.tolist())] er_1000_50_0006_dict['init_intra_edge_a'] = [statistics.mean(temp_er_1000_50_0006['init_intra_edge_a'].values.tolist())] er_1000_50_0006_dict['alive ratio'] = [alive / 50] er_1000_50_0006_dict['p<k>'] = [p_k] else: er_1000_50_0006_dict['attack_size'].append(statistics.mean(temp_er_1000_50_0006['attack_size'].values.tolist())) er_1000_50_0006_dict['t'].append(statistics.mean(temp_er_1000_50_0006['t'].values.tolist())) er_1000_50_0006_dict['init_intra_edge_a'].append(statistics.mean(temp_er_1000_50_0006['init_intra_edge_a'].values.tolist())) er_1000_50_0006_dict['alive ratio'].append(alive / 50) er_1000_50_0006_dict['p<k>'].append(p_k) plt.plot(er_1000_50_0006_dict['p<k>'], er_1000_50_0006_dict['alive ratio']) plt.axvline(x=2.4554, color='r', linestyle='--') plt.title('N=1000, K=200') plt.xlabel("p<k>") plt.ylabel("proportion of survived largest component") plt.savefig("er_n1000_k200") plt.show() er_1500_50_0004_dict = {} for i in range(100): target = list(range(i*50, (i+1)*50)) temp_er_1500_50_0004 = er_1500_50_0004[i*50 + 0 : (i+1)*50] alive = 0 for index in target: if (temp_er_1500_50_0004['alive_nodes'][index] != 0) and (temp_er_1500_50_0004['fin_larg_comp_a'][index] != 0): alive += 1 p_k = 0.8 * 1499 * temp_er_1500_50_0004['t'][index] if i == 0: er_1500_50_0004_dict['attack_size'] = [statistics.mean(temp_er_1500_50_0004['attack_size'].values.tolist())] er_1500_50_0004_dict['t'] = [statistics.mean(temp_er_1500_50_0004['t'].values.tolist())] er_1500_50_0004_dict['init_intra_edge_a'] = [statistics.mean(temp_er_1500_50_0004['init_intra_edge_a'].values.tolist())] er_1500_50_0004_dict['alive ratio'] = [alive / 50] er_1500_50_0004_dict['p<k>'] = [p_k] else: er_1500_50_0004_dict['attack_size'].append(statistics.mean(temp_er_1500_50_0004['attack_size'].values.tolist())) er_1500_50_0004_dict['t'].append(statistics.mean(temp_er_1500_50_0004['t'].values.tolist())) er_1500_50_0004_dict['init_intra_edge_a'].append(statistics.mean(temp_er_1500_50_0004['init_intra_edge_a'].values.tolist())) er_1500_50_0004_dict['alive ratio'].append(alive / 50) er_1500_50_0004_dict['p<k>'].append(p_k) plt.plot(er_1500_50_0004_dict['p<k>'], er_1500_50_0004_dict['alive ratio']) plt.axvline(x=2.4554, color='r', linestyle='--') plt.title('N=1500, K=300') plt.xlabel("p<k>") plt.ylabel("proportion of survived largest component") plt.savefig("er_n1500_k300") 
plt.show() plt.plot(er_500_50_0012_dict['p<k>'], er_500_50_0012_dict['alive ratio']) plt.plot(er_1000_50_0006_dict['p<k>'], er_1000_50_0006_dict['alive ratio']) plt.plot(er_1500_50_0004_dict['p<k>'], er_1500_50_0004_dict['alive ratio']) plt.axvline(x=2.4554, color='r', linestyle='--') plt.title('Total Graph (Expanded)') plt.xlabel("p<k>") plt.ylabel("proportion of survived largest component") plt.legend(['N=500', 'N=1000', 'N=1500']) plt.savefig("er_total_expanded") plt.show() plt.plot(er_500_50_0012_dict['p<k>'], er_500_50_0012_dict['alive ratio']) plt.plot(er_1000_50_0006_dict['p<k>'], er_1000_50_0006_dict['alive ratio']) plt.plot(er_1500_50_0004_dict['p<k>'], er_1500_50_0004_dict['alive ratio']) plt.axvline(x=2.4554, color='r', linestyle='--') plt.title('Total Graph') plt.xlabel("p<k>") plt.ylabel("proportion of survived largest component") plt.legend(['N=500', 'N=1000', 'N=1500']) plt.xlim([2.36, 2.5]) plt.savefig("er_total") plt.show() ```
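The per-ensemble aggregation above is repeated nearly verbatim for each network size. A sketch of a reusable helper (column names and the `0.8 * (N - 1) * t` expression are taken from the loops above; only the columns used in the plots are kept):

```
def aggregate_alive_ratio(df, n_nodes, runs_per_point, n_points=100):
    """Collapse raw cascade runs into per-point means and the alive ratio.
    Mirrors the loops above: a run counts as 'alive' when both alive_nodes and
    fin_larg_comp_a are non-zero, and p<k> = 0.8 * (n_nodes - 1) * t."""
    out = {"attack_size": [], "t": [], "alive ratio": [], "p<k>": []}
    for i in range(n_points):
        chunk = df.iloc[i * runs_per_point:(i + 1) * runs_per_point]
        alive = ((chunk["alive_nodes"] != 0) & (chunk["fin_larg_comp_a"] != 0)).sum()
        out["attack_size"].append(chunk["attack_size"].mean())
        out["t"].append(chunk["t"].mean())
        out["alive ratio"].append(alive / runs_per_point)
        out["p<k>"].append(0.8 * (n_nodes - 1) * chunk["t"].iloc[-1])
    return out

# e.g. er_1500_50_0004_dict = aggregate_alive_ratio(er_1500_50_0004, 1500, 50)
```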
```
from pymongo import MongoClient
import pandas as pd
import datetime

client = MongoClient()
characters = client.ck2.characters
```

This notebook tries to build a world tree by drawing an edge between every character in the save file and their father and mother. Running this code will generate a network with over 270,000 nodes out of a total of almost 400,000. This was taking far too long for Gephi to graph, so I did not continue.

The next 3 notebooks contain the first code I wrote for extracting data from the save file. I would manually copy and paste out the dynasty data from both files, the character data and the title data, and save them in separate files.

## Get Parent/Child Edges

```
pipeline = [
    {
        "$unwind" : "$parents"
    },
    {
        "$lookup" : {
            "from" : "dynasties",
            "localField" : "dnt",
            "foreignField" : "_id",
            "as" : "dynasty"
        }
    },
    {
        "$unwind" : "$dynasty"
    },
    {
        "$match" : {"parents" : {"$nin" : [None]}, "$or" : [{"cul" : "irish"}, {"dynasty.culture" : "irish"}]}
    },
    {
        "$project" : {"_id" : 1, "parents" : 1}
    }
]

relation_df = pd.DataFrame(list(characters.aggregate(pipeline)))
```

## Get all Characters

```
pipeline = [
    {
        "$lookup" : {
            "from" : "dynasties",
            "localField" : "dnt",
            "foreignField" : "_id",
            "as" : "dynasty"
        }
    },
    {
        "$unwind" : "$dynasty"
    },
    {
        "$project" : {
            "_id" : 1,
            "name" : {"$concat" : ["$bn", " ", "$dynasty.name"]},
            "culture" : {"$ifNull" : ["$cul", "$dynasty.culture"]},
            "religion" : {"$ifNull" : ["$rel", "$dynasty.religion"]}
        }
    }
]
```

## Build Network

```
import networkx as nx
import matplotlib.pyplot as plt

chars = list(characters.aggregate(pipeline))

# Drop keys whose value is None so missing fields can be detected with "in"
for char in chars:
    for key in list(char.keys()):
        val = char[key]
        if isinstance(val, type(None)):
            del char[key]

G = nx.Graph()
for char in chars:
    if "culture" in char and "religion" in char and "name" in char:
        G.add_node(char["_id"], name=char['name'], culture=char['culture'], religion=char['religion'])

# Reset the index so the positional .loc lookups below stay valid after dropping rows
relation_df = relation_df.dropna(axis=0, how='any').reset_index(drop=True)
for i in range(len(relation_df)):
    G.add_edge(relation_df.loc[i, "_id"], relation_df.loc[i, "parents"])

G.remove_nodes_from(nx.isolates(G))  # drop unconnected nodes
#nx.draw(G)
#plt.show()

nx.write_graphml(max(nx.connected_component_subgraphs(G), key=len), "ck2-World-Tree-2.graphml")
```
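On networkx 2.4 and later, `connected_component_subgraphs` was removed and `nx.isolates` returns a generator, so the last few lines fail as written. A hedged equivalent of those final steps for newer versions (assuming `G` is built as above):

```
# Equivalent of the final steps above on networkx >= 2.4
G.remove_nodes_from(list(nx.isolates(G)))           # materialize the generator before mutating G
largest_cc_nodes = max(nx.connected_components(G), key=len)
largest_cc = G.subgraph(largest_cc_nodes).copy()
nx.write_graphml(largest_cc, "ck2-World-Tree-2.graphml")
```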
# Python basics

## Data types

```
# Writing [variable name] = [value] assigns the value to a variable with that name. The assigned value can be used later.
x = 1  # integer
y = 2.1  # floating-point number
z = 2 * x + y  # arithmetic operations (multiply -> *, divide -> /)
print(z)  # print(x) displays x. You can also write print(x, y, ...), which prints x and y separated by a space.

str1 = "This is a string"
str2 = 'This is also a string'
str3 = """
Written this way,
a string spanning multiple lines
can be expressed
"""
str4 = '''
This form
works too
'''
str5 = str1 + str2  # + on strings concatenates them
print(str5)

arr1 = [1, 2, 3]  # list
arr2 = [3, 4, 5]
arr3 = arr1 + arr2  # + on lists concatenates them
print(arr3)
arr4 = ["string", "string"]  # a list can hold any kind of value
arr5 = [1, "string"]  # strings and numbers can be mixed

# arr[i] returns the (i+1)-th value of the list arr. If i is negative, it returns the i-th value counted from the end.
first = arr1[0]  # returns the 1st value
print(first)
second = arr1[1]  # returns the 2nd value
print(second)
last = arr1[-1]  # returns the 1st value from the end
print(last)

# arr[i:j] returns, as a list, the values from the (i+1)-th up to (but not including) the (j+1)-th.
# i and j can be omitted, meaning "from the start" and "to the end" respectively.
print(arr1[1:])  # values from the 2nd onward
print(arr1[:2])  # values before the 3rd
print(arr3[1:3])  # values from the 2nd up to (but not including) the 4th

# Strings can be indexed like lists
s = "abcde"
print(s[1])
print(s[1:3])

# arr.append(x) adds the element x to the end of the list arr
arr = [1, 2]
arr.append(3)
print(arr)

dict1 = {"a": 1, "b": 3, "c": 4}  # dictionary. Writing {key1: val1, key2: val2, ...} creates a dictionary.
# A dictionary is unordered data made of keys and values
# Each key maps to exactly one value
print(dict1["a"])  # dict[key] returns the value associated with key in the dictionary dict
dict2 = {}  # an empty dictionary can also be created
dict2["a"] = 2  # dict[key] = value associates the value with the key in the dictionary dict
dict2["b"] = 10
print(dict2)
dict3 = {1: "b", 3: "d"}  # keys can be numbers, strings, etc., but not lists or dictionaries; values can be anything

# Boolean type: True or False
bool1 = True
bool2 = False
bool3 = 1 == 2  # 1 == 2 is False
bool4 = 5 > 2  # 5 > 2 is True
bool5 = 4 in [1, 2, 3]  # if A is a list, x in A is True when A contains x
print(bool5)
bool6 = 4 in {1: "a", 3: "b", 4: "c"}  # if A is a dictionary, True when x is among A's keys
print(bool6)
bool7 = "cd" in "abcde"  # if A is a string, True when x is a substring of A
print(bool7)

# Booleans are used especially in if statements (details below)
d = {"a": 1, "b": 2}
if "a" in d:
    print("The condition was True")
else:
    print("The condition was False")
```

## Control flow

```
"""
if condition:
    # executed when condition is True
else:
    # executed when condition is False (the else: part can be omitted)
"""
# Long comments can be written with """ or '''
x = 3
if x > 2:
    print("x > 2")
else:
    print("x <= 2")

if x == 3:
    print("x == 3")

# In Python, indentation is part of the syntax!!
# From the point where indentation increases until it returns to the original level,
# lines with the same indentation are treated as the same code block.
# Code block: the range affected by an if statement, for loop, function definition, etc.
if x < 2:  # indent 0
    print("x < 2")  # indent 1
else:  # indent 0
    print("x >= 2")  # indent 1
    print("This line was also executed")  # indent 1; this line is also executed only when x < 2 is not true

if x / 3 == 1:  # indent 0
    print("x / 3 == 1")  # indent 1
else:  # indent 0
    print("x / 3 != 1")  # indent 1
print("This line is executed whatever the condition is")  # indent 0; not in the same code block as the previous line

"""
for x in arr:
    # executed for each element x of the list arr
"""
for x in [0, 1, 2]:
    print(x * 2)

for x in range(4):  # range(x) gives (strictly speaking not a list, but effectively) the integers smaller than x
    print(x * 2 + 2)

for x in range(1, 4):  # range(x, y) gives (effectively) the integers from x up to but not including y
    print(x * 3)

for i, x in enumerate([3, 4, 5]):  # with for i, x in enumerate(arr), i is assigned 0, 1, ... indicating which iteration it is
    print(i, x)

"""
while condition:
    # executed as long as condition is True
"""
i = 0
while i < 3:
    print(i)
    i = i + 1  # can also be written i += 1

"""
try:
    # operations that might raise an exception
except:
    # executed when an exception occurs
"""
try:
    z = "text" + 1  # a string and a number cannot be added
except:
    print("An exception occurred")

try:
    z = "text" + 1  # a string and a number cannot be added
except Exception as e:  # the exception is assigned to the variable e
    print(e)
```

## Functions

```
"""
def function_name(var1, var2, ...):
    # body of the function, e.g. x = var1 + var2
    return x
# This defines a function function_name that takes arguments var1, var2, ... and returns x
"""
def square(x):
    result = x * x
    return result

y = square(2)
print(y)
```
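As a small wrap-up example (not from the original tutorial) that combines a function, a `for` loop, a dictionary, and an `if` statement:

```
def count_letters(text):
    # Returns a dictionary mapping each character in text to how often it appears
    counts = {}
    for ch in text:
        if ch in counts:
            counts[ch] += 1
        else:
            counts[ch] = 1
    return counts

print(count_letters("banana"))  # {'b': 1, 'a': 3, 'n': 2}
```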
``` import numpy as np import itertools import math import scipy import matplotlib.pyplot as plt import matplotlib import matplotlib.patches as patches from matplotlib import animation from matplotlib import transforms from mpl_toolkits.axes_grid1 import make_axes_locatable import xarray as xr import dask from sklearn.cluster import KMeans from sklearn.cluster import AgglomerativeClustering import pandas as pd import netCDF4 def latent_space_analysis(Images, title, iden): mean_image = np.mean(Images, axis=0) var_image = np.std(Images, axis=0) cmap="RdBu_r" fig, ax = plt.subplots(1,2, figsize=(16,2)) cs0 = ax[0].imshow(var_image, cmap=cmap) ax[0].set_title("Image Standard Deviation") cs1 = ax[1].imshow(mean_image, cmap=cmap) ax[1].set_title("Image Mean") ax[0].set_ylim(ax[0].get_ylim()[::-1]) ax[1].set_ylim(ax[1].get_ylim()[::-1]) ax[1].set_xlabel("CRMs") ax[0].set_xlabel("CRMs") ax[0].set_ylabel("Pressure") ax[1].set_yticks([]) y_ticks = np.arange(1300, 0, -300) ax[0].set_yticklabels(y_ticks) ax[1].set_yticklabels(y_ticks) divider = make_axes_locatable(ax[0]) cax = divider.append_axes("right", size="5%", pad=0.05) fig.colorbar(cs0, cax=cax) divider = make_axes_locatable(ax[1]) cax = divider.append_axes("right", size="5%", pad=0.05) fig.colorbar(cs1, cax=cax) plt.suptitle(title) #plt.savefig("/fast/gmooers/gmooers_git/CBRAIN-CAM/MAPS/model_graphs/latent_space_components/"+iden+'_'+title+'.png') z_test_tsne = np.load("/fast/gmooers/gmooers_git/CBRAIN-CAM/MAPS/Synoptic_Latent_Spaces/2D_PCA_Latent_Space__31.npy") Test_Images = np.load("/fast/gmooers/Preprocessed_Data/Centered_50_50/Space_Time_W_Test.npy") Max_Scalar = np.load("/fast/gmooers/Preprocessed_Data/Centered_50_50/Space_Time_Max_Scalar.npy") Min_Scalar = np.load("/fast/gmooers/Preprocessed_Data/Centered_50_50/Space_Time_Min_Scalar.npy") Test_Images = np.interp(Test_Images, (0, 1), (Min_Scalar, Max_Scalar)) plt.scatter(x=z_test_tsne[:, 0], y=z_test_tsne[:, 1], c="#3D9AD1", s=0.1) plt.show() horz_line = np.squeeze(np.argwhere(np.logical_and(z_test_tsne[:,1] > -8.1, z_test_tsne[:,1] < -7.90))) vert_line = np.squeeze(np.argwhere(np.logical_and(z_test_tsne[:,0] > -12.30, z_test_tsne[:,0] < -11.70))) #horz_line = np.squeeze(np.argwhere(np.logical_and(z_test_tsne[:,1] > -8.005, z_test_tsne[:,1] < -7.995))) #vert_line = np.squeeze(np.argwhere(np.logical_and(z_test_tsne[:,0] > -12.025, z_test_tsne[:,0] < -11.975))) horz_line_images = Test_Images[horz_line,:,:] horz_line_latent = z_test_tsne[horz_line,:] vert_line_images = Test_Images[vert_line,:,:] vert_line_latent = z_test_tsne[vert_line,:] horz_line_images_sorted = np.empty(horz_line_images.shape) horz_line_latent_sorted = np.empty(horz_line_latent.shape) vert_line_images_sorted = np.empty(vert_line_images.shape) vert_line_latent_sorted = np.empty(vert_line_latent.shape) count = 0 for i in range(len(horz_line_images_sorted)): ind = np.nanargmin(horz_line_latent[:,0]) horz_line_images_sorted[count,:] = horz_line_images[ind,:] horz_line_latent_sorted[count,:] = horz_line_latent[ind,:] horz_line_latent[ind,:] = np.array([1000.0,1000.0]) #horz_line_images[ind,:] = np.array([1000.0,1000.0]) count = count+1 count = 0 for i in range(len(vert_line_images_sorted)): ind = np.nanargmin(vert_line_latent[:,1]) vert_line_images_sorted[count,:] = vert_line_images[ind,:] vert_line_latent_sorted[count,:] = vert_line_latent[ind,:] vert_line_latent[ind,:] = np.array([10000.0,10000.0]) #vert_line_image[ind,:] = np.array([1000.0,1000.0]) count = count+1 print(np.where(z_test_tsne == horz_line_latent_sorted[0])) 
print(np.where(z_test_tsne == horz_line_latent_sorted[-1])) print(np.where(z_test_tsne == vert_line_latent_sorted[0])) print(np.where(z_test_tsne == vert_line_latent_sorted[-1])) plt.scatter(x=z_test_tsne[:, 0], y=z_test_tsne[:, 1], c="#3D9AD1", s=2.0) plt.scatter(x=horz_line_latent_sorted[:, 0], y=horz_line_latent_sorted[:, 1], c="Red", s=2.0) plt.scatter(x=vert_line_latent_sorted[:, 0], y=vert_line_latent_sorted[:, 1], c="Purple", s=2.0) plt.show() print(horz_line_latent_sorted.shape) print(vert_line_latent_sorted.shape) path = "/DFS-L/DATA/pritchard/gmooers/Workflow/MAPS/SPCAM/100_Days/New_SPCAM5/archive/TimestepOutput_Neuralnet_SPCAM_216/atm/hist/TimestepOutput_Neuralnet_SPCAM_216.cam.h1.2009-01-20-00000.nc" extra_variables = xr.open_dataset(path) ha = extra_variables.hyai.values hb = extra_variables.hybi.values PS = 1e5 Pressures_real = PS*ha+PS*hb fz = 15 lw = 4 siz = 100 XNNA = 1.25 # Abscissa where architecture-constrained network will be placed XTEXT = 0.25 # Text placement YTEXT = 0.3 # Text placement plt.rc('text', usetex=False) matplotlib.rcParams['mathtext.fontset'] = 'stix' matplotlib.rcParams['font.family'] = 'STIXGeneral' #mpl.rcParams["font.serif"] = "STIX" plt.rc('font', family='serif', size=fz) matplotlib.rcParams['lines.linewidth'] = lw others = netCDF4.Dataset("/fast/gmooers/Raw_Data/extras/TimestepOutput_Neuralnet_SPCAM_216.cam.h1.2009-01-01-72000.nc") levs = np.array(others.variables['lev']) new = np.flip(levs) crms = np.arange(1,129,1) Xs, Zs = np.meshgrid(crms, new) horz_line_latent_sorted = np.flip(horz_line_latent_sorted, axis=0) vert_line_latent_sorted = np.flip(vert_line_latent_sorted, axis=0) horz_line_images_sorted = np.flip(horz_line_images_sorted, axis=0) vert_line_images_sorted = np.flip(vert_line_images_sorted, axis=0) # change vx/vy to location on sorted images def mikes_latent_animation(h_coords, v_coords, h_const, v_const, latent_space, xdist, ydist, X, Z, hline, vline, h_images, v_images): fig, ax = plt.subplots(2,2, figsize=(36,16)) feat_list = [] #the real total you need num_steps = len(h_coords) #num_steps = 20 cmap= "RdBu_r" dummy_horz = np.zeros(shape=(30,128)) dummy_horz[:,:] = np.nan dummy_vert = np.zeros(shape=(30,128)) dummy_vert[:,:] = np.nan count = 29 for i in range(num_steps): for j in range(len(dummy_horz)): dummy_horz[count,:] = h_images[i,j,:] if i <= len(v_coords) -1: dummy_vert[count,:] = v_images[i,j,:] else: dummy_vert[count,:] = v_images[-1,j,:] count = count-1 h_rect = patches.Rectangle((h_coords[i],h_const),xdist,ydist,linewidth=4,edgecolor='black',facecolor='none') if i <= len(v_coords) -1: v_rect = patches.Rectangle((v_const,v_coords[i]),xdist,ydist,linewidth=4,edgecolor='black',facecolor='none') else: v_rect = patches.Rectangle((v_const,v_coords[-1]),xdist,ydist,linewidth=4,edgecolor='black',facecolor='none') ax[0,0].scatter(latent_space[:, 0], latent_space[:, 1], c="#3D9AD1", s=0.4, animated=True) ax[0,0].scatter(x=hline[:, 0], y=hline[:, 1], c="Red", s=2.0, animated=True) cs0 = ax[0,0].add_patch(h_rect) cs2 = ax[1,0].scatter(latent_space[:, 0], latent_space[:, 1], c="#3D9AD1", s=0.4, animated=True) ax[1,0].scatter(x=vline[:, 0], y=vline[:, 1], c="Green", s=2.0, animated=True) cs2 = ax[1,0].add_patch(v_rect) cs3 = ax[1,1].pcolor(X, Z, dummy_vert, cmap=cmap, animated=True, vmin = -1.0, vmax = 1.0) ax[1,1].set_title("(y) Vertical Velocity", fontsize=fz*2.0) cs1 = ax[0,1].pcolor(X, Z, dummy_horz, cmap=cmap, animated=True, vmin = -1.0, vmax = 1.0) ax[0,1].set_title("(x) Vertical Velocity", fontsize=fz*2.0) 
ax[0,1].set_xlabel("CRMs", fontsize=fz*1.5) ax[1,1].set_xlabel("CRMs", fontsize=fz*1.5) ax[0,1].set_ylabel("Pressure (hpa)", fontsize=fz*1.5) ax[1,1].set_ylabel("Pressure (hpa)", fontsize=fz*1.5) y_ticks = np.array([1000, 800, 600, 400, 200]) ax[1,1].set_yticklabels(y_ticks) ax[0,1].set_yticklabels(y_ticks) divider = make_axes_locatable(ax[1,1]) cax = divider.append_axes("right", size="5%", pad=0.05) fig.colorbar(cs1, cax=cax) divider = make_axes_locatable(ax[0,1]) cax = divider.append_axes("right", size="5%", pad=0.05) fig.colorbar(cs1, cax=cax) feat_list.append([cs2, cs3, cs1, cs0]) count = 29 ani = animation.ArtistAnimation(fig, feat_list, interval = 125, blit = False, repeat = True) ani.save('/fast/gmooers/gmooers_git/CBRAIN-CAM/MAPS/Animations/Figures/31_W_Axis_Test_Horz_Vert_500.mp4') plt.show() mikes_latent_animation(horz_line_latent_sorted[:,0], vert_line_latent_sorted[:,1], -8.0, -12.0, z_test_tsne, 0.2, 1, Xs, Zs, horz_line_latent_sorted, vert_line_latent_sorted, horz_line_images_sorted, vert_line_images_sorted) x0, y0 = -30, -14 # These are in _pixel_ coordinates!! x1, y1 = 10, 32 length = int(np.hypot(x1-x0, y1-y0)) x, y = np.linspace(x0, x1, 8*length), np.linspace(y0, y1, 8*length) x2, y2 = -38, -12 # These are in _pixel_ coordinates!! x3, y3 = 115, 5 length = int(np.hypot(x3-x2, y3-y2)) x3, y3 = np.linspace(x2, x3, 3*length), np.linspace(y2, y3, 3*length) top_line = np.zeros(shape=(len(x3),2)) shallow_line = np.zeros(shape=(len(x),2)) top_line[:,0] = x3 top_line[:,1] = y3 shallow_line[:,0] = x shallow_line[:,1] = y plt.scatter(x=z_test_tsne[:, 0], y=z_test_tsne[:, 1], c="#3D9AD1", s=0.1) plt.scatter(x=top_line[:, 0], y=top_line[:, 1], c="red", s=0.1) plt.scatter(x=shallow_line[:, 0], y=shallow_line[:, 1], c="green", s=0.1) plt.show() shallow_line.shape z_test_tsne_saved = np.zeros(z_test_tsne.shape) for i in range(len(z_test_tsne)): z_test_tsne_saved[i,:] = z_test_tsne[i,:] def list_maker(original_array, latant_space, image_dataset): new_list = np.empty(original_array.shape) value_list =np.empty(latant_space[:,0].shape) new_images = np.empty(shape=(len(original_array),30,128)) for i in range(len(original_array)): temp_x = original_array[i,0] temp_y = original_array[i,1] for j in range(len(latant_space)): #value_list[j] = np.abs(temp_x-latant_space[j,0])+np.abs(temp_y-latant_space[j,1]) value_list[j] = np.sqrt((temp_x-latant_space[j,0])**2+(temp_y-latant_space[j,1])**2) point = np.argmin(value_list) new_list[i,:] = latant_space[point] new_images[i,:,:] = image_dataset[point,:,:] latant_space[point] = np.array([100000,100000]) value_list[:] = np.nan return new_list, new_images axis_list, axis_images = list_maker(top_line, z_test_tsne, Test_Images) shallow_list, shallow_images = list_maker(shallow_line, z_test_tsne, Test_Images) plt.scatter(x=z_test_tsne_saved[:, 0], y=z_test_tsne_saved[:, 1], c="#3D9AD1", s=0.1) plt.scatter(x=top_line[:, 0], y=top_line[:, 1], c="red", s=0.5) plt.scatter(x=axis_list[:, 0], y=axis_list[:, 1], c="green", s=0.5) plt.scatter(x=shallow_line[:, 0], y=shallow_line[:, 1], c="red", s=0.5) plt.scatter(x=shallow_list[:, 0], y=shallow_list[:, 1], c="green", s=0.5) print(shallow_line.shape) print(axis_list.shape) print(shallow_list.shape) print(top_line.shape) shallow_line = np.flip(shallow_line, axis=0) axis_list = np.flip(axis_list, axis=0) shallow_list = np.flip(shallow_list, axis=0) top_line = np.flip(top_line, axis=0) axis_images = np.flip(axis_images, axis=0) shallow_images = np.flip(shallow_images, axis=0) # change vx/vy to location on sorted images 
def rotated_latent_animation(h_coords, v_coords, latent_space, xdist, ydist, X, Z, h_images, v_images, hline, vline): fig, ax = plt.subplots(2,2, figsize=(36,16)) feat_list = [] #the real total you need num_steps = len(h_coords) #num_steps = 10 cmap= "RdBu_r" dummy_horz = np.zeros(shape=(30,128)) dummy_horz[:,:] = np.nan dummy_vert = np.zeros(shape=(30,128)) dummy_vert[:,:] = np.nan count = 29 for i in range(num_steps): for j in range(len(dummy_horz)): dummy_horz[count,:] = h_images[i,j,:] if i <= len(v_coords) -1: dummy_vert[count,:] = v_images[i,j,:] else: dummy_vert[count,:] = v_images[-1,j,:] count = count-1 h_rect = patches.Rectangle((h_coords[i,0],h_coords[i,1]),xdist,ydist,linewidth=4,edgecolor='black',facecolor='none') if i <= len(v_coords) -1: v_rect = patches.Rectangle((v_coords[i,0],v_coords[i,1]),xdist,ydist,linewidth=4,edgecolor='black',facecolor='none') else: v_rect = patches.Rectangle((v_coords[-1,0],v_coords[-1,1]),xdist,ydist,linewidth=4,edgecolor='black',facecolor='none') ax[0,0].scatter(latent_space[:, 0], latent_space[:, 1], c="#3D9AD1", s=0.4, animated=True) ax[0,0].scatter(x=hline[:, 0], y=hline[:, 1], c="Red", s=2.0, animated=True) cs0 = ax[0,0].add_patch(h_rect) cs2 = ax[1,0].scatter(latent_space[:, 0], latent_space[:, 1], c="#3D9AD1", s=0.4, animated=True) ax[1,0].scatter(x=vline[:, 0], y=vline[:, 1], c="Green", s=2.0, animated=True) cs2 = ax[1,0].add_patch(v_rect) cs3 = ax[1,1].pcolor(X, Z, dummy_vert, cmap=cmap, animated=True, vmin = -1.0, vmax = 1.0) #ax[1,1].set_title("(y) Shallow convection", fontsize=fz*2.0) cs1 = ax[0,1].pcolor(X, Z, dummy_horz, cmap=cmap, animated=True, vmin = -1.0, vmax = 1.0) #ax[0,1].set_title("(x) Deep Convection", fontsize=fz*2.0) ax[0,1].set_xlabel("CRMs", fontsize=fz*1.5) ax[1,1].set_xlabel("CRMs", fontsize=fz*1.5) ax[0,1].set_ylabel("Pressure (hpa)", fontsize=fz*1.5) ax[1,1].set_ylabel("Pressure (hpa)", fontsize=fz*1.5) y_ticks = np.array([1000, 800, 600, 400, 200]) ax[1,1].set_yticklabels(y_ticks) ax[0,1].set_yticklabels(y_ticks) divider = make_axes_locatable(ax[1,1]) cax = divider.append_axes("right", size="5%", pad=0.05) fig.colorbar(cs1, cax=cax) divider = make_axes_locatable(ax[0,1]) cax = divider.append_axes("right", size="5%", pad=0.05) fig.colorbar(cs1, cax=cax) feat_list.append([cs2, cs3, cs1, cs0]) count = 29 ani = animation.ArtistAnimation(fig, feat_list, interval = 125, blit = False, repeat = True) ani.save('/fast/gmooers/gmooers_git/CBRAIN-CAM/MAPS/Animations/Figures/31_W_Diagonals_500.mp4') plt.show() rotated_latent_animation(top_line, shallow_line, z_test_tsne_saved, 0.4, 1.5, Xs, Zs, axis_images, shallow_images, axis_list, shallow_list) ```
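`list_maker` above computes every distance in a pure-Python double loop. Below is a numpy sketch that vectorizes the inner distance computation while keeping the same matching behaviour (each latent point is used at most once); function and variable names are illustrative.

```
import numpy as np

def list_maker_vectorized(anchor_points, latent_space, image_dataset):
    """Same idea as list_maker above: for every anchor point, find the closest
    latent point (Euclidean distance) and collect its latent coords and image.
    Matched points are masked out so each latent point is used at most once."""
    latent = np.array(latent_space, dtype=float, copy=True)
    matched_latent = np.empty((len(anchor_points), latent.shape[1]))
    matched_images = np.empty((len(anchor_points),) + image_dataset.shape[1:])
    for i, p in enumerate(anchor_points):
        d = np.linalg.norm(latent - p, axis=1)   # distances to every latent point
        j = int(np.argmin(d))
        matched_latent[i] = latent[j]
        matched_images[i] = image_dataset[j]
        latent[j] = np.inf                       # mask so it cannot be matched again
    return matched_latent, matched_images

# e.g. axis_list, axis_images = list_maker_vectorized(top_line, z_test_tsne_saved, Test_Images)
```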
``` # General from os import path from random import randrange from sklearn.model_selection import train_test_split, GridSearchCV #cross validation from sklearn.metrics import confusion_matrix, plot_confusion_matrix, make_scorer from sklearn.metrics import accuracy_score, roc_auc_score, balanced_accuracy_score from sklearn.preprocessing import LabelEncoder import pandas as pd import numpy as np import matplotlib.pyplot as plt import xgboost as xgb from sklearn.ensemble import RandomForestClassifier import pickle import joblib ``` ## TRAIN SET ``` trainDataFull = pd.read_csv("trainData.csv") trainDataFull.head(3) trainDataFull.info() trainDataFull.describe() trainData = trainDataFull.loc[:,'v1':'v99'] trainData.head(3) trainLabels = trainDataFull.loc[:,'target'] trainLabels.unique() # encode string class values as integers label_encoder = LabelEncoder() label_encoder = label_encoder.fit(trainLabels) label_encoded_y = label_encoder.transform(trainLabels) label_encoded_y X_train, X_test, y_train, y_test = train_test_split(trainData.values, label_encoded_y, test_size = 0.3, random_state = 33, shuffle = True, stratify = label_encoded_y) ``` ## MODEL-2 (Random Forest Classifier) ``` RFC_model = RandomForestClassifier(n_estimators=800, verbose=2, random_state=0, criterion='gini') RFC_model RFC_model.fit(X_train, y_train) # make predictions for test data y_pred = RFC_model.predict(X_test) y_pred predictions = [round(value) for value in y_pred] # evaluate predictions accuracy = accuracy_score(y_test, predictions) print("Accuracy: %.2f%%" % (accuracy * 100.0)) #fig = plt.figure(figsize=(10,10)) plot_confusion_matrix(RFC_model, X_test, y_test, values_format='d') ``` ## Save Valid Score ``` y_score = RFC_model.predict_proba(X_test) y_score[0] valid_score = pd.DataFrame(y_score, columns=['c1','c2','c3','c4','c5','c6','c7','c8','c9']) valid_score valid_score.to_csv('./results/valid-submission-RFC.csv', index = False) ``` ## Save & Load Model ## joblib ``` # Save the model as a pickle in a file joblib.dump(RFC_model, './model/model_RFC.pkl') # Load the model from the file RFC_model_from_joblib = joblib.load('./model/model_RFC.pkl') # Use the loaded model to make predictions RFC_model_predictions = RFC_model_from_joblib.predict(X_test) # evaluate predictions accuracy = accuracy_score(y_test, RFC_model_predictions) print("Accuracy: %.2f%%" % (accuracy * 100.0)) ``` ## GridSearchCV ``` clf = GridSearchCV(RFC_model_model, {'max_depth': [4, 6], 'n_estimators': [100, 200]}, verbose=1, cv=2) clf.fit(X_train, y_train, early_stopping_rounds=10, eval_metric='mlogloss', eval_set=[(X_train, y_train), (X_test, y_test)], verbose=True) print(clf.best_score_) print(clf.best_params_) # Save the model as a pickle in a file joblib.dump(clf.best_estimator_, './model/clf.pkl') # Load the model from the file clf_from_joblib = joblib.load('./model/clf.pkl') # Use the loaded model to make predictions clf_predictions = clf_from_joblib.predict(X_test) # evaluate predictions accuracy = accuracy_score(y_test, clf_predictions) print("Accuracy: %.2f%%" % (accuracy * 100.0)) ``` # TEST ``` testData = pd.read_csv("testData.csv") testData # Use the loaded model to make predictions test_predictions = RFC_model.predict(testData.values) test_predictions # Use the loaded model to make predictions probability test_predictions = RFC_model.predict_proba(testData.values) test_predictions result = pd.DataFrame(test_predictions, columns=['c1','c2','c3','c4','c5','c6','c7','c8','c9']) result result.to_csv('./results/test-submission-RFC.csv', index 
= False) ``` ## REFERENCES 1- https://xgboost.readthedocs.io/en/latest/python/python_api.html#module-xgboost.sklearn 2- https://github.com/dmlc/xgboost/blob/master/demo/guide-python/sklearn_examples.py 3- https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html 4- https://www.datacamp.com/community/tutorials/xgboost-in-python 5- https://scikit-learn.org/stable/modules/ensemble.html#voting-classifier 6- https://www.datacamp.com/community/tutorials/random-forests-classifier-python?utm_source=adwords_ppc&utm_campaignid=1455363063&utm_adgroupid=65083631748&utm_device=c&utm_keyword=&utm_matchtype=b&utm_network=g&utm_adpostion=&utm_creative=332602034364&utm_targetid=aud-392016246653:dsa-429603003980&utm_loc_interest_ms=&utm_loc_physical_ms=1012782&gclid=EAIaIQobChMI49HTjNO06wIVB-ztCh23nwMLEAAYASAAEgKKEvD_BwE
<img src="../../../images/qiskit_header.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" align="middle"> # Accreditation protocol Accreditation Protocol (AP) is a protocol devised to characterize the reliability of noisy quantum devices.<br> Given a noisy quantum device implementing a "target" quantum circuit, AP certifies an upper-bound on the variation distance between the probability distribution of the outputs returned by the device and the ideal probability distribution. This method is based on Ferracin et al, "Accrediting outputs of noisy intermediate-scale quantum devices", https://arxiv.org/abs/1811.09709. This notebook gives an example for how to use the ignis.characterization.accreditation module. This particular example shows how to accredit the outputs of a 4-qubit quantum circuit of depth 5. All the circuits are run using the noisy Aer simulator. ``` #Import general libraries (needed for functions) import numpy as np from numpy import random import qiskit #Import Qiskit classes from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, Aer, execute from qiskit.providers.aer.noise import NoiseModel from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error #Import the accreditation functions. from qiskit.ignis.verification.accreditation import accreditationFitter from qiskit.ignis.verification.accreditation import accreditation_circuits ``` # Input to the protocol AP can accredit the outputs of a __target circuit__ that<br> 1) Takes as input $n$ qubits in the state $|{0}>$<br> 2) Ends with single-qubit measurements in the Pauli-$Z$ basis<br> 3) Is made of $m$ "bands", each band containing a round of single-qubit gates and a round of controlled-$Z$ gates.<br> The accreditation is made by employing __trap circuits__, circuits that can be efficiently simulated on a classical computer and that whose outputs are used to witness the correct functionality of the device.<br> Let's now draw a target quantum circuit! We start with a simple circuit to generate and measure 4-qubits GHZ states. ``` # Create a Quantum Register with n_qb qubits. q_reg = QuantumRegister(4, 'q') # Create a Classical Register with n_qb bits. c_reg = ClassicalRegister(4, 's') # Create a Quantum Circuit acting on the q register target_circuit = QuantumCircuit(q_reg, c_reg) target_circuit.h(0) target_circuit.h(1) target_circuit.h(2) target_circuit.h(3) target_circuit.cz(0,1) target_circuit.cz(0,2) target_circuit.cz(0,3) target_circuit.h(1) target_circuit.h(2) target_circuit.h(3) target_circuit.measure(q_reg, c_reg) target_circuit.draw(output = 'mpl') ``` # Generating accreditation circuits The function $accreditation\_circuits$ generates all the circuits required by AP, target and traps. It automatically appends random Pauli gates to the circuits (if the implementation is noisy, these random Pauli gates reduce the noise to Pauli errors ! ) <br> It also returns the list $postp\_list$ of strings required to post-process the outputs, as well as the number $v\_zero$ indicating the circuit implementing the target. This is the target circuit with randomly chosen Pauli gates: ``` v = 10 circ_list, postp_list, v_zero = accreditation_circuits(target_circuit, v) circ_list[(v_zero)%(v+1)][0].draw(output = 'mpl') ``` This is how a trap looks like: ``` circ_list[(v_zero+1)%(v+1)][0].draw(output = 'mpl') ``` # Simulate the ideal circuits Let's implement AP. We use $accreditation\_circuits$ to generate target and trap circuits. 
Then, we use the function $single\_protocol\_run$ to implement all these circuits, keeping the output of the target only if all of the traps return the correct output. ``` simulator = qiskit.Aer.get_backend('qasm_simulator') test_1 = accreditationFitter() # Create target and trap circuits with random Pauli gates circuit_list, postp_list, v_zero = accreditation_circuits(target_circuit, v) outputs_list = [] for circuit_k in range(v+1): job = execute(circuit_list[circuit_k], simulator, shots=1, memory=True) outputs_list.append([job.result().get_memory()[0]]) # Post-process the outputs and see if the protocol accepts test_1.single_protocol_run(outputs_list, postp_list, v_zero) print("Outputs of the target: ",test_1.outputs," , AP",test_1.flag,"these outputs!") ``` In the absence of noise, all traps return the expected output, therefore we always accept the output of the target.<br> To obtain an upper-bound on the variation distance on the outputs of the target circuit, we need to implement AP $d$ times, each time with ___v___ different trap circuits. ``` # Number of runs d = 20 test_2 = accreditationFitter() for run in range(d): # Create target and trap circuits with random Pauli gates circuit_list, postp_list, v_zero = accreditation_circuits(target_circuit, v) outputs_list = [] # Implement all these circuits for circuit_k in range(v+1): job = execute(circuit_list[circuit_k], simulator, shots=1, memory=True) outputs_list.append([job.result().get_memory()[0]]) # Post-process the outputs and see if the protocol accepts test_2.single_protocol_run(outputs_list, postp_list, v_zero) print("Protocol run number",run+1,", outputs of the target",test_2.flag) print('\nAfter',test_2.num_runs,'runs, AP has accepted',test_2.N_acc,'outputs!') print('\nList of accepted outputs:\n', test_2.outputs) ``` The function $bound\_variation\_distance$ calculates the upper-bound on the variation distance (VD) using $$VD\leq \frac{\varepsilon}{N_{\textrm{acc}}/d-\theta}\textrm{ ,}$$ where $\theta\in[0,1]$ is a positive number and<br> $$\varepsilon= \frac{1.7}{v+1}$$ is the maximum probability of accepting an incorrect state for the target.<br> The function $bound\_variation\_distance$ also calculates the confidence in the bound as $$1-2\textrm{exp}\big(-2\theta d^2\big)$$ ``` theta = 5/100 test_2.bound_variation_distance(theta) print("AP accepted",test_2.N_acc,"out of",test_2.num_runs,"times.") print("With confidence",test_2.confidence,"AP certifies that VD is upper-bounded by",test_2.bound) ``` # Defining the noise model We define a noise model for the simulator. We add depolarizing error probabilities to the cotrolled-$Z$ and single-qubit gates. ``` noise_model = NoiseModel() p1q = 0.002 noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u1') noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u2') noise_model.add_all_qubit_quantum_error(depolarizing_error(p1q, 1), 'u3') p2q = 0.02 noise_model.add_all_qubit_quantum_error(depolarizing_error(p2q, 2), 'cz') basis_gates = ['u1','u2','u3','cz'] ``` We then implement noisy circuits and pass their outputs to $single\_protocol\_run$. 
``` test_3 = accreditationFitter() for run in range(d): # Create target and trap circuits with random Pauli gates circuit_list, postp_list, v_zero = accreditation_circuits(target_circuit, v) outputs_list = [] # Implement all these circuits with noise for circuit_k in range(v+1): job = execute(circuit_list[circuit_k], simulator, noise_model=noise_model, basis_gates=basis_gates, shots=1, memory=True) outputs_list.append([job.result().get_memory()[0]]) # Post-process the outputs and see if the protocol accepts test_3.single_protocol_run(outputs_list, postp_list, v_zero) print("Protocol run number",run+1,", outputs of the target",test_3.flag) print("\nAP accepted",test_3.N_acc,"out of",test_3.num_runs,"times.") print('\nList of accepted outputs:\n', test_3.outputs) theta = 5/100 test_3.bound_variation_distance(theta) print("\nWith confidence",test_3.confidence,"AP certifies that VD is upper-bounded by",test_3.bound) ``` Changing the number of trap circuits per protocol run changes the upper-bound on the VD, but not the confidence.<br> What number of trap circuits will ensure the minimal upper-bound for your target circuit? ``` min_traps = 4 max_traps = 10 for num_trap_circs in range(0,max_traps-min_traps): test_4 = accreditationFitter() for run in range(d): # Create target and trap circuits with random Pauli gates circuit_list, postp_list, v_zero = accreditation_circuits(target_circuit, num_trap_circs+min_traps) outputs_list = [] # Implement all these circuits with noise for circuit_k in range(num_trap_circs+min_traps+1): job = execute(circuit_list[circuit_k], simulator, noise_model=noise_model, basis_gates=basis_gates, shots=1, memory=True) outputs_list.append([job.result().get_memory()[0]]) # Post-process the outputs and see if the protocol accepts test_4.single_protocol_run(outputs_list, postp_list, v_zero) print("\nWith", num_trap_circs+min_traps, "traps, AP accepted", test_4.N_acc, "out of", test_4.num_runs, "times.") test_4.bound_variation_distance(theta) print("With confidence", test_4.confidence, "AP with", num_trap_circs+min_traps, "traps certifies that VD is upper-bounded by", test_4.bound) ```
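As a quick cross-check of the numbers printed by `bound_variation_distance`, the bound arithmetic quoted above can be evaluated on its own. This is only a sketch that re-implements the two formulas as stated in this notebook ($\varepsilon = 1.7/(v+1)$ and confidence $1-2\exp(-2\theta d^2)$); the function name and the example values are illustrative and not part of qiskit-ignis.

```
import numpy as np

def variation_distance_bound(n_acc, d, v, theta):
    """Evaluate the bound quoted above: VD <= eps / (N_acc/d - theta),
    with eps = 1.7/(v+1) and confidence 1 - 2*exp(-2*theta*d**2)."""
    eps = 1.7 / (v + 1)
    denom = n_acc / d - theta
    bound = eps / denom if denom > 0 else np.inf  # bound is vacuous if too few runs are accepted
    confidence = 1 - 2 * np.exp(-2 * theta * d ** 2)
    return bound, confidence

# Example with the values used in this notebook (d = 20 runs, v = 10 traps, all runs accepted)
bound, confidence = variation_distance_bound(n_acc=20, d=20, v=10, theta=0.05)
print("VD <= {0:.3f} with confidence {1:.3f}".format(bound, confidence))
```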
# **Behavioral Cloning** --- **Behavioral Cloning Project** The goals / steps of this project are the following: * Use the simulator to collect data of good driving behavior * Build, a convolution neural network in Keras that predicts steering angles from images * Train and validate the model with a training and validation set * Test that the model successfully drives around track one without leaving the road * Summarize the results with a written report ## Rubric Points ### Here I will consider the [rubric points](https://review.udacity.com/#!/rubrics/432/view) individually and describe how I addressed each point in my implementation. --- ### Files Submitted & Code Quality #### 1. Submission includes all required files and can be used to run the simulator in autonomous mode My project includes the following files: * `model.ipynb` containing the script to create and train the model * `drive.py` for driving the car in autonomous mode * `model.h5` containing a trained convolution neural network * `writeup_report.md` summarizing the results #### 2. Submission includes functional code Using the Udacity provided simulator and my drive.py file, the car can be driven autonomously around the track by executing ```sh python drive.py model.h5 ``` #### 3. Submission code is usable and readable The `model.ipynb` file contains the code for training and saving the convolution neural network. The file shows the pipeline I used for training and validating the model, and it contains comments to explain how the code works. ### Model Architecture and Training Strategy #### 1. An appropriate model architecture has been employed My model consists of a convolution neural network with 5x5 filter sizes and depths between 6 and 120 (`model.ipynb` file, cell 12) The model includes LeakyReLU layers to introduce nonlinearity, and the data is normalized in the model using a Keras lambda layer. #### 2. Attempts to reduce overfitting in the model Overfitting is controlled by following steps. * In each convolution layer, I have used *Max Pooling*. It helps in reducing the dimension as well as makes neurans to perform better. * Using data augumentation techniques, I have distributed the training data across all the output class. * In addition to that, the model was trained and validated on different data sets to ensure that the model was not overfitting (code line 10-16). The model was tested by running it through the simulator and ensuring that the vehicle could stay on the track. #### 3. Model parameter tuning * `Learning rate` : The model used an adam optimizer, so the learning rate was not tuned manually (`model.ipynb` file, cell 12, line 40). #### 4. Appropriate training data I have used Udacity training data. There were **Three** images(Center,Left,Right) for every frame and steering angle for center image. We have *8036* frame details. So, totally there were *24108* images given as input. ***Data Distribution Of Given Input Data*** ![alt_text](./Images/RawInput_Distribution.jpg) From the above graph, it is observed that we didn't have same amount of data in each output classes. we can achieve equal distribution by two ways. 1. We can improve the samples for output classes which are lower 2. Reducing the samples which has large amount of data I chose the second way. As most of the data has roughly 300 images in an average, increaing these output classes is not the good choice. For the given problem, we don't require these much data also. So, I have selected only maximum of 200 images per output class. 
Additionaly, I have skipped output classes which has less then 10 images. ***Data Distribution Of Selected Data*** ![alt_text](./Images/SelectedInput_DD.jpg) The above data is comparatively well distributed. Agin, this is not evenly distributed in all output classes. As we don't take large turn everytime. Mostly we drive straightly with slight turn. So, these selected data will work without any issue. I have used a combination of central, left and right images for training my model. This will help in recovering from the left and right sides of the road. For details about how I created the training data, see the next section. ### Model Architecture and Training Strategy #### 1. Solution Design Approach I have divided the problem into Data Augumentation & Building the neural network. For each change in the data set, I will check my model on different model architecture. From each set, minimum one model will be selected for further improvement. ***SET 1 :*** ![alt_text](./Images/SET_1_Summary.jpg) ***SET 2*** ![alt](./Images/SET_2_Summary.jpg) ***SET 3*** ![alt](./Images/SET_3_Summary.jpg) ***SET 4*** ![alt](./Images/SET_4_Summary.jpg) > **Note :** For each SET seperate python notebook is used. These notebooks are also uploaded with results for reference.(For example : `SET 1` uses `1_Mode_Training.ipynb` respectively) #### 2. Final Model Architecture My final model consists of **Four** hidden layers.( 3 Convolution layer followed by one dense layer) | Layer | Description | |:---------------------:|:---------------------------------------------:| | Input | 160x320x3 | | Resizing Image | 85x320x3 | | Convolution 5x5 | 1x1 stride, valid padding, outputs 81x316x6 | | Leaky ReLU Activation | | Max pooling | 2x2 stride, 2x2 filter, outputs 40x158x6 | | Convolution 5x5 | 1x1 stride, valid padding, outputs 36x154x36 | | Leaky ReLU Activation | | Max pooling | 2x2 stride, 2x2 filter, outputs 18x77x3 | | Convolution 5x5 | 1x1 stride, valid padding, outputs 14x73x120 | | Leaky ReLU Activation | | Max pooling | 2x2 stride, 2x2 filter, outputs 7x36x120 | | Fully connected#1 | 30240 input, 256 output | | Fully connected#2 | 256 input, 1 output | #### 3. Creation of the Training Set & Training Process **Training Set Selection :** As discussed in the previous section, apart from 24108 training images 10818 images selected. Among them 20 percent of the images are used for validation. The input data is distributed among all output classes to avoid biased output. The whole data set is shuffled to get random classes in each batch. ![alt](./Images/Y_Distribution.jpg) **Data Augumentation :** The upper portion of the image not required for detecting the lanes. so, we are slicing the images in the following way. This will reduce the computation cost as well as increase the accuracy. > Input Image : >> ![alt](./Images/center_2016_12_01_13_30_48_287.jpg) > Output Image : >> ![alt](./Images/Cropped.jpg) The cropped image is normalized using the below formulae: ```python >> x=(x/255.0)-0.5 ``` **Training Process :** * Among 80% of input data is taken for training and remaining 20% for validation. * A batch of 32 augumented image is evaluated by my model * The loss will be calculated using `Mean Square Error` function. * Depending upon the loss, `Adam optimizer` will update the weights by back propogation algorithm * This process is continued for all the batches in our training data. Then, the model is evaluated against the validation data The whole training process is repeated for 20 cycle (Epochs). 
I plot the loss against the number of epochs to understand the behaviour of my model:

> ![alt](./Graphs/4_Model_5.png)
>> Red line: validation loss
>> Blue line: training loss
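For reference, below is a minimal Keras sketch of the final architecture described in the table above. The layer sizes follow that table and the normalization follows the formula quoted in the write-up; the exact cropping split (60 rows from the top, 15 from the bottom) and the use of `tensorflow.keras` are assumptions, since `model.ipynb` itself is not reproduced here.

```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Cropping2D, Lambda, Conv2D, LeakyReLU,
                                     MaxPooling2D, Flatten, Dense)

model = Sequential([
    Cropping2D(cropping=((60, 15), (0, 0)), input_shape=(160, 320, 3)),  # 160x320x3 -> 85x320x3 (assumed split)
    Lambda(lambda x: (x / 255.0) - 0.5),                                 # normalization from the write-up
    Conv2D(6, (5, 5)), LeakyReLU(),                                      # -> 81x316x6
    MaxPooling2D((2, 2)),                                                # -> 40x158x6
    Conv2D(36, (5, 5)), LeakyReLU(),                                     # -> 36x154x36
    MaxPooling2D((2, 2)),                                                # -> 18x77x36
    Conv2D(120, (5, 5)), LeakyReLU(),                                    # -> 14x73x120
    MaxPooling2D((2, 2)),                                                # -> 7x36x120
    Flatten(),                                                           # -> 30240
    Dense(256),                                                          # Fully connected #1
    Dense(1),                                                            # Fully connected #2: steering angle
])
model.compile(optimizer='adam', loss='mse')
model.summary()
```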
<a href="https://colab.research.google.com/github/ashraj98/rbf-sin-approx/blob/main/Lab2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Lab 2 ### Ashwin Rajgopal Start off by importing numpy for matrix math, random for random ordering of samples and pyplot for plotting results. ``` import matplotlib.pyplot as plt import numpy as np import random ``` #### Creating the samples X variables can be generated by using `np.random.rand` to generate a array of random numbers between 0 and 1, which is what is required. The same can be done to generate the noise, but then it needs to be divided by 5 and subtracted by .1 to fit the interval [-0.1, 0.1]. The expected values can then by generated applying the function to the inputs and adding the noise. For plotting the original function that will be approximated by the RBF network, `linspace` can be used to generate equally spaced inputs to make a smooth plot of the function. ``` X = np.random.rand(1, 75).flatten() noise = np.random.rand(1, 75).flatten() / 5 - 0.1 D = 0.5 + 0.4 * np.sin(2 * np.pi * X) + noise func_X = np.linspace(0, 1, 100) func_Y = 0.5 + 0.4 * np.sin(2 * np.pi * func_X) ``` #### K-means algorithm This function finds the centers and variances given uncategorized inputs and number of clusters. It also takes in a flag to determined whether to output an averaged variance for all clusters or use specialized variances for each cluster. The algorithm begins by choosing random points from the inputs as the center of the clusters, so that every cluster will have at least point assigned to it. Then the algorithm repetitively assigns points to each cluster using Euclidean distance and averages the assigned points for each cluster to find the new centers. The new centers are compared with the old centers, and if they are the same, the algorithm is stopped. Then using the last assignment of the points, the variance for each cluster is calculated. If a cluster does not have more than one point assigned to it, it is skipped. If `use_same_width=True`, then an normalized variance is used for all clusters. The maximum distance is used by using an outer subtraction between the centers array and itself, and then it is divided by `sqrt(2 * # of clusters)`. If `use_same_width=False`, then for all clusters that had only one point assigned to it, the average of all the other variances is used as the variance for these clusters. ``` def kmeans(clusters=2, X=X, use_same_width=False): centers = np.random.choice(X, clusters, replace=False) diff = 1 while diff != 0: assigned = [[] for i in range(clusters)] for x in X: assigned_center = np.argmin(np.abs(centers - x)) assigned[assigned_center].append(x.item()) new_centers = np.array([np.average(points) for points in assigned]) diff = np.sum(np.abs(new_centers - centers)) centers = new_centers variances = [] no_var = [] for i in range(clusters): if len(assigned[i]) < 2: no_var.append(i) else: variances.append(np.var(assigned[i])) if use_same_width: d_max = np.max(np.abs(np.subtract.outer(centers, centers))) avg_var = d_max / np.sqrt(2 * clusters) variances = [avg_var for i in range(clusters)] else: if len(no_var) > 0: avg_var = np.average(variances) for i in no_var: variances.insert(i, avg_var) return (centers, np.array(variances)) ``` The function below defines the gaussian function. Given the centers and variances for all clusters, it calculates the output for all gaussians at once for a single input. 
``` def gaussian(centers, variances, x): return np.exp((-1 / (2 * variances)) * ((centers - x) ** 2)) ``` #### Training the RBF Network For each gaussian, a random weight is generated in the interval [-1, 1]. The same happens for a bias term as well. Then, for the number of epochs specified, the algorithm calculates the gaussian outputs for each input, and then takes the weighted sum and adds the bias to get the output of the network. Then the LMS algorithm is applied. Afterwards, the `linspace`d inputs are used to generate the outputs, which allows for plotting the approximating function. Then both the approximated function (red) and the approximating function (blue) are plot, as well as the training data with the noise. ``` def train(centers, variances, lr, epochs=100): num_centers = len(centers) W = np.random.rand(1, num_centers) * 2 - 1 b = np.random.rand(1, 1) * 2 - 1 order = list(range(len(X))) for i in range(epochs): random.shuffle(order) for j in order: x = X[j] d = D[j] G = gaussian(centers, variances, x) y = W.dot(G) + b e = d - y W += lr * e * G.reshape(1, num_centers) b += lr * e est_Y = [] for x in func_X: G = gaussian(centers, variances, x) y = W.dot(G) + b est_Y.append(y.item()) est_Y = np.array(est_Y) fig = plt.figure() ax = plt.axes() ax.scatter(X, D, label='Sampled') ax.plot(func_X, est_Y, '-b', label='Approximate') ax.plot(func_X, func_Y, '-r', label='Original') plt.title(f'Bases = ${num_centers}, Learning Rate = ${lr}') plt.xlabel('x') plt.ylabel('y') plt.legend(loc="upper right") ``` The learning rates and number of bases that needed to be tested are defined, and then K-means is run for each combination of base and learning rate. The output of the K-means is used as the input for the RBF training algorithm, and the results are plotted. ``` bases = [2, 4, 7, 11, 16] learning_rates = [.01, .02] for base in bases: for lr in learning_rates: centers, variances = kmeans(base, X) train(centers=centers, variances=variances, lr=lr) ``` The best function approximates seem to be with 2 bases. As soon as the bases are increased to 4, overfitting starts to occur, with 16 bases having extreme overfitting. Increasing the learning rate seems to decrease the training error but in some cases increases the overfitting of the data. Run the same combinations or number of bases and learning rate again, but this time using the same Gaussian width for all bases. ``` for base in bases: for lr in learning_rates: centers, variances = kmeans(base, X, use_same_width=True) train(centers=centers, variances=variances, lr=lr, epochs=100) ``` Using the same width for each base seems to drastically decrease overfitting. Even with 16 bases, the approximating function is very smooth. However, after 100 epochs, the training error is still very high, and the original function is not well approximated. After running the training with significantly more epochs (10,000 to 100,000), the function becomes well approximated for large number of bases. But for smaller number of bases like 2, the approximating function is still not close to the approximated function, whereas when using different Gaussian widths, 2 bases was the best approximator of the original function. So, using the same widths, the training takes significantly longer and requires many bases to be used to approximate the original function well.
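The longer runs mentioned above can be reproduced with the functions already defined in this notebook; the sketch below re-runs the equal-width configuration for 10,000 epochs. The learning rate of 0.02 is an assumption here, since the text does not say which rate was used for the long runs.

```
# Re-run the equal-width configuration with many more epochs, as discussed above.
# Warning: 10,000 epochs over all base counts takes a while to run.
for base in bases:
    centers, variances = kmeans(base, X, use_same_width=True)
    train(centers=centers, variances=variances, lr=0.02, epochs=10000)
```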
``` import csv import numpy as np posts = [] path = 'data/Constraint_Train.csv' num_long_posts = 0 num_real = 0 with open(path, newline='', encoding='utf-8') as csvfile: spamreader = csv.reader(csvfile, delimiter=',', quotechar='\"') spamreader.__next__() # Skip header row for row in spamreader: if len(row[1]) > 280: num_long_posts += 1 else: row[1] = row[1].replace(',', ' , ')\ .replace("'", " ' ")\ .replace('.', ' . ')\ .replace('!', ' ! ')\ .replace('?', ' ? ')\ .replace(';', ' ; ') words = row[1].split() num_real = num_real + 1 if row[2] == 'real' else num_real sentence = [word.lower() for word in words] posts.append(sentence) vocab = set([w for s in posts for w in s]) train = posts[:] print("First 10 posts in training set: \n", train[:10]) from collections import Counter print("- Number of datapoints in training set: ", len(posts)) real_percentage = num_real * 100 / len(posts) print("- Split of training data between real and fake: ", real_percentage, \ "% real, ", 100 - real_percentage, "% fake") lengths = [len(post) for post in train] print("- Average post length in train:", np.mean(lengths)) chars = [] for post in train: length = 0 for word in post: length += len(word) chars.append(length) print("- Average num charachters in post in train: ", np.mean(chars)) print("- Num posts removed because they were longer than 280 charachters: ", num_long_posts) words = [word for post in train for word in post] cnt = Counter(words) print("- Number of unique words in train: ", len(cnt.keys())) print("- 10 most common words in train: ", [(i, round(cnt[i] / len(words) * 100.0, 2)) for i, ntount in cnt.most_common(10)]) tot = np.sum(list(cnt.values())) print("- Total words in train:", tot) import csv import numpy as np posts = [] path = 'data/Constraint_Val.csv' num_long_posts = 0 num_real = 0 with open(path, newline='', encoding='utf-8') as csvfile: spamreader = csv.reader(csvfile, delimiter=',', quotechar='\"') spamreader.__next__() # Skip header row for row in spamreader: if len(row[1]) > 280: num_long_posts += 1 else: row[1] = row[1].replace(',', ' , ')\ .replace("'", " ' ")\ .replace('.', ' . ')\ .replace('!', ' ! ')\ .replace('?', ' ? 
')\ .replace(';', ' ; ') words = row[1].split() num_real = num_real + 1 if row[2] == 'real' else num_real sentence = [word.lower() for word in words] posts.append(sentence) vocab = set([w for s in posts for w in s]) val = posts[:] print("First 10 posts in validation set: \n", val[:10]) from collections import Counter print("- Number of datapoints in validation set: ", len(posts)) real_percentage = num_real * 100 / len(posts) print("- Split of training data between real and fake: ", real_percentage, \ "% real, ", 100 - real_percentage, "% fake") lengths = [len(post) for post in val] print("- Average post length in val:", np.mean(lengths)) chars = [] for post in val: length = 0 for word in post: length += len(word) chars.append(length) print("- Average num charachters in post in val: ", np.mean(chars)) print("- Num posts removed because they were longer than 280 charachters: ", num_long_posts) words = [word for post in val for word in post] cnt = Counter(words) print("- Number of unique words in val: ", len(cnt.keys())) print("- 10 most common words in val: ", [(i, round(cnt[i] / len(words) * 100.0, 2)) for i, ntount in cnt.most_common(10)]) tot = np.sum(list(cnt.values())) print("- Total words in val:", tot) import csv import numpy as np posts = [] path = 'data/english_test_with_labels.csv' num_long_posts = 0 num_real = 0 with open(path, newline='', encoding='utf-8') as csvfile: spamreader = csv.reader(csvfile, delimiter=',', quotechar='\"') spamreader.__next__() # Skip header row for row in spamreader: if len(row[1]) > 280: num_long_posts += 1 else: row[1] = row[1].replace(',', ' , ')\ .replace("'", " ' ")\ .replace('.', ' . ')\ .replace('!', ' ! ')\ .replace('?', ' ? ')\ .replace(';', ' ; ') words = row[1].split() num_real = num_real + 1 if row[2] == 'real' else num_real sentence = [word.lower() for word in words] posts.append(sentence) vocab = set([w for s in posts for w in s]) test = posts[:] print("First 10 posts in test set: \n", test[:10]) from collections import Counter print("- Number of datapoints in test set: ", len(posts)) real_percentage = num_real * 100 / len(posts) print("- Split of training data between real and fake: ", real_percentage, \ "% real, ", 100 - real_percentage, "% fake") lengths = [len(post) for post in test] print("- Average post length in test:", np.mean(lengths)) chars = [] for post in test: length = 0 for word in post: length += len(word) chars.append(length) print("- Average num charachters in post in test: ", np.mean(chars)) print("- Num posts removed because they were longer than 280 charachters: ", num_long_posts) words = [word for post in test for word in post] cnt = Counter(words) print("- Number of unique words in test: ", len(cnt.keys())) print("- 10 most common words in test: ", [(i, round(cnt[i] / len(words) * 100.0, 2)) for i, ntount in cnt.most_common(10)]) tot = np.sum(list(cnt.values())) print("- Total words in test:", tot) ```
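The three cells above repeat the same loading, tokenization, and filtering logic and differ only in the CSV path. Below is a sketch of a single helper that could replace the duplicated blocks; it assumes the same column layout as above (text in column 1, label in column 2), and the helper name `load_posts` is illustrative.

```
import csv

def load_posts(path, max_chars=280):
    """Load one Constraint CSV split, tokenize as above, and drop over-length posts."""
    posts, num_real, num_long = [], 0, 0
    with open(path, newline='', encoding='utf-8') as csvfile:
        reader = csv.reader(csvfile, delimiter=',', quotechar='"')
        next(reader)  # skip header row
        for row in reader:
            if len(row[1]) > max_chars:
                num_long += 1
                continue
            text = row[1]
            for p in [',', "'", '.', '!', '?', ';']:
                text = text.replace(p, ' ' + p + ' ')
            posts.append([word.lower() for word in text.split()])
            num_real += row[2] == 'real'
    return posts, num_real, num_long

train, num_real_train, long_train = load_posts('data/Constraint_Train.csv')
val, num_real_val, long_val = load_posts('data/Constraint_Val.csv')
test, num_real_test, long_test = load_posts('data/english_test_with_labels.csv')
```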
To finish, check out: http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1992AJ....104.2213L&amp;data_type=PDF_HIGH&amp;whole_paper=YES&amp;type=PRINTER&amp;filetype=.pdf

```
# Third-party
from astropy.io import ascii, fits
import astropy.coordinates as coord
import astropy.units as u
from astropy.constants import c
import matplotlib as mpl
import matplotlib.pyplot as pl
import numpy as np
from scipy.interpolate import interp1d

pl.style.use('apw-notebook')
%matplotlib inline
# pl.style.use('classic')
# %matplotlib notebook

data_files = ["../data/apVisit-r5-6994-56770-261.fits",
              "../data/apVisit-r5-6994-56794-177.fits"]
model_file = "../data/apStar-r5-2M00004994+1621552.fits"

min_wvln = 15329
max_wvln = 15359

def load_file(filename, chip):
    hdulist1 = fits.open(filename)
    wvln = hdulist1[4].data[chip]
    ix = (wvln >= min_wvln) & (wvln <= max_wvln)
    wvln = wvln[ix]
    flux = hdulist1[1].data[chip,ix]
    flux_err = hdulist1[2].data[chip,ix]
    return {'wvln': wvln, 'flux': flux, 'flux_err': flux_err}

def load_model_file(filename):
    hdulist1 = fits.open(filename)
    flux = hdulist1[1].data[0]
    flux_err = hdulist1[2].data[0]
    wvln = 10**(hdulist1[0].header['CRVAL1'] + np.arange(flux.size) * hdulist1[0].header['CDELT1'])
    # ix = (wvln >= min_wvln) & (wvln <= max_wvln)
    ix = (wvln < 15750) & (wvln > 15150)  # HACK: magic numbers
    return {'wvln': wvln[ix], 'flux': flux[ix], 'flux_err': flux_err[ix]}

d = load_file(data_files[0], chip=2)  # quick look at the first visit file
d['wvln'].shape

chip = 2
fig,ax = pl.subplots(1,1,figsize=(12,6))

for fn in data_files:
    d = load_file(fn, chip=chip)
    ax.plot(d['wvln'], d['flux'], drawstyle='steps', marker=None)

ref_spec = load_model_file(model_file)
ax.plot(ref_spec['wvln'], 3.2*ref_spec['flux'],
        drawstyle='steps', marker=None, lw=2.)  # HACK: scale up

# _d = 175
# ax.set_xlim(15150.+_d, 15175.+_d)
# ax.set_ylim(10000, 20000)

all_spectra = [load_file(f, chip=2) for f in data_files]

ref_spec['interp'] = interp1d(ref_spec['wvln'], ref_spec['flux'],
                              kind='cubic', bounds_error=False)

def get_design_matrix(data, ref_spec, v1, v2):
    """
    Note: Positive velocity is a redshift.
""" X = np.ones((3, data['wvln'].shape[0])) X[1] = ref_spec['interp'](data['wvln'] * (1 + v1/c)) # this is only good to first order in (v/c) X[2] = ref_spec['interp'](data['wvln'] * (1 + v2/c)) return X def get_optimal_chisq(data, ref_spec, v1, v2): X = get_design_matrix(data, ref_spec, v1, v2) return np.linalg.solve( X.dot(X.T), X.dot(data['flux']) ) spec_i = 1 v1 = 35 * u.km/u.s v2 = -5 * u.km/u.s X = get_design_matrix(all_spectra[spec_i], ref_spec, v1, v2) opt_pars = get_optimal_chisq(all_spectra[spec_i], ref_spec, v1, v2) opt_pars def make_synthetic_spectrum(X, pars): return X.T.dot(pars) def compute_chisq(data, X, opt_pars): synth_spec = make_synthetic_spectrum(X, opt_pars) return -np.sum((synth_spec - data['flux'])**2) # opt_pars = np.array([1.1E+4, 0.5, 0.5]) synth_spec = make_synthetic_spectrum(X, opt_pars) pl.plot(all_spectra[spec_i]['wvln'], all_spectra[spec_i]['flux'], marker=None, drawstyle='steps') pl.plot(all_spectra[spec_i]['wvln'], synth_spec, marker=None, drawstyle='steps') _v1_grid = np.linspace(25, 45, 32) _v2_grid = np.linspace(-15, 5, 32) shp = (_v1_grid.size, _v2_grid.size) v_grid = np.vstack(map(np.ravel, np.meshgrid(_v1_grid, _v2_grid))).T v_grid.shape chisq = np.zeros(v_grid.shape[0]) for i in range(v_grid.shape[0]): v1,v2 = v_grid[i] opt_pars = get_optimal_chisq(all_spectra[spec_i], ref_spec, v1*u.km/u.s, v2*u.km/u.s) chisq[i] = compute_chisq(all_spectra[spec_i], X, opt_pars) fig,ax = pl.subplots(1,1,figsize=(9,8)) cb = ax.pcolormesh(v_grid[:,0].reshape(shp), v_grid[:,1].reshape(shp), chisq.reshape(shp), cmap='magma') fig.colorbar(cb) fig,ax = pl.subplots(1,1,figsize=(9,8)) cb = ax.pcolormesh(v_grid[:,0].reshape(shp), v_grid[:,1].reshape(shp), np.exp(chisq-chisq.max()).reshape(shp), cmap='magma') fig.colorbar(cb) fig,ax = pl.subplots(1,1,figsize=(9,8)) cb = ax.pcolormesh(v_grid[:,0].reshape(shp), v_grid[:,1].reshape(shp), chisq.reshape(shp), cmap='magma') fig.colorbar(cb) fig,ax = pl.subplots(1,1,figsize=(9,8)) cb = ax.pcolormesh(v_grid[:,0].reshape(shp), v_grid[:,1].reshape(shp), np.exp(chisq-chisq.max()).reshape(shp), cmap='magma') fig.colorbar(cb) ``` --- try using levmar to optimize ``` from scipy.optimize import leastsq def errfunc(pars, data_spec, ref_spec): v1,v2,a,b,c = pars X = get_design_matrix(data_spec, ref_spec, v1*u.km/u.s, v2*u.km/u.s) synth_spec = make_synthetic_spectrum(X, [a,b,c]) return (synth_spec - data_spec['flux']) levmar_opt_pars,ier = leastsq(errfunc, x0=[35,-5]+opt_pars.tolist(), args=(all_spectra[0], ref_spec)) levmar_opt_pars data_spec = all_spectra[0] X = get_design_matrix(data_spec, ref_spec, levmar_opt_pars[0]*u.km/u.s, levmar_opt_pars[1]*u.km/u.s) synth_spec = make_synthetic_spectrum(X, levmar_opt_pars[2:]) pl.plot(data_spec['wvln'], data_spec['flux'], marker=None, drawstyle='steps') pl.plot(data_spec['wvln'], synth_spec, marker=None, drawstyle='steps') ```
# Final Checks for model ``` import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt df = pd.read_csv(f"D:/Docs/train_1.csv", encoding='mac_roman') ``` ## 1. Use ONLY compliance available columns ``` df = df[df['compliance'].notna()] df.shape df['fine_amount'] = df['fine_amount'].fillna(0) df.shape df['compliance'].value_counts() ``` ## 2. Build the actual model ``` from sklearn.linear_model import LogisticRegression from sklearn.model_selection import train_test_split feature_names_tickets = ['ticket_id', 'fine_amount'] X_tickets = df[feature_names_tickets] y_tickets = df['compliance'] #Test size is chosen to get X_test value of 61,001 as the same is provided test data X_train, X_test, y_train, y_test = train_test_split(X_tickets, y_tickets, test_size = 0.38153900, random_state = 0) clf = LogisticRegression(C=100).fit(X_train, y_train) print(X_train.shape) print(X_test.shape) ``` ## 3. Apply GridSearchCV ``` from sklearn.model_selection import GridSearchCV param_grid = {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000] } grid_search = GridSearchCV(estimator = clf, param_grid = param_grid, scoring = 'accuracy', cv = 5, verbose=0) grid_search.fit(X_train, y_train) print('Best coring:\n Best C: {}'.format(grid_search.best_score_)) #Fit based on new model now clf_best = LogisticRegression(C = 0.92).fit(X_train, y_train) ``` ## 7. Check ROC / AUC ``` # First we need to load our test dataset df1 = pd.read_csv(f"D:/Docs/test_1.csv", encoding='mac_roman') df1['fine_amount'] = df1['fine_amount'].fillna(0) df1.shape feature_names_test = ['ticket_id', 'fine_amount'] X_test_new = df1[feature_names_test] print(X_test.shape) print(X_test_new.shape) from sklearn.metrics import roc_curve, auc y_score_lr = clf_best.decision_function(X_test_new) fpr_lr, tpr_lr, _ = roc_curve(y_test, y_score_lr) roc_auc_lr = auc(fpr_lr, tpr_lr) plt.figure() plt.xlim([-0.01, 1.00]) plt.ylim([-0.01, 1.01]) plt.plot(fpr_lr, tpr_lr, lw=3, label='LogRegr ROC curve (area = {:0.2f})'.format(roc_auc_lr)) plt.xlabel('False Positive Rate', fontsize=16) plt.ylabel('True Positive Rate', fontsize=16) plt.title('ROC curve', fontsize=16) plt.legend(loc='lower right', fontsize=13) plt.plot([0, 1], [0, 1], color='red', lw=3, linestyle='--') plt.axes().set_aspect('equal') plt.show() score = clf.score(X_test_new, y_test) print(score) predictions = clf.predict(X_test_new) predictions.shape print(predictions.sum()) pred_values = pd.DataFrame(predictions, columns='Pred') pred_values.to_csv('result_pred.csv') ```
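A note on the grid search above: because `GridSearchCV` refits the best estimator on the full training set by default (`refit=True`), its result can be reused directly rather than hard-coding a value of `C` afterwards. A sketch using standard `GridSearchCV` attributes:

```
print("Best C found:", grid_search.best_params_['C'])
print("Best cross-validated accuracy:", grid_search.best_score_)

# The refitted best estimator can be used in place of a manually chosen C
clf_best_cv = grid_search.best_estimator_
y_score_cv = clf_best_cv.decision_function(X_test_new)
```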
``` BRANCH = 'main' """ You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab. Instructions for setting up Colab are as follows: 1. Open a new Python 3 notebook. 2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL) 3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator) 4. Run this cell to set up dependencies. """ # If you're using Google Colab and not running locally, run this cell # install NeMo BRANCH = 'main' !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[nlp] import os import wget from nemo.collections import nlp as nemo_nlp from nemo.collections import common as nemo_common from omegaconf import OmegaConf ``` # Tokenizers Background For Natural Language Processing, tokenization is an essential part of data preprocessing. It is the process of splitting a string into a list of tokens. One can think of token as parts like a word is a token in a sentence. Depending on the application, different tokenizers are more suitable than others. For example, a WordTokenizer that splits the string on any whitespace, would tokenize the following string "My first program, Hello World." -> ["My", "first", "program,", "Hello", "World."] To turn the tokens into numerical model input, the standard method is to use a vocabulary and one-hot vectors for [word embeddings](https://en.wikipedia.org/wiki/Word_embedding). If a token appears in the vocabulary, its index is returned, if not the index of the unknown token is returned to mitigate out-of-vocabulary (OOV). # Tokenizers in NeMo In NeMo, we support the most used tokenization algorithms. We offer a wrapper around [Hugging Faces's AutoTokenizer](https://huggingface.co/transformers/model_doc/auto.html#autotokenizer) - a factory class that gives access to all Hugging Face tokenizers. This includes particularly all BERT-like model tokenizers, such as BertTokenizer, AlbertTokenizer, RobertaTokenizer, GPT2Tokenizer. Apart from that, we also support other tokenizers such as WordTokenizer, CharTokenizer, and [Google's SentencePieceTokenizer](https://github.com/google/sentencepiece). We make sure that all tokenizers are compatible with BERT-like models, e.g. BERT, Roberta, Albert, and Megatron. For that, we provide a high-level user API `get_tokenizer()`, which allows the user to instantiate a tokenizer model with only four input arguments: * `tokenizer_name: str` * `tokenizer_model: Optional[str] = None` * `vocab_file: Optional[str] = None` * `special_tokens: Optional[Dict[str, str]] = None` Hugging Face and Megatron tokenizers (which uses Hugging Face underneath) can be automatically instantiated by only `tokenizer_name`, which downloads the corresponding `vocab_file` from the internet. For SentencePieceTokenizer, WordTokenizer, and CharTokenizers `tokenizer_model` or/and `vocab_file` can be generated offline in advance using [`scripts/tokenizers/process_asr_text_tokenizer.py`](https://github.com/NVIDIA/NeMo/blob/stable/scripts/tokenizers/process_asr_text_tokenizer.py) The tokenizers in NeMo are designed to be used interchangeably, especially when used in combination with a BERT-based model. 
Let's take a look at the list of available tokenizers: ``` nemo_nlp.modules.get_tokenizer_list() ``` # Hugging Face AutoTokenizer ``` # instantiate tokenizer wrapper using pretrained model name only tokenizer1 = nemo_nlp.modules.get_tokenizer(tokenizer_name="bert-base-cased") # the wrapper has a reference to the original HuggingFace tokenizer print(tokenizer1.tokenizer) # check vocabulary (this can be very long) print(tokenizer1.tokenizer.vocab) # show all special tokens if it has any print(tokenizer1.tokenizer.all_special_tokens) # instantiate tokenizer using custom vocabulary vocab_file = "myvocab.txt" vocab = ["he", "llo", "world"] with open(vocab_file, 'w', encoding='utf-8') as vocab_fp: vocab_fp.write("\n".join(vocab)) tokenizer2 = nemo_nlp.modules.get_tokenizer(tokenizer_name="bert-base-cased", vocab_file=vocab_file) # Since we did not overwrite special tokens they should be the same as before print(tokenizer1.tokenizer.all_special_tokens == tokenizer2.tokenizer.all_special_tokens ) ``` ## Adding Special tokens We do not recommend overwriting special tokens for Hugging Face pretrained models, since these are the commonly used default values. If a user still wants to overwrite the special tokens, specify some of the following keys: ``` special_tokens_dict = {"unk_token": "<UNK>", "sep_token": "<SEP>", "pad_token": "<PAD>", "bos_token": "<CLS>", "mask_token": "<MASK>", "eos_token": "<SEP>", "cls_token": "<CLS>"} tokenizer3 = nemo_nlp.modules.get_tokenizer(tokenizer_name="bert-base-cased", vocab_file=vocab_file, special_tokens=special_tokens_dict) # print newly set special tokens print(tokenizer3.tokenizer.all_special_tokens) # the special tokens should be different from the previous special tokens print(tokenizer3.tokenizer.all_special_tokens != tokenizer1.tokenizer.all_special_tokens ) ``` Notice, that if you specify tokens that were not previously included in the tokenizer's vocabulary file, new tokens will be added to the vocabulary file. You will see a message like this: `['<MASK>', '<CLS>', '<SEP>', '<PAD>', '<SEP>', '<CLS>', '<UNK>'] will be added to the vocabulary. Please resize your model accordingly` ``` # A safer way to add special tokens is the following: # define your model pretrained_model_name = 'bert-base-uncased' config = {"language_model": {"pretrained_model_name": pretrained_model_name}, "tokenizer": {}} omega_conf = OmegaConf.create(config) model = nemo_nlp.modules.get_lm_model(cfg=omega_conf) # define pretrained tokenizer tokenizer_default = nemo_nlp.modules.get_tokenizer(tokenizer_name=pretrained_model_name) tokenizer_default.text_to_tokens('<MY_NEW_TOKEN> and another word') ``` As you can see in the above, the tokenizer splits `<MY_NEW_TOKEN>` into subtokens. Let's add this to the special tokens to make sure the tokenizer does not split this into subtokens. ``` special_tokens = {'bos_token': '<BOS>', 'cls_token': '<CSL>', 'additional_special_tokens': ['<MY_NEW_TOKEN>', '<ANOTHER_TOKEN>']} tokenizer_default.add_special_tokens(special_tokens_dict=special_tokens) # resize your model so that the embeddings for newly added tokens are updated during training/finetuning model.resize_token_embeddings(tokenizer_default.vocab_size) # let's make sure the tokenizer doesn't split our special tokens into subtokens tokenizer_default.text_to_tokens('<MY_NEW_TOKEN> and another word') ``` Now, the model doesn't break down our special token into the subtokens. ## Megatron model tokenizer ``` # Megatron tokenizers are instances of the Hugging Face BertTokenizer. 
tokenizer4 = nemo_nlp.modules.get_tokenizer(tokenizer_name="megatron-bert-cased") ``` # Train custom tokenizer model and vocabulary from text file We use the [`scripts/tokenizers/process_asr_text_tokenizer.py`](https://github.com/NVIDIA/NeMo/blob/stable/scripts/tokenizers/process_asr_text_tokenizer.py) script to create a custom tokenizer model with its own vocabulary from an input file ``` # download tokenizer script script_file = "process_asr_text_tokenizer.py" if not os.path.exists(script_file): print('Downloading script file...') wget.download(f'https://raw.githubusercontent.com/NVIDIA/NeMo/{BRANCH}/scripts/tokenizers/process_asr_text_tokenizer.py') else: print ('Script already exists') # Let's prepare some small text data for the tokenizer data_text = "NeMo is a toolkit for creating Conversational AI applications. \ NeMo toolkit makes it possible for researchers to easily compose complex neural network architectures \ for conversational AI using reusable components - Neural Modules. \ Neural Modules are conceptual blocks of neural networks that take typed inputs and produce typed outputs. \ Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations. \ The toolkit comes with extendable collections of pre-built modules and ready-to-use models for automatic speech recognition (ASR), \ natural language processing (NLP) and text synthesis (TTS). \ Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes." # Write the text data into a file data_file="data.txt" with open(data_file, 'w') as data_fp: data_fp.write(data_text) # Some additional parameters for the tokenizer # To tokenize at unigram, char or word boundary instead of using bpe, change --spe_type accordingly. # More details see https://github.com/google/sentencepiece#train-sentencepiece-model tokenizer_spe_type = "bpe" # <-- Can be `bpe`, `unigram`, `word` or `char` vocab_size = 32 ! python process_asr_text_tokenizer.py --data_file=$data_file --data_root=. --vocab_size=$vocab_size --tokenizer=spe --spe_type=$tokenizer_spe_type # See created tokenizer model and vocabulary spe_model_dir=f"tokenizer_spe_{tokenizer_spe_type}_v{vocab_size}" ! ls $spe_model_dir ``` # Use custom tokenizer for data preprocessing ## Example: SentencePiece for BPE ``` # initialize tokenizer with created tokenizer model, which inherently includes the vocabulary and specify optional special tokens tokenizer_spe = nemo_nlp.modules.get_tokenizer(tokenizer_name="sentencepiece", tokenizer_model=spe_model_dir+"/tokenizer.model", special_tokens=special_tokens_dict) # specified special tokens are added to the vocabuary print(tokenizer_spe.vocab_size) ``` # Using any tokenizer to tokenize text into BERT compatible input ``` text="hello world" # create tokens tokenized = [tokenizer_spe.bos_token] + tokenizer_spe.text_to_tokens(text) + [tokenizer_spe.eos_token] print(tokenized) # turn token into input_ids for a neural model, such as BERTModule print(tokenizer_spe.tokens_to_ids(tokenized)) ```
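To close the loop on the BERT-compatible input example above, here is a sketch of batching several strings into fixed-length id sequences using only the tokenizer methods already demonstrated (`text_to_tokens`, `tokens_to_ids` and the special-token attributes). The fixed `max_len` and the pad id of 0 are illustrative assumptions; in practice you would use the tokenizer's own pad token id.

```
def encode_batch(tokenizer, texts, max_len=16, pad_id=0):
    batch = []
    for text in texts:
        tokens = [tokenizer.bos_token] + tokenizer.text_to_tokens(text) + [tokenizer.eos_token]
        ids = tokenizer.tokens_to_ids(tokens)[:max_len]
        ids = ids + [pad_id] * (max_len - len(ids))  # pad every sequence to the same length
        batch.append(ids)
    return batch

print(encode_batch(tokenizer_spe, ["hello world", "NeMo is a toolkit"]))
```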
# T<sub>2</sub> Ramsey Experiment This experiment serves as one of the series of experiments used to characterize a single qubit. Its purpose is to determine two of the qubit's properties: *Ramsey* or *detuning frequency* and $T_2\ast$. The rough frequency of the qubit was already determined previously. Here, we would like to measure the *detuning*, that is the difference between the qubit's precise frequency and the frequency of the rotation pulses (based on the rough frequency). This part of the experiment is called a *Ramsey Experiment*. $T_2\ast$ represents the rate of decay toward a mixed state, when the qubit is initialized to the |+⟩ state. ``` import qiskit from qiskit_experiments.library import T2Ramsey ``` The circuit used for the experiment comprises the following: 1. Hadamard gate 2. delay 3. p (phase) gate that rotates the qubit in the x-y plane 4. Hadamard gate 5. measurement During the delay time, we expect the qubit to precess about the z-axis. If the p gate and the precession offset each other perfectly, then the qubit will arrive at the |0⟩ state (after the second Hadamard gate). By varying the extension of the delays, we get a series of oscillations of the qubit state between the |0⟩ and |1⟩ states. We can draw the graph of the resulting function, and can analytically extract the desired values. ``` # set the computation units to microseconds unit = 'us' #microseconds qubit = 0 # set the desired delays delays = list(range(1, 150, 2)) # Create a T2Ramsey experiment. Print the first circuit as an example exp1 = T2Ramsey(qubit, delays, unit=unit) print(exp1.circuits()[0]) ``` We run the experiment on a simple, simulated backend, created specifically for this experiment's tutorial. ``` from qiskit_experiments.test.t2ramsey_backend import T2RamseyBackend # FakeJob is a wrapper for the backend, to give it the form of a job from qiskit_experiments.test.utils import FakeJob import qiskit_experiments.matplotlib from qiskit_experiments.matplotlib import pyplot, requires_matplotlib from qiskit_experiments.matplotlib import HAS_MATPLOTLIB conversion_factor = 1E-6 # The behavior of the backend is determined by the following parameters backend = T2RamseyBackend( p0={"a_guess":[0.5], "t2ramsey":[80.0], "f_guess":[0.02], "phi_guess":[0.0], "b_guess": [0.5]}, initial_prob_plus=[0.0], readout0to1=[0.02], readout1to0=[0.02], conversion_factor=conversion_factor, ) ``` The resulting graph will have the form: $ f(t) = a^{-t/T_2*} \cdot cos(2 \pi f t + \phi) + b $ where *t* is the delay, $T_2*$ is the decay factor, and *f* is the detuning frequency. `conversion_factor` is a scaling factor that depends on the measurement units used. It is 1E-6 here, because the unit is microseconds. ``` exp1.set_analysis_options(user_p0=None, plot=True) expdata1 = exp1.run(backend=backend, shots=2000) expdata1.block_for_results() # Wait for job/analysis to finish. # Display the figure display(expdata1.figure(0)) # T2* results: t2ramsey = expdata1.analysis_results(0).data() t2ramsey # Frequency result: frequency = expdata1.analysis_results(1).data() frequency ``` ### Providing initial user estimates The user can provide initial estimates for the parameters to help the analysis process. Because the curve is expected to decay toward $0.5$, the natural choice for parameters $A$ and $B$ is $0.5$. Varying the value of $\phi$ will shift the graph along the x-axis. Since this is not of interest to us, we can safely initialize $\phi$ to 0. In this experiment, `t2ramsey` and `f` are the parameters of interest. 
Good estimates for them are values computed in previous experiments on this qubit or a similar values computed for other qubits. ``` from qiskit_experiments.library.characterization import T2RamseyAnalysis user_p0={ "A": 0.5, "t2ramsey": 85.0, "f": 0.021, "phi": 0, "B": 0.5 } exp_with_p0 = T2Ramsey(qubit, delays, unit=unit) exp_with_p0.set_analysis_options(user_p0=user_p0, plot=True) expdata_with_p0 = exp_with_p0.run(backend=backend, shots=2000) expdata_with_p0.block_for_results() display(expdata_with_p0.figure(0)) t2ramsey = expdata_with_p0.analysis_results(0).data()["value"] frequency = expdata_with_p0.analysis_results(1).data()["value"] print("T2Ramsey:", t2ramsey) print("Fitted frequency:", frequency) ``` The units can be changed, but the output in the result is always given in seconds. The units in the backend must be adjusted accordingly. ``` from qiskit.utils import apply_prefix unit = 'ns' delays = list(range(1000, 150000, 2000)) conversion_factor = apply_prefix(1, unit) print(conversion_factor) p0={"a_guess":[0.5], "t2ramsey":[80000], "f_guess":[0.00002], "phi_guess":[0.0], "b_guess": [0.5]} backend_in_ns = T2RamseyBackend( p0=p0, initial_prob_plus=[0.0], readout0to1=[0.02], readout1to0=[0.02], conversion_factor=conversion_factor ) exp_in_ns = T2Ramsey(qubit, delays, unit=unit) exp_in_ns.set_analysis_options(user_p0=None, plot=True) expdata_in_ns = exp_in_ns.run(backend=backend_in_ns, shots=2000) expdata_in_ns.block_for_results() display(expdata_in_ns.figure(0)) t2ramsey = expdata_in_ns.analysis_results(0).data()["value"] frequency = expdata_in_ns.analysis_results(1).data()["value"] print("T2Ramsey:", t2ramsey) print("Fitted frequency:", frequency) ``` ### Adding data to an existing experiment It is possible to add data to an experiment, after the analysis of the first set of data. In the next example we add exp2 to `exp_in_ns` that we showed above. ``` more_delays = list(range(2000, 150000, 2000)) exp_new = T2Ramsey(qubit, more_delays, unit=unit) exp_new.set_analysis_options(user_p0=None, plot=True) expdata_new = exp_new.run( backend=backend_in_ns, experiment_data=expdata_in_ns, shots=2000 ) expdata_new.block_for_results() display(expdata_new.figure(1)) # The results of the second execution are indices 2 and 3 of the analysis result t2ramsey = expdata_new.analysis_results(2).data()["value"] frequency = expdata_new.analysis_results(3).data()["value"] print("T2Ramsey:", t2ramsey) print("Fitted frequency:", frequency) ```
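For reference, the decay model quoted at the top of this notebook can be written out as a plain Python function (with the exponential decay written explicitly). This is only a sketch for sanity-checking fitted values against the parameters the simulated backend was built with; it is not part of qiskit-experiments.

```
import numpy as np

def ramsey_model(t, a, t2ramsey, f, phi, b):
    """P(|0>) as a function of the delay t: a * exp(-t / T2*) * cos(2*pi*f*t + phi) + b."""
    return a * np.exp(-t / t2ramsey) * np.cos(2 * np.pi * f * t + phi) + b

# Evaluate the model with the parameters used to build backend_in_ns above (delays are in ns)
t = np.array(delays, dtype=float)
print(ramsey_model(t, a=0.5, t2ramsey=80000, f=2e-5, phi=0.0, b=0.5)[:5])
```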
### Note * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps. ``` # Dependencies and Setup import pandas as pd # File to Load (Remember to Change These) file_to_load = "Resources/purchase_data.csv" # Read Purchasing File and store into Pandas data frame purchase_data = pd.read_csv(file_to_load) purchase_data ``` ## Player Count * Display the total number of players ``` players = purchase_data["SN"].unique() player_count = pd.DataFrame([{"Unique User Count":len(players)}]) player_count ``` ## Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc. * Create a summary data frame to hold the results * Optional: give the displayed data cleaner formatting * Display the summary data frame ``` unique_items =len(purchase_data["Item Name"].unique()) total_revenue = purchase_data["Price"].sum() purchases_count = len(purchase_data["Price"]) average_price = total_revenue / purchases_count #the data frame for Analysis of the purchases purchasing_analysis = pd.DataFrame([{"Number of Unique Items":unique_items, "Average Price":average_price, "Number of Purchases":purchases_count, "Total Revenue":total_revenue}]) #change the currency of the prices and the total revenue purchasing_analysis["Average Price"] = purchasing_analysis["Average Price"].map("${0:,.2f}".format) purchasing_analysis["Total Revenue"] = purchasing_analysis["Total Revenue"].map("${0:,.2f}".format) purchasing_analysis ``` ## Gender Demographics * Percentage and Count of Male Players * Percentage and Count of Female Players * Percentage and Count of Other / Non-Disclosed ``` #graphics summary #Lets create a df so we can store the screen names/gender and purchase data from the script purchaser_genders = purchase_data[["SN","Gender","Price"]].copy() #seperate the differet genders for the players dif_player_gen = purchaser_genders.drop_duplicates(subset=["SN","Gender"], keep="first") dif_player_count = len(dif_player_gen) #save the values to the summary of the Demographics summary gender_counts = dif_player_gen["Gender"].value_counts() gender_demog = pd.DataFrame({"Unique Player by Gender":gender_counts, "% of Total":gender_counts/dif_player_count}) #change the format of the % to a percentile format gender_demog["% of Total"] = gender_demog["% of Total"].map("{0:.1%}".format) ``` ## Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. 
by gender * Create a summary data frame to hold the results * Optional: give the displayed data cleaner formatting * Display the summary data frame ``` # THe Analysis for the purchase by gender group_purchaser_gen = purchaser_genders.groupby(["Gender"]) purchase_cby_gen = group_purchaser_gen["Price"].count() average_purchase_pby_gen = group_purchaser_gen["Price"].mean() purchase_sumby_gen = group_purchaser_gen["Price"].sum() #create the summury for purchase by genders purchasing_analysis_gen = pd.DataFrame({"Purchase Count":purchase_cby_gen, "Avg Purchase Price":average_purchase_pby_gen, "Total Purchase Value":purchase_sumby_gen, "% of Total Revenue":purchase_sumby_gen/total_revenue, "Avg Purchase / Person":purchase_cby_gen/gender_counts}) # the format must be set for the summary purchasing_analysis_gen["Avg Purchase Price"] = purchasing_analysis_gen["Avg Purchase Price"].map("${0:,.2f}".format) purchasing_analysis_gen["Total Purchase Value"] = purchasing_analysis_gen["Total Purchase Value"].map("${0:,.2f}".format) purchasing_analysis_gen["% of Total Revenue"] = purchasing_analysis_gen["% of Total Revenue"].map("{0:.1%}".format) purchasing_analysis_gen["Avg Purchase / Person"] = purchasing_analysis_gen["Avg Purchase / Person"].map("{0:.1f}".format) purchasing_analysis_gen ``` ## Age Demographics * Establish bins for ages * Categorize the existing players using the age bins. Hint: use pd.cut() * Calculate the numbers and percentages by age group * Create a summary data frame to hold the results * Optional: round the percentage column to two decimal points * Display Age Demographics Table ``` ##AGE DEMOGRAPGHICS#### #Create data frame to stor names, age, and purchase data purchaser_ages = purchase_data[["SN","Age","Price"]].copy() #eliminate the duplicates and create a for them dif_player_ages = purchaser_ages.drop_duplicates(subset=["SN","Age"],keep="first") #create a bukey for storage for different ages and move them to a new list age_stg = (0,10,15,20,25,30,35,100) age_lb = ("<10","10-14","15-19","20-24","25-29","30-34","50+") age_list = pd.cut(dif_player_ages["Age"], bins=age_stg, right=False, labels=age_lb) dif_age_counts = pd.DataFrame({"Unique User Count":dif_player_ages["SN"], "Age Bin":age_list}) dif_gage_counts = dif_age_counts.groupby(["Age Bin"]).count() dif_gage_counts["% of Total"] = dif_gage_counts["Unique User Count"] / dif_player_count #Format the % of the Total dif_gage_counts["% of Total"] = dif_gage_counts["% of Total"].map("{0:.2%}".format) dif_gage_counts ``` ## Purchasing Analysis (Age) * Bin the purchase_data data frame by age * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. 
in the table below * Create a summary data frame to hold the results * Optional: give the displayed data cleaner formatting * Display the summary data frame ``` #Purchase Analysis #make rows and colums labeled withthe data pulled and age from the lists all_purchase_ab = pd.cut(purchase_data["Age"], bins=age_stg, right=False, labels=age_lb) purchaser_ages["Age Bin"] = all_purchase_ab gpurchaser_acounts = purchaser_ages.groupby(["Age Bin"]).count() gpurchaser_asum = purchaser_ages.groupby(["Age Bin"]).sum() gpurchaser_amean = purchaser_ages.groupby(["Age Bin"]).mean() purchase_cby_age = gpurchaser_acounts["SN"] purchase_sby_age = gpurchaser_asum["Price"] purchase_aby_age = gpurchaser_amean["Price"] avg_purchase_by_person = purchase_sby_age / dif_gage_counts["Unique User Count"] purchasing_analysis_age = pd.DataFrame({"Purchase Count":purchase_cby_age, "Average Purchase Price":purchase_aby_age, "Total Purchase Value":purchase_sby_age, "% of Total Revenue":purchase_sby_age / total_revenue, "Avg Total Purchase / Person":avg_purchase_by_person}) #create dataframe for age of the purchasing analysis purchasing_analysis_age["Average Purchase Price"] = purchasing_analysis_age["Average Purchase Price"].map("${0:,.2f}".format) purchasing_analysis_age["Total Purchase Value"] = purchasing_analysis_age["Total Purchase Value"].map("${0:,.2f}".format) purchasing_analysis_age["% of Total Revenue"] = purchasing_analysis_age["% of Total Revenue"].map("{0:.1%}".format) purchasing_analysis_age["Avg Total Purchase / Person"] = purchasing_analysis_age["Avg Total Purchase / Person"].map("${0:,.2f}".format) purchasing_analysis_age ``` ## Top Spenders * Run basic calculations to obtain the results in the table below * Create a summary data frame to hold the results * Sort the total purchase value column in descending order * Optional: give the displayed data cleaner formatting * Display a preview of the summary data frame ``` #find the top 5 spenders purchase_total_sum_SN = purchase_data.groupby(["SN"]).sum() sort_purchase_total_sum_SN = purchase_total_sum_SN.sort_values("Price", ascending=False) purchase_cby_sn = purchase_data["SN"].value_counts() purchase_sby_sn = sort_purchase_total_sum_SN["Price"] top_spenders = pd.DataFrame({"Purchase Count":purchase_cby_sn, "Avg Purchase Price":purchase_sby_sn / purchase_cby_sn, "Total Purchase Value":purchase_sby_sn}) #create the summary of the spenders from highest to lowest top_spenders = top_spenders.sort_values("Total Purchase Value", ascending=False) # the Formatting for the data summary top_spenders["Avg Purchase Price"] = top_spenders["Avg Purchase Price"].map("${0:,.2f}".format) top_spenders["Total Purchase Value"] = top_spenders["Total Purchase Value"].map("${0:,.2f}".format) top_spenders.head(5) ``` ## Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns * Group by Item ID and Item Name. 
Perform calculations to obtain purchase count, item price, and total purchase value * Create a summary data frame to hold the results * Sort the purchase count column in descending order * Optional: give the displayed data cleaner formatting * Display a preview of the summary data frame ``` #Iteams that are the most popular items= purchase_data[["Item ID","Item Name","Price"]] group_items = items.groupby(["Item ID","Item Name"]) purchase_cby_item = group_items["Item ID"].count() total_purchase_vby_item = group_items["Price"].sum() item_price = total_purchase_vby_item / purchase_cby_item #create a df for most popular items in the shop most_pop_items = pd.DataFrame({"Purchase Count": purchase_cby_item, "Item Price": item_price, "Total Purchase Value": total_purchase_vby_item}) most_pop_items = most_pop_items.sort_values("Purchase Count", ascending=False) #Formating for the summary of pupular items most_pop_items["Item Price"] =most_pop_items["Item Price"].map("${0:,.2f}".format) most_pop_items["Total Purchase Value"] =most_pop_items["Total Purchase Value"].map("${0:,.2f}".format) most_pop_items.head(5) ``` ## Most Profitable Items * Sort the above table by total purchase value in descending order * Optional: give the displayed data cleaner formatting * Display a preview of the data frame ``` ###Profitable items#### #make a copy of the most popular items table and sort by the purchase value most_profit_items = pd.DataFrame({"Purchase Count": purchase_cby_item, "Item Price": item_price, "Total Purchase Value": total_purchase_vby_item}) most_profit_items = most_profit_items.sort_values("Total Purchase Value", ascending=False) most_profit_items["Item Price"] =most_profit_items["Item Price"].map("${0:,.2f}".format) most_profit_items["Total Purchase Value"] =most_profit_items["Total Purchase Value"].map("${0:,.2f}".format) most_profit_items ```
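The same currency formatting is applied column-by-column in several of the cells above; a small helper keeps that in one place. This is just a sketch built on the raw series already computed above, and the helper name `format_currency` is illustrative.

```
def format_currency(df, columns):
    """Return a copy of df with the given columns rendered as $x,xxx.xx strings."""
    out = df.copy()
    for col in columns:
        out[col] = out[col].map("${0:,.2f}".format)
    return out

# Example: rebuild the most-profitable-items table from the unformatted series
profit_items = pd.DataFrame({"Purchase Count": purchase_cby_item,
                             "Item Price": item_price,
                             "Total Purchase Value": total_purchase_vby_item})
profit_items = profit_items.sort_values("Total Purchase Value", ascending=False)
format_currency(profit_items, ["Item Price", "Total Purchase Value"]).head(5)
```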
``` import os import pandas as pd from sklearn.model_selection import StratifiedShuffleSplit from dimensionality_reduction import reduce_dimension import load_database from algorithms import * import warnings warnings.filterwarnings('ignore') database_name = os.environ['DATABASE'] n_components = int(os.environ['N_COMPONENTS']) dimensionality_algorithm = os.environ['DIMENSIONALITY_ALGORITHM'] result_path = 'results/%s_%s_%s.csv' %(database_name, n_components, dimensionality_algorithm) X, y = load_database.load(database_name) X = reduce_dimension(dimensionality_algorithm, X, n_components) if n_components else X X.shape results = {} sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2) for train_index, test_index in sss.split(X, y): X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] result = train_test(X_train, y_train, X_test, y_test, 'ada_boost') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'bagging') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'extra_trees') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'random_forest') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'logistic_regression') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'passive_aggressive') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'ridge') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'sgd') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'bernoulli') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'gaussian') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'k_neighbors') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'nearest_centroid') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'mlp') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'linear_svc') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'decision_tree') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'extra_tree') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'gradient_boosting') results.update(result) result = train_test(X_train, y_train, X_test, y_test, 'hist_gradient_boosting') results.update(result) df = pd.DataFrame.from_records(results) df df.to_csv(result_path) ```
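The cell above calls `train_test` once per classifier by hand. As a minimal sketch (assuming `train_test` keeps the `(X_train, y_train, X_test, y_test, algorithm_name)` signature and dict-style return value used above), the same experiment can be written as a loop over the algorithm names, which makes adding or removing classifiers a one-line change:

```
# Hedged refactor sketch of the repeated calls above; reuses sss, X, y and
# train_test exactly as they are defined in the original cell.
algorithms = ['ada_boost', 'bagging', 'extra_trees', 'random_forest',
              'logistic_regression', 'passive_aggressive', 'ridge', 'sgd',
              'bernoulli', 'gaussian', 'k_neighbors', 'nearest_centroid',
              'mlp', 'linear_svc', 'decision_tree', 'extra_tree',
              'gradient_boosting', 'hist_gradient_boosting']

results = {}
for train_index, test_index in sss.split(X, y):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    for name in algorithms:
        # each call returns a dict of metrics keyed by algorithm name
        results.update(train_test(X_train, y_train, X_test, y_test, name))
```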
# Experimental data analysis on foil open area
## Brian Larsen, ISR-1
## Data provided by Phil Fernandes, ISR-1 2016-9-14

The setup is a foil in its holder mounted to a foil holder meant to block incident ions. The foil has a ~0.6mm hole in it to provide a baseline. The goal is to use the relative intensity of the witness hole to determine the intensity of holes in the foil.

A quick summary:
* Foil is placed 0.66” from front of MCP surface
* Beam is rastered to cover full foil and “witness” aperture
* Beam is 1.0 keV Ar+, slightly underfocused
* Accumulate data for set period of time (either 60s or 180s, identified in spreadsheet)
* Total_cts is the # of counts through the foil and the witness aperture
* Witness_cts is the # of counts in the witness aperture only
* Foil_cts = total_cts – witness_cts
* Open area OA = (foil_cts/witness_cts) * (witness_area/foil_area)

```
import itertools
from pprint import pprint
from operator import getitem

import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
import numpy as np
import spacepy.plot as spp
import pymc as mc
import tqdm

from MCA_file_viewer_v001 import GetMCAfile

def plot_box(x, y, c='r', lw=0.6, ax=None):
    # draw the rectangle defined by the x=(x0, x1) and y=(y0, y1) index pairs
    if ax is None:
        plt.plot((x[0], x[0]), (y[0], y[1]), lw=lw, c=c)
        plt.plot((x[1], x[1]), (y[0], y[1]), lw=lw, c=c)
        plt.plot((x[0], x[1]), (y[0], y[0]), lw=lw, c=c)
        plt.plot((x[0], x[1]), (y[1], y[1]), lw=lw, c=c)
    else:
        ax.plot((x[0], x[0]), (y[0], y[1]), lw=lw, c=c)
        ax.plot((x[1], x[1]), (y[0], y[1]), lw=lw, c=c)
        ax.plot((x[0], x[1]), (y[0], y[0]), lw=lw, c=c)
        ax.plot((x[0], x[1]), (y[1], y[1]), lw=lw, c=c)

ZZ, XX, YY = GetMCAfile('16090203.mca')
# It is believed as of 2016-09-19 that the MCA records 2 counts for each count.
# This means all data are even and all the data can be divided by 2 to give the
# right number of counts. Per emails Larsen-Fernandes 2016-09-17
# These data are integers and care must be taken to assure that /2 does not
# lead to numbers that are not representable in float
ZZ = ZZ.astype(float)
ZZ /= 2
XX = XX.astype(np.uint16) # as they all should be integers anyway
xind = (986, 1003)
yind = (492, 506)

fig = plt.figure(figsize=(20,8))
ax1 = fig.add_subplot(131)
ax2 = fig.add_subplot(132)
ax3 = fig.add_subplot(133)
pc = ax1.pcolormesh(XX, YY, ZZ, norm=LogNorm())
plt.colorbar(pc, ax=ax1)
plot_box(xind, yind, ax=ax1)
ax2.hist(ZZ.flatten(), 20)
ax2.set_yscale('log')
ax3.hist(ZZ.flatten(), 20, normed=True)
ax3.set_yscale('log')
```

## Do some calculations to try and match Phil's analysis

Phil's data:

| File name | Witness cts | Total cts | Foil cts | Open area |
|-----------|-------------|-----------|----------|-----------|
| 16090203  | 658         | 4570      | 3912     | 0.00102   |

```
total_cnts = ZZ.sum()
print('Total counts:{0} -- Phil got {1} -- remember /2'.format(total_cnts, 4570/2)) # remember we did a /2

# Is the witness hole at x=1000, y=500?
XX.shape, YY.shape, ZZ.shape

print(ZZ[yind[0]:yind[1], xind[0]:xind[1]])

plt.figure()
plt.pcolormesh(XX[xind[0]:xind[1]], YY[yind[0]:yind[1]], ZZ[yind[0]:yind[1], xind[0]:xind[1]], norm=LogNorm())
plt.colorbar()

witness_counts = ZZ[yind[0]:yind[1], xind[0]:xind[1]].sum()
print('Witness counts: {0}, Phil got {1}/2={2}'.format(witness_counts, 658, 658/2))

wit_pixels = 46
print('There are {0} pixels in the witness peak'.format(wit_pixels))

total_counts = ZZ.sum()
print("There are a total of {0} counts".format(total_counts))
```

## Can we get a noise estimate?

1) Try all pixels with a value where a neighbor does not.
This assumes that real holes are large enough to have a point spread function and therefore cannot be in a single pixel.

```
def neighbor_inds(x, y, xlim=(0,1023), ylim=(0,1023), center=False, mask=False):
    """
    given an x and y index return the 8 neighbor indices

    if center also return the center index
    if mask return a boolean mask over the whole 2d array
    """
    xi = np.clip([x + v for v in [-1, 0, 1]], xlim[0], xlim[1])
    yi = np.clip([y + v for v in [-1, 0, 1]], ylim[0], ylim[1])
    ans = [(i, j) for i, j in itertools.product(xi, yi)]
    if not center:
        ans.remove((x,y))
    if mask:
        out = np.zeros((np.diff(xlim)+1, np.diff(ylim)+1), dtype=np.bool)
        for c in ans:
            out[c] = True
    else:
        out = ans
    return np.asarray(out)

print(neighbor_inds(2,2))
print(neighbor_inds(2,2, mask=True))
print(ZZ[neighbor_inds(500, 992, mask=True)])

def get_alone_pixels(dat):
    """
    loop over all the data and store the value of all lone pixels
    """
    ans = []
    for index, x in tqdm.tqdm_notebook(np.ndenumerate(dat)):
        if (np.sum([ZZ[i, j] for i, j in neighbor_inds(index[0], index[1])]) == 0) and x != 0:
            ans.append((index, x))
    return ans

# print((neighbor_inds(5, 4)))
alone = get_alone_pixels(ZZ)
pprint(alone)

# ZZ[neighbor_inds(5, 4)[0]].shape
# print((neighbor_inds(5, 4))[0])
# print(ZZ[(neighbor_inds(5, 4))[0]].shape)
# ZZ[4,3]
ZZ[(965, 485)]

print(neighbor_inds(4,3)[0])
print(ZZ[neighbor_inds(4,3)[0]])
print(ZZ[3,2])
ni = neighbor_inds(4,3)[0]
print(ZZ[ni[0], ni[1]])

(ZZ % 2).any() # not all even any longer
```

### Noise estimates

Now we assume that all lone counts are noise that can be considered random and uniform over the MCP. This then provides a number of counts per MCA pixel that we can use.

```
n_noise = np.sum([v[1] for v in alone])
n_pixels = 1024*1024
noise_pixel = n_noise/n_pixels
print("There were a total of {0} random counts over {1} pixels, {2} cts/pixel".format(n_noise, n_pixels, noise_pixel))
```

Maybe we should consider just part of the MCP: let's get the min/max X and min/max Y where there are counts and just use that area. This will increase the cts/pixel.

```
minx_tmp = ZZ.sum(axis=0)
minx_tmp.shape
print(minx_tmp)
miny_tmp = ZZ.sum(axis=1)
miny_tmp.shape
print(miny_tmp)
```

Looks to go all the way to all sides in X-Y.

## Work to total open area calculations

Now we can model the total open area of the foil given the noise estimate per pixel, the pixels that are part of the witness sample, and the total area.

We model the observed background as Poisson with center at the real background:

$obsnbkg \sim Pois(nbkg)$

We model the observed witness sample, $obswit$, as Poisson with center of background per pixel times number of pixels in peak, plus the number of real counts:

$obswit \sim Pois(nbkg \cdot C + witc)$, $C = \frac{A_w}{A_t}$

This then leaves the number of counts in open areas of the system (excluding witness) as a Poisson with center of background per pixel times number of pixels in the system (less witness), plus the real number of counts:

$obsopen \sim Pois(nbkg \cdot D + realc)$, $D=\frac{A_t - A_w}{A_t}$

Then the open area is given by relating the real counts, $realc$, and the unknown open area, $A_o$, to the witness counts, $witc$, and the witness area, $A_w$, which is assumed to be a perfect 0.6mm hole.
$\frac{A_o}{realc}=\frac{A_w}{witc} => A_o = \frac{A_w}{witc}realc $ ``` Aw = np.pi*(0.2/2)**2 # mm**2 Af = 182.75 # mm**2 this is the area of the foil W_F_ratio = Aw/Af print(Aw, Af, W_F_ratio) C = wit_pixels/n_pixels D = (n_pixels-wit_pixels)/n_pixels print('C', C, 'D', D) nbkg = mc.Uniform('nbkg', 1, n_noise*5) # just 1 to some large number obsnbkg = mc.Poisson('obsnbkg', nbkg, observed=True, value=n_noise) witc = mc.Uniform('witc', 0, witness_counts*5) # just 0 to some large number obswit = mc.Poisson('obswit', nbkg*C + witc, observed=True, value=witness_counts) realc = mc.Uniform('realc', 0, total_counts*5) # just 0 to some large number obsopen = mc.Poisson('obsopen', nbkg*D + realc, observed=True, value=total_counts-witness_counts) @mc.deterministic(plot=True) def open_area(realc=realc, witc=witc): return realc*Aw/witc/Af model = mc.MCMC([nbkg, obsnbkg, witc, obswit, realc, obsopen, open_area]) model.sample(200000, burn=100, thin=30, burn_till_tuned=True) mc.Matplot.plot(model) # 1000, burn=100, thin=30 0.000985 +/- 0.000058 # 10000, burn=100, thin=30 0.000982 +/- 0.000061 # 100000, burn=100, thin=30 0.000984 +/- 0.000059 # 200000, burn=100, thin=30 0.000986 +/- 0.000059 # 1000000, burn=100, thin=30 0.000985 +/- 0.000059 print("Foil 1 \n") witc_mean = np.mean(witc.trace()[...]) witc_std = np.std(witc.trace()[...]) print("Found witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(witness_counts, witc_mean, witc_std, witc_std/witc_mean*100)) realc_mean = np.mean(realc.trace()[...]) realc_std = np.std(realc.trace()[...]) print("Found non-witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(total_counts-witness_counts, realc_mean, realc_std, realc_std/realc_mean*100)) nbkg_mean = np.mean(nbkg.trace()[...]) nbkg_std = np.std(nbkg.trace()[...]) print("Found noise counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(0, nbkg_mean, nbkg_std, nbkg_std/nbkg_mean*100)) OA_median = np.median(open_area.trace()[...]) OA_mean = np.mean(open_area.trace()[...]) OA_std = np.std(open_area.trace()[...]) print("The open area fraction is {0:.6f} +/- {1:.6f} ({2:.2f}%) at the 1 stddev level from 1 measurement\n".format(OA_mean, OA_std,OA_std/OA_mean*100 )) print("Phil got {0} for 1 measurement\n".format(0.00139)) print("The ratio Brian/Phil is: {0:.6f} or {1:.6f}".format(OA_mean/0.00139, 0.00139/OA_mean)) ``` ## Run again allowing some uncertainity on witness and foil areas ``` _Aw = np.pi*(0.2/2)**2 # mm**2 _Af = 182.75 # mm**2 this is the area of the foil Aw = mc.Normal('Aw', _Aw, (_Aw*0.2)**-2) # 20% Af = mc.Normal('Af', _Af, (_Af*0.1)**-2) # 10% print(_Aw, _Af) C = wit_pixels/n_pixels D = (n_pixels-wit_pixels)/n_pixels print('C', C, 'D', D) nbkg = mc.Uniform('nbkg', 1, n_noise*5) # just 1 to some large number obsnbkg = mc.Poisson('obsnbkg', nbkg, observed=True, value=n_noise) witc = mc.Uniform('witc', 0, witness_counts*5) # just 0 to some large number obswit = mc.Poisson('obswit', nbkg*C + witc, observed=True, value=witness_counts) realc = mc.Uniform('realc', 0, total_counts*5) # just 0 to some large number obsopen = mc.Poisson('obsopen', nbkg*D + realc, observed=True, value=total_counts-witness_counts) @mc.deterministic(plot=True) def open_area(realc=realc, witc=witc, Aw=Aw, Af=Af): return realc*Aw/witc/Af model = mc.MCMC([nbkg, obsnbkg, witc, obswit, realc, obsopen, open_area, Af, Aw]) model.sample(200000, burn=100, thin=30, burn_till_tuned=True) mc.Matplot.plot(nbkg) mc.Matplot.plot(witc) mc.Matplot.plot(realc) # mc.Matplot.plot(open_area) mc.Matplot.plot(Aw) _ = 
spp.plt.hist(open_area.trace(), 20) print("Foil 1 \n") witc_mean = np.mean(witc.trace()[...]) witc_std = np.std(witc.trace()[...]) print("Found witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(witness_counts, witc_mean, witc_std, witc_std/witc_mean*100)) realc_mean = np.mean(realc.trace()[...]) realc_std = np.std(realc.trace()[...]) print("Found non-witness counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(total_counts-witness_counts, realc_mean, realc_std, realc_std/realc_mean*100)) nbkg_mean = np.mean(nbkg.trace()[...]) nbkg_std = np.std(nbkg.trace()[...]) print("Found noise counts of {0} turn into {1} +/- {2} ({3:.2f}%)\n".format(0, nbkg_mean, nbkg_std, nbkg_std/nbkg_mean*100)) OA_median = np.median(open_area.trace()[...]) OA_mean = np.mean(open_area.trace()[...]) OA_std = np.std(open_area.trace()[...]) print("The open area fraction is {0:.6f} +/- {1:.6f} ({2:.2f}%) at the 1 stddev level from 1 measurement\n".format(OA_mean, OA_std,OA_std/OA_mean*100 )) print("Phil got {0} for 1 measurement\n".format(0.00139)) print("The ratio Brian/Phil is: {0:.6f} or {1:.6f}".format(OA_mean/0.00139, 0.00139/OA_mean)) mc.Matplot.plot(Aw) ```
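As a small follow-up sketch (not part of the original analysis), the posterior samples already drawn above can also be summarized with a 95% credible interval, which can be more informative than a mean and one standard deviation if the posterior is skewed:

```
# Hedged sketch: 95% credible interval for the open area fraction from the
# MCMC samples. Reuses the open_area trace from the model sampled above.
oa_samples = open_area.trace()[...]
oa_lo, oa_hi = np.percentile(oa_samples, [2.5, 97.5])
print("Open area 95% credible interval: {0:.6f} to {1:.6f} (median {2:.6f})".format(
    oa_lo, oa_hi, np.median(oa_samples)))
```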
#1. Install Dependencies First install the libraries needed to execute recipes, this only needs to be done once, then click play. ``` !pip install git+https://github.com/google/starthinker ``` #2. Get Cloud Project ID To run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play. ``` CLOUD_PROJECT = 'PASTE PROJECT ID HERE' print("Cloud Project Set To: %s" % CLOUD_PROJECT) ``` #3. Get Client Credentials To read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play. ``` CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE' print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS) ``` #4. Enter SDF Download Parameters Download SDF reports into a BigQuery table. 1. Select your filter types and the filter ideas. 1. Enter the <a href='https://developers.google.com/bid-manager/v1.1/sdf/download' target='_blank'>file types</a> using commas. 1. SDF_ will be prefixed to all tables and date appended to daily tables. 1. File types take the following format: FILE_TYPE_CAMPAIGN, FILE_TYPE_AD_GROUP,... Modify the values below for your use case, can be done multiple times, then click play. ``` FIELDS = { 'auth_write': 'service', # Credentials used for writing data. 'partner_id': '', # The sdf file types. 'file_types': [], # The sdf file types. 'filter_type': '', # The filter type for the filter ids. 'filter_ids': [], # Comma separated list of filter ids for the request. 'dataset': '', # Dataset to be written to in BigQuery. 'version': '5', # The sdf version to be returned. 'table_suffix': '', # Optional: Suffix string to put at the end of the table name (Must contain alphanumeric or underscores) 'time_partitioned_table': False, # Is the end table a time partitioned 'create_single_day_table': False, # Would you like a separate table for each day? This will result in an extra table each day and the end table with the most up to date SDF. } print("Parameters Set To: %s" % FIELDS) ``` #5. Execute SDF Download This does NOT need to be modified unles you are changing the recipe, click play. 
``` from starthinker.util.project import project from starthinker.script.parse import json_set_fields, json_expand_includes USER_CREDENTIALS = '/content/user.json' TASKS = [ { 'dataset': { 'auth': 'user', 'dataset': {'field': {'name': 'dataset','kind': 'string','order': 6,'default': '','description': 'Dataset to be written to in BigQuery.'}} } }, { 'sdf': { 'auth': 'user', 'version': {'field': {'name': 'version','kind': 'choice','order': 6,'default': '5','description': 'The sdf version to be returned.','choices': ['SDF_VERSION_5','SDF_VERSION_5_1']}}, 'partner_id': {'field': {'name': 'partner_id','kind': 'integer','order': 1,'description': 'The sdf file types.'}}, 'file_types': {'field': {'name': 'file_types','kind': 'string_list','order': 2,'default': [],'description': 'The sdf file types.'}}, 'filter_type': {'field': {'name': 'filter_type','kind': 'choice','order': 3,'default': '','description': 'The filter type for the filter ids.','choices': ['FILTER_TYPE_ADVERTISER_ID','FILTER_TYPE_CAMPAIGN_ID','FILTER_TYPE_INSERTION_ORDER_ID','FILTER_TYPE_MEDIA_PRODUCT_ID','FILTER_TYPE_LINE_ITEM_ID']}}, 'read': { 'filter_ids': { 'single_cell': True, 'values': {'field': {'name': 'filter_ids','kind': 'integer_list','order': 4,'default': [],'description': 'Comma separated list of filter ids for the request.'}} } }, 'time_partitioned_table': {'field': {'name': 'time_partitioned_table','kind': 'boolean','order': 7,'default': False,'description': 'Is the end table a time partitioned'}}, 'create_single_day_table': {'field': {'name': 'create_single_day_table','kind': 'boolean','order': 8,'default': False,'description': 'Would you like a separate table for each day? This will result in an extra table each day and the end table with the most up to date SDF.'}}, 'dataset': {'field': {'name': 'dataset','kind': 'string','order': 6,'default': '','description': 'Dataset to be written to in BigQuery.'}}, 'table_suffix': {'field': {'name': 'table_suffix','kind': 'string','order': 6,'default': '','description': 'Optional: Suffix string to put at the end of the table name (Must contain alphanumeric or underscores)'}} } } ] json_set_fields(TASKS, FIELDS) json_expand_includes(TASKS) project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True) project.execute() ```
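For orientation only, here is one hedged example of how the `FIELDS` dictionary from step 4 might look once filled in. Every id and dataset name below is a made-up placeholder; the file types and filter type use the enum-style strings listed in the recipe above.

```
# Illustrative placeholders only -- substitute your own partner/advertiser ids
# and BigQuery dataset before running the recipe.
FIELDS = {
  'auth_write': 'service',
  'partner_id': '1234567',                                   # placeholder partner id
  'file_types': ['FILE_TYPE_CAMPAIGN', 'FILE_TYPE_AD_GROUP'],
  'filter_type': 'FILTER_TYPE_ADVERTISER_ID',
  'filter_ids': [7654321],                                   # placeholder advertiser id
  'dataset': 'sdf_demo_dataset',                             # placeholder BigQuery dataset
  'version': '5',
  'table_suffix': '',
  'time_partitioned_table': False,
  'create_single_day_table': False,
}
```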
# Project: Part of Speech Tagging with Hidden Markov Models --- ### Introduction Part of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation. In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a "universal" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, and human gesture recognition for computer vision, and more. ![](_post-hmm.png) The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated to complete the project; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you must provide code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! <div class="alert alert-block alert-info"> **Note:** Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You must then **export the notebook** by running the last cell in the notebook, or by using the menu above and navigating to **File -> Download as -> HTML (.html)** Your submissions should include both the `html` and `ipynb` files. </div> <div class="alert alert-block alert-info"> **Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode. </div> ### The Road Ahead You must complete Steps 1-3 below to pass the project. The section on Step 4 includes references & resources you can use to further explore HMM taggers. - [Step 1](#Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus - [Step 2](#Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline - [Step 3](#Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline - [Step 4](#Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger <div class="alert alert-block alert-warning"> **Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine. 
</div> ``` # Jupyter "magic methods" -- only need to be run once per kernel restart %load_ext autoreload %aimport helpers, tests %autoreload 1 # import python modules -- this cell needs to be run again if you make changes to any of the files import matplotlib.pyplot as plt import numpy as np from IPython.core.display import HTML from itertools import chain from collections import Counter, defaultdict from helpers import show_model, Dataset from pomegranate import State, HiddenMarkovModel, DiscreteDistribution ``` ## Step 1: Read and preprocess the dataset --- We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). You should expect to get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html), but the process you'll follow would be the same. The `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line. Example from the Brown corpus. ``` b100-38532 Perhaps ADV it PRON was VERB right ADJ ; . ; . b100-35577 ... ``` ``` data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8) print("There are {} sentences in the corpus.".format(len(data))) print("There are {} sentences in the training set.".format(len(data.training_set))) print("There are {} sentences in the testing set.".format(len(data.testing_set))) assert len(data) == len(data.training_set) + len(data.testing_set), \ "The number of sentences in the training set + testing set should sum to the number of sentences in the corpus" ``` ### The Dataset Interface You can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below, then run and review the next few cells to make sure you understand the interface before moving on to the next step. ``` Dataset-only Attributes: training_set - reference to a Subset object containing the samples for training testing_set - reference to a Subset object containing the samples for testing Dataset & Subset Attributes: sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus vocab - an immutable collection of the unique words in the corpus tagset - an immutable collection of the unique tags in the corpus X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...) Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...) 
N - returns the number of distinct samples (individual words or tags) in the dataset Methods: stream() - returns an flat iterable over all (word, tag) pairs across all sentences in the corpus __iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs __len__() - returns the nubmer of sentences in the dataset ``` For example, consider a Subset, `subset`, of the sentences `{"s0": Sentence(("See", "Spot", "run"), ("VERB", "NOUN", "VERB")), "s1": Sentence(("Spot", "ran"), ("NOUN", "VERB"))}`. The subset will have these attributes: ``` subset.keys == {"s1", "s0"} # unordered subset.vocab == {"See", "run", "ran", "Spot"} # unordered subset.tagset == {"VERB", "NOUN"} # unordered subset.X == (("Spot", "ran"), ("See", "Spot", "run")) # order matches .keys subset.Y == (("NOUN", "VERB"), ("VERB", "NOUN", "VERB")) # order matches .keys subset.N == 7 # there are a total of seven observations over all sentences len(subset) == 2 # because there are two sentences ``` <div class="alert alert-block alert-info"> **Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data. </div> #### Sentences `Dataset.sentences` is a dictionary of all sentences in the training corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence named `words` and a tuple of the tag corresponding to each word named `tags`. ``` key = 'b100-38532' print("Sentence: {}".format(key)) print("words:\n\t{!s}".format(data.sentences[key].words)) print("tags:\n\t{!s}".format(data.sentences[key].tags)) ``` <div class="alert alert-block alert-info"> **Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data. </div> #### Counting Unique Elements You can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the unique list of tags via `Dataset.tagset`. ``` print("There are a total of {} samples of {} unique words in the corpus." .format(data.N, len(data.vocab))) print("There are {} samples of {} unique words in the training set." .format(data.training_set.N, len(data.training_set.vocab))) print("There are {} samples of {} unique words in the testing set." .format(data.testing_set.N, len(data.testing_set.vocab))) print("There are {} words in the test set that are missing in the training set." .format(len(data.testing_set.vocab - data.training_set.vocab))) assert data.N == data.training_set.N + data.testing_set.N, \ "The number of training + test samples should sum to the total number of samples" ``` #### Accessing word and tag Sequences The `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset. ``` # accessing words with Dataset.X and tags with Dataset.Y for i in range(2): print("Sentence {}:".format(i + 1), data.X[i]) print() print("Labels {}:".format(i + 1), data.Y[i]) print() ``` #### Accessing (word, tag) Samples The `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus. 
``` # use Dataset.stream() (word, tag) samples for the entire corpus print("\nStream (word, tag) pairs:\n") for i, pair in enumerate(data.stream()): print("\t", pair) if i > 5: break ``` For both our baseline tagger and the HMM model we'll build, we need to estimate the frequency of tags & words from the frequency counts of observations in the training corpus. In the next several cells you will complete functions to compute the counts of several sets of counts. ## Step 2: Build a Most Frequent Class tagger --- Perhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This "most frequent class" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus. ### IMPLEMENTATION: Pair Counts Complete the function below that computes the joint frequency counts for two input sequences. ``` def pair_counts(sequences_A, sequences_B): """Return a dictionary keyed to each unique value in the first sequence list that counts the number of occurrences of the corresponding value from the second sequences list. For example, if sequences_A is tags and sequences_B is the corresponding words, then if 1244 sequences contain the word "time" tagged as a NOUN, then you should return a dictionary such that pair_counts[NOUN][time] == 1244 """ # TODO: Finish this function! dict = {} for i in range(len(sequences_A)): seq_A = sequences_A[i] seq_B = sequences_B[i] for j in range(len(seq_A)): element_A = seq_A[j] element_B = seq_B[j] if element_A in dict: if element_B in dict[element_A]: dict[element_A][element_B] += 1 else: dict[element_A][element_B] = 1 else: dict[element_A] = {} dict[element_A][element_B] = 1 return dict # Calculate C(t_i, w_i) emission_counts = pair_counts(data.Y, data.X) assert len(emission_counts) == 12, \ "Uh oh. There should be 12 tags in your dictionary." assert max(emission_counts["NOUN"], key=emission_counts["NOUN"].get) == 'time', \ "Hmmm...'time' is expected to be the most common NOUN." HTML('<div class="alert alert-block alert-success">Your emission counts look good!</div>') ``` ### IMPLEMENTATION: Most Frequent Class Tagger Use the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string. The `MFCTagger` class is provided to mock the interface of Pomegranite HMM models so that they can be used interchangeably. 
``` # Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word from collections import namedtuple FakeState = namedtuple("FakeState", "name") class MFCTagger: # NOTE: You should not need to modify this class or any of its methods missing = FakeState(name="<MISSING>") def __init__(self, table): self.table = defaultdict(lambda: MFCTagger.missing) self.table.update({word: FakeState(name=tag) for word, tag in table.items()}) def viterbi(self, seq): """This method simplifies predictions by matching the Pomegranate viterbi() interface""" return 0., list(enumerate(["<start>"] + [self.table[w] for w in seq] + ["<end>"])) # TODO: calculate the frequency of each tag being assigned to each word (hint: similar, but not # the same as the emission probabilities) and use it to fill the mfc_table word_counts = pair_counts(data.X, data.Y) mfc_table = {} for word in data.training_set.vocab: mfc_table[word] = max(word_counts[word], key=word_counts[word].get) # DO NOT MODIFY BELOW THIS LINE mfc_model = MFCTagger(mfc_table) # Create a Most Frequent Class tagger instance assert len(mfc_table) == len(data.training_set.vocab), "" assert all(k in data.training_set.vocab for k in mfc_table.keys()), "" assert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, "" HTML('<div class="alert alert-block alert-success">Your MFC tagger has all the correct words!</div>') ``` ### Making Predictions with a Model The helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger. ``` def replace_unknown(sequence): """Return a copy of the input sequence where each unknown word is replaced by the literal string value 'nan'. Pomegranate will ignore these values during computation. """ return [w if w in data.training_set.vocab else 'nan' for w in sequence] def simplify_decoding(X, model): """X should be a 1-D sequence of observations for the model to predict""" _, state_path = model.viterbi(replace_unknown(X)) return [state[1].name for state in state_path[1:-1]] # do not show the start/end state predictions ``` ### Example Decoding Sequences with MFC Tagger ``` for key in data.testing_set.keys[:3]: print("Sentence Key: {}\n".format(key)) print("Predicted labels:\n-----------------") print(simplify_decoding(data.sentences[key].words, mfc_model)) print() print("Actual labels:\n--------------") print(data.sentences[key].tags) print("\n") ``` ### Evaluating Model Accuracy The function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus. ``` def accuracy(X, Y, model): """Calculate the prediction accuracy by using the model to decode each sequence in the input X and comparing the prediction with the true labels in Y. The X should be an array whose first dimension is the number of sentences to test, and each element of the array should be an iterable of the words in the sequence. The arrays X and Y should have the exact same shape. X = [("See", "Spot", "run"), ("Run", "Spot", "run", "fast"), ...] Y = [(), (), ...] 
""" correct = total_predictions = 0 for observations, actual_tags in zip(X, Y): # The model.viterbi call in simplify_decoding will return None if the HMM # raises an error (for example, if a test sentence contains a word that # is out of vocabulary for the training set). Any exception counts the # full sentence as an error (which makes this a conservative estimate). try: most_likely_tags = simplify_decoding(observations, model) correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags)) except: pass total_predictions += len(observations) return correct / total_predictions ``` #### Evaluate the accuracy of the MFC tagger Run the next cell to evaluate the accuracy of the tagger on the training and test corpus. ``` mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model) print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc)) mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model) print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc)) assert mfc_training_acc >= 0.955, "Uh oh. Your MFC accuracy on the training set doesn't look right." assert mfc_testing_acc >= 0.925, "Uh oh. Your MFC accuracy on the testing set doesn't look right." HTML('<div class="alert alert-block alert-success">Your MFC tagger accuracy looks correct!</div>') ``` ## Step 3: Build an HMM tagger --- The HMM tagger has one hidden state for each possible tag, and parameterized by two distributions: the emission probabilties giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities giving the conditional probability of moving between **tags** during the sequence. We will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence). The maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections where you'll implement functions to count the frequencies, and finally build the model. The HMM model will make predictions according to the formula: $$t_i^n = \underset{t_i^n}{\mathrm{argmax}} \prod_{i=1}^n P(w_i|t_i) P(t_i|t_{i-1})$$ Refer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information. ### IMPLEMENTATION: Unigram Counts Complete the function below to estimate the co-occurrence frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.) $$P(tag_1) = \frac{C(tag_1)}{N}$$ ``` def unigram_counts(sequences): """Return a dictionary keyed to each unique value in the input sequence list that counts the number of occurrences of the value in the sequences list. The sequences collection should be a 2-dimensional array. For example, if the tag NOUN appears 275558 times over all the input sequences, then you should return a dictionary such that your_unigram_counts[NOUN] == 275558. """ # TODO: Finish this function! 
my_unigram_counts = {} for tag in sequences: if tag in my_unigram_counts: my_unigram_counts[tag] += 1 else: my_unigram_counts[tag] = 1 # Easier method: return Counter(sequences) return my_unigram_counts # TODO: call unigram_counts with a list of tag sequences from the training set tags = [tag for word, tag in data.stream()] tag_unigrams = unigram_counts(tags) # TODO: YOUR CODE HERE assert set(tag_unigrams.keys()) == data.training_set.tagset, \ "Uh oh. It looks like your tag counts doesn't include all the tags!" assert min(tag_unigrams, key=tag_unigrams.get) == 'X', \ "Hmmm...'X' is expected to be the least common class" assert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \ "Hmmm...'NOUN' is expected to be the most common class" HTML('<div class="alert alert-block alert-success">Your tag unigrams look good!</div>') ``` ### IMPLEMENTATION: Bigram Counts Complete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula: $$P(tag_2|tag_1) = \frac{C(tag_2|tag_1)}{C(tag_2)}$$ ``` import itertools def pairwise(iterable): t, t_1 = itertools.tee(iterable) next(t_1, 'end') return zip(t, t_1) def bigram_counts(sequences): """Return a dictionary keyed to each unique PAIR of values in the input sequences list that counts the number of occurrences of pair in the sequences list. The input should be a 2-dimensional array. For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582 """ # TODO: Finish this function! prior = '' my_bigram_counts = {} for tag in sequences: if prior != '': if (prior, tag) in my_bigram_counts: my_bigram_counts[prior, tag] += 1 else: my_bigram_counts[prior, tag] = 1 prior = tag # Easier method: return dict(Counter(pairwise(sequences))) return my_bigram_counts # TODO: call bigram_counts with a list of tag sequences from the training set tags = [tag for word, tag in data.stream()] tag_bigrams = bigram_counts(tags) assert len(tag_bigrams) == 144, \ "Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)" assert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \ "Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X')." assert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \ "Hmmm...('DET', 'NOUN') is expected to be the most common bigram." HTML('<div class="alert alert-block alert-success">Your tag bigrams look good!</div>') ``` ### IMPLEMENTATION: Sequence Starting Counts Complete the code below to estimate the bigram probabilities of a sequence starting with each tag. ``` def starting_counts(sequences): """Return a dictionary keyed to each unique value in the input sequences list that counts the number of occurrences where that value is at the beginning of a sequence. For example, if 8093 sequences start with NOUN, then you should return a dictionary such that your_starting_counts[NOUN] == 8093 """ # TODO: Finish this function! my_start_counts = {} for start, end in sequences: count = sequences[start, end] if start in my_start_counts: my_start_counts[start] += count else: my_start_counts[start] = count return my_start_counts # TODO: Calculate the count of each tag starting a sequence tag_starts = starting_counts(tag_bigrams) assert len(tag_starts) == 12, "Uh oh. There should be 12 tags in your dictionary." 
assert min(tag_starts, key=tag_starts.get) == 'X', "Hmmm...'X' is expected to be the least common starting bigram." assert max(tag_starts, key=tag_starts.get) != 'DET', "Hmmm...'DET' is expected to be the most common starting bigram." HTML('<div class="alert alert-block alert-success">Your starting tag counts look good!</div>') ``` ### IMPLEMENTATION: Sequence Ending Counts Complete the function below to estimate the bigram probabilities of a sequence ending with each tag. ``` def ending_counts(sequences): """Return a dictionary keyed to each unique value in the input sequences list that counts the number of occurrences where that value is at the end of a sequence. For example, if 18 sequences end with DET, then you should return a dictionary such that your_starting_counts[DET] == 18 """ # TODO: Finish this function! my_end_counts = {} for start, end in sequences: count = sequences[start, end] if end in my_end_counts: my_end_counts[end] += count else: my_end_counts[end] = count return my_end_counts # TODO: Calculate the count of each tag ending a sequence tag_ends = ending_counts(tag_bigrams) assert len(tag_ends) == 12, "Uh oh. There should be 12 tags in your dictionary." assert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], "Hmmm...'X' or 'CONJ' should be the least common ending bigram." assert max(tag_ends, key=tag_ends.get) != '.', "Hmmm...'.' is expected to be the most common ending bigram." HTML('<div class="alert alert-block alert-success">Your ending tag counts look good!</div>') ``` ### IMPLEMENTATION: Basic HMM Tagger Use the tag unigrams and bigrams calculated above to construct a hidden Markov tagger. - Add one state per tag - The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$ - Add an edge from the starting state `basic_model.start` to each tag - The transition probability should be estimated with the formula: $P(t|start) = \frac{C(start, t)}{C(start)}$ - Add an edge from each tag to the end state `basic_model.end` - The transition probability should be estimated with the formula: $P(end|t) = \frac{C(t, end)}{C(t)}$ - Add an edge between _every_ pair of tags - The transition probability should be estimated with the formula: $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$ ``` basic_model = HiddenMarkovModel(name="base-hmm-tagger") # TODO: create states with emission probability distributions P(word | tag) and add to the model # (Hint: you may need to loop & create/add new states) tags = [tag for word, tag in data.stream()] words = [word for word, tag in data.stream()] tags_count = unigram_counts(tags) tag_words_count = pair_counts([tags], [words]) states = [] for tag, words_dict in tag_words_count.items(): total = float(sum(words_dict.values())) distribution = {word: count/total for word, count in words_dict.items()} tag_emissions = DiscreteDistribution(distribution) tag_state = State(tag_emissions, name=tag) states.append(tag_state) basic_model.add_states(states) # TODO: add edges between states for the observed transition frequencies P(tag_i | tag_i-1) # (Hint: you may need to loop & add transitions transition_prob_pair = {} for key in tag_bigrams.keys(): transition_prob_pair[key] = tag_bigrams.get(key)/tags_count[key[0]] for tag_state in states : for next_tag_state in states : basic_model.add_transition(tag_state,next_tag_state,transition_prob_pair[(tag_state.name,next_tag_state.name)]) starting_tag_count = starting_counts(tag_bigrams) #the number of times a tag occured at the start ending_tag_count = 
ending_counts(tag_bigrams) #the number of times a tag occured at the end start_prob = {} for tag in tags: start_prob[tag] = starting_tag_count[tag]/tags_count[tag] for tag_state in states : basic_model.add_transition(basic_model.start, tag_state, start_prob[tag_state.name]) end_prob = {} for tag in tags: end_prob[tag] = ending_tag_count[tag]/tags_count[tag] for tag_state in states: basic_model.add_transition(tag_state, basic_model.end, end_prob[tag_state.name]) # NOTE: YOU SHOULD NOT NEED TO MODIFY ANYTHING BELOW THIS LINE # finalize the model basic_model.bake() assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \ "Every state in your network should use the name of the associated tag, which must be one of the training set tags." assert basic_model.edge_count() == 168, \ ("Your network should have an edge from the start node to each state, one edge between every " + "pair of tags (states), and an edge from each state to the end node.") HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>') hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model) print("training accuracy basic hmm model: {:.2f}%".format(100 * hmm_training_acc)) hmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model) print("testing accuracy basic hmm model: {:.2f}%".format(100 * hmm_testing_acc)) assert hmm_training_acc > 0.97, "Uh oh. Your HMM accuracy on the training set doesn't look right." assert hmm_testing_acc > 0.955, "Uh oh. Your HMM accuracy on the testing set doesn't look right." HTML('<div class="alert alert-block alert-success">Your HMM tagger accuracy looks correct! Congratulations, you\'ve finished the project.</div>') ``` ### Example Decoding Sequences with the HMM Tagger ``` for key in data.testing_set.keys[:3]: print("Sentence Key: {}\n".format(key)) print("Predicted labels:\n-----------------") print(simplify_decoding(data.sentences[key].words, basic_model)) print() print("Actual labels:\n--------------") print(data.sentences[key].tags) print("\n") ``` ## Finishing the project --- <div class="alert alert-block alert-info"> **Note:** **SAVE YOUR NOTEBOOK**, then run the next cell to generate an HTML copy. You will zip & submit both this file and the HTML copy for review. </div> ``` !!jupyter nbconvert *.ipynb ``` ## Step 4: [Optional] Improving model performance --- There are additional enhancements that can be incorporated into your tagger that improve performance on larger tagsets where the data sparsity problem is more significant. The data sparsity problem arises because the same amount of data split over more tags means there will be fewer samples in each tag, and there will be more missing data tags that have zero occurrences in the data. The techniques in this section are optional. - [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts) Laplace smoothing is a technique where you add a small, non-zero value to all observed counts to offset for unobserved values. - Backoff Smoothing Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combatting the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information. 
- Extending to Trigrams HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two. ### Obtain the Brown Corpus with a Larger Tagset Run the code below to download a copy of the brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. If you write the following the format specified in Step 1, then you can reload the data using all of the code above for comparison. Refer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets. ``` import nltk from nltk import pos_tag, word_tokenize from nltk.corpus import brown nltk.download('brown') training_corpus = nltk.corpus.brown training_corpus.tagged_sents()[0] ```
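If you do write the larger-tagset corpus out, the sketch below shows one hedged way to produce files in the Step 1 format (an id line, one tab-separated word/tag pair per line, and a blank line between sentences) so the `Dataset` class can reload them. The output file names and id prefix are invented here, and the tag-list file is assumed to be one tag per line; check `helpers.py` and `tags-universal.txt` to confirm that assumption.

```
# Hedged sketch: dump the Brown corpus with its full NLTK tagset into the
# Step 1 plaintext format. File names and the sentence-id prefix are made up.
import nltk

tagged_sents = nltk.corpus.brown.tagged_sents()  # full Brown tagset

# assumed format for the tag-list file: one tag per line
tags = sorted({tag for sent in tagged_sents for _, tag in sent})
with open("tags-brown-full.txt", "w") as f:
    f.write("\n".join(tags))

with open("brown-full.txt", "w") as f:
    for i, sent in enumerate(tagged_sents):
        f.write("b-full-{}\n".format(i))           # unique sentence identifier
        for word, tag in sent:
            f.write("{}\t{}\n".format(word, tag))  # tab-separated word/tag pair
        f.write("\n")                              # blank line ends the sentence
```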
# TensorFlow Tutorial Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow: - Initialize variables - Start your own session - Train algorithms - Implement a Neural Network Programing frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code. ## 1 - Exploring the Tensorflow Library To start, you will import the library: ``` import math import numpy as np import h5py import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.python.framework import ops from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict %matplotlib inline np.random.seed(1) ``` Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$ ``` y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36. y = tf.constant(39, name='y') # Define y. Set to 39 loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss init = tf.global_variables_initializer() # When init is run later (session.run(init)), # the loss variable will be initialized and ready to be computed with tf.Session() as session: # Create a session and print the output session.run(init) # Initializes the variables print(session.run(loss)) # Prints the loss ``` Writing and running programs in TensorFlow has the following steps: 1. Create Tensors (variables) that are not yet executed/evaluated. 2. Write operations between those Tensors. 3. Initialize your Tensors. 4. Create a Session. 5. Run the Session. This will run the operations you'd written above. Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value. Now let us look at an easy example. Run the cell below: ``` a = tf.constant(2) b = tf.constant(10) c = tf.multiply(a,b) print(c) ``` As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it. ``` sess = tf.Session() print(sess.run(c)) ``` Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**. Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session. 
``` # Change the value of x in the feed_dict x = tf.placeholder(tf.int64, name = 'x') print(sess.run(2 * x, feed_dict = {x: 3})) sess.close() ``` When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session. Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph. ### 1.1 - Linear function Lets start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector. **Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1): ```python X = tf.constant(np.random.randn(3,1), name = "X") ``` You might find the following functions helpful: - tf.matmul(..., ...) to do a matrix multiplication - tf.add(..., ...) to do an addition - np.random.randn(...) to initialize randomly ``` # GRADED FUNCTION: linear_function def linear_function(): """ Implements a linear function: Initializes W to be a random tensor of shape (4,3) Initializes X to be a random tensor of shape (3,1) Initializes b to be a random tensor of shape (4,1) Returns: result -- runs the session for Y = WX + b """ np.random.seed(1) ### START CODE HERE ### (4 lines of code) X = tf.constant(np.random.randn(3,1),name='X') W = tf.constant(np.random.randn(4,3),name='W') b = tf.constant(np.random.randn(4,1),name='b') Y = tf.add(tf.matmul(W,X),b) ### END CODE HERE ### # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate ### START CODE HERE ### sess = tf.Session() result = sess.run(Y) ### END CODE HERE ### # close the session sess.close() return result print( "result = " + str(linear_function())) ``` *** Expected Output ***: <table> <tr> <td> **result** </td> <td> [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]] </td> </tr> </table> ### 1.2 - Computing the sigmoid Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise lets compute the sigmoid function of an input. You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session. ** Exercise **: Implement the sigmoid function below. 
You should use the following: - `tf.placeholder(tf.float32, name = "...")` - `tf.sigmoid(...)` - `sess.run(..., feed_dict = {x: z})` Note that there are two typical ways to create and use sessions in tensorflow: **Method 1:** ```python sess = tf.Session() # Run the variables initialization (if needed), run the operations result = sess.run(..., feed_dict = {...}) sess.close() # Close the session ``` **Method 2:** ```python with tf.Session() as sess: # run the variables initialization (if needed), run the operations result = sess.run(..., feed_dict = {...}) # This takes care of closing the session for you :) ``` ``` # GRADED FUNCTION: sigmoid def sigmoid(z): """ Computes the sigmoid of z Arguments: z -- input value, scalar or vector Returns: results -- the sigmoid of z """ ### START CODE HERE ### ( approx. 4 lines of code) # Create a placeholder for x. Name it 'x'. x = tf.placeholder(tf.float32,name='x') # compute sigmoid(x) sigmoid = tf.sigmoid(x) # Create a session, and run it. Please use the method 2 explained above. # You should use a feed_dict to pass z's value to x. with tf.Session() as sess: # Run session and call the output "result" result = sess.run(sigmoid,feed_dict={x:z}) ### END CODE HERE ### return result print ("sigmoid(0) = " + str(sigmoid(0))) print ("sigmoid(12) = " + str(sigmoid(12))) ``` *** Expected Output ***: <table> <tr> <td> **sigmoid(0)** </td> <td> 0.5 </td> </tr> <tr> <td> **sigmoid(12)** </td> <td> 0.999994 </td> </tr> </table> <font color='blue'> **To summarize, you how know how to**: 1. Create placeholders 2. Specify the computation graph corresponding to operations you want to compute 3. Create the session 4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. ### 1.3 - Computing the Cost You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m: $$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$ you can do it in one line of code in tensorflow! **Exercise**: Implement the cross entropy loss. The function you will use is: - `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)` Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes $$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)})\large )\small\tag{2}$$ ``` # GRADED FUNCTION: cost def cost(logits, labels): """     Computes the cost using the sigmoid cross entropy          Arguments:     logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)     labels -- vector of labels y (1 or 0) Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels" in the TensorFlow documentation. So logits will feed into z, and labels into y.          Returns:     cost -- runs the session of the cost (formula (2)) """ ### START CODE HERE ### # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines) z = tf.placeholder(tf.float32,name='z') y = tf.placeholder(tf.float32,name='y') # Use the loss function (approx. 1 line) cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z,labels=y) # Create a session (approx. 1 line). See method 1 above. 
sess = tf.Session() # Run the session (approx. 1 line). cost = sess.run(cost,feed_dict={z:logits,y:labels}) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return cost logits = sigmoid(np.array([0.2,0.4,0.7,0.9])) cost = cost(logits, np.array([0,0,1,1])) print ("cost = " + str(cost)) ``` ** Expected Output** : <table> <tr> <td> **cost** </td> <td> [ 1.00538719 1.03664088 0.41385433 0.39956614] </td> </tr> </table> ### 1.4 - Using One Hot encodings Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows: <img src="images/onehot.png" style="width:600px;height:150px;"> This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code: - tf.one_hot(labels, depth, axis) **Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this. ``` # GRADED FUNCTION: one_hot_matrix def one_hot_matrix(labels, C): """ Creates a matrix where the i-th row corresponds to the ith class number and the jth column corresponds to the jth training example. So if example j had a label i. Then entry (i,j) will be 1. Arguments: labels -- vector containing the labels C -- number of classes, the depth of the one hot dimension Returns: one_hot -- one hot matrix """ ### START CODE HERE ### # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line) C = tf.constant(C,name='C') # Use tf.one_hot, be careful with the axis (approx. 1 line) one_hot_matrix = tf.one_hot(labels,C,axis=0) # Create the session (approx. 1 line) sess = tf.Session() # Run the session (approx. 1 line) one_hot = sess.run(one_hot_matrix) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return one_hot labels = np.array([1,2,3,0,2,1]) one_hot = one_hot_matrix(labels, C = 4) print ("one_hot = " + str(one_hot)) ``` **Expected Output**: <table> <tr> <td> **one_hot** </td> <td> [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]] </td> </tr> </table> ### 1.5 - Initialize with zeros and ones Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively. **Exercise:** Implement the function below to take in a shape and to return an array (of the shape's dimension of ones). - tf.ones(shape) ``` # GRADED FUNCTION: ones def ones(shape): """ Creates an array of ones of dimension shape Arguments: shape -- shape of the array you want to create Returns: ones -- array containing only ones """ ### START CODE HERE ### # Create "ones" tensor using tf.ones(...). (approx. 1 line) ones = tf.ones(shape) # Create the session (approx. 1 line) sess = tf.Session() # Run the session to compute 'ones' (approx. 1 line) ones = sess.run(ones) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return ones print ("ones = " + str(ones([3]))) ``` **Expected Output:** <table> <tr> <td> **ones** </td> <td> [ 1. 1. 1.] 
</td> </tr> </table> # 2 - Building your first neural network in tensorflow In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model: - Create the computation graph - Run the graph Let's delve into the problem you'd like to solve! ### 2.0 - Problem statement: SIGNS Dataset One afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language. - **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number). - **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number). Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs. Here are examples for each number, and how an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolutoion to 64 by 64 pixels. <img src="images/hands.png" style="width:800px;height:350px;"><caption><center> <u><font color='purple'> **Figure 1**</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center> Run the following code to load the dataset. ``` # Loading the dataset X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() ``` Change the index below and run the cell to visualize some examples in the dataset. ``` # Example of a picture index = 0 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) ``` As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so. ``` # Flatten the training and test images X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T # Normalize image vectors X_train = X_train_flatten/255. X_test = X_test_flatten/255. # Convert training and test labels to one hot matrices Y_train = convert_to_one_hot(Y_train_orig, 6) Y_test = convert_to_one_hot(Y_test_orig, 6) print ("number of training examples = " + str(X_train.shape[1])) print ("number of test examples = " + str(X_test.shape[1])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) ``` **Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. **Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. **The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. ### 2.1 - Create placeholders Your first task is to create placeholders for `X` and `Y`. 
This will allow you to later pass your training data in when you run your session. **Exercise:** Implement the function below to create the placeholders in tensorflow. ``` # GRADED FUNCTION: create_placeholders def create_placeholders(n_x, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288) n_y -- scalar, number of classes (from 0 to 5, so -> 6) Returns: X -- placeholder for the data input, of shape [n_x, None] and dtype "float" Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float" Tips: - You will use None because it let's us be flexible on the number of examples you will for the placeholders. In fact, the number of examples during test/train is different. """ ### START CODE HERE ### (approx. 2 lines) X = tf.placeholder(shape=[n_x,None],dtype='float') Y = tf.placeholder(shape=[n_y,None],dtype='float') ### END CODE HERE ### return X, Y X, Y = create_placeholders(12288, 6) print ("X = " + str(X)) print ("Y = " + str(Y)) ``` **Expected Output**: <table> <tr> <td> **X** </td> <td> Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) </td> </tr> <tr> <td> **Y** </td> <td> Tensor("Placeholder_2:0", shape=(10, ?), dtype=float32) (not necessarily Placeholder_2) </td> </tr> </table> ### 2.2 - Initializing the parameters Your second task is to initialize the parameters in tensorflow. **Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use: ```python W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer()) ``` Please use `seed = 1` to make sure your results match ours. ``` # GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes parameters to build a neural network with tensorflow. The shapes are: W1 : [25, 12288] b1 : [25, 1] W2 : [12, 25] b2 : [12, 1] W3 : [6, 12] b3 : [6, 1] Returns: parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3 """ tf.set_random_seed(1) # so that your "random" numbers match ours ### START CODE HERE ### (approx. 
6 lines of code)
    W1 = tf.get_variable('W1', [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed=1))
    b1 = tf.get_variable('b1', [25,1], initializer = tf.zeros_initializer())
    W2 = tf.get_variable('W2', [12,25], initializer = tf.contrib.layers.xavier_initializer(seed=1))
    b2 = tf.get_variable('b2', [12,1], initializer = tf.zeros_initializer())
    W3 = tf.get_variable('W3', [6,12], initializer = tf.contrib.layers.xavier_initializer(seed=1))
    b3 = tf.get_variable('b3', [6,1], initializer = tf.zeros_initializer())
    ### END CODE HERE ###

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2,
                  "W3": W3,
                  "b3": b3}

    return parameters

tf.reset_default_graph()
with tf.Session() as sess:
    parameters = initialize_parameters()
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))
```

**Expected Output**:

<table>
<tr><td> **W1** </td><td> < tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref > </td></tr>
<tr><td> **b1** </td><td> < tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref > </td></tr>
<tr><td> **W2** </td><td> < tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref > </td></tr>
<tr><td> **b2** </td><td> < tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref > </td></tr>
</table>

As expected, the parameters haven't been evaluated yet.

### 2.3 - Forward propagation in tensorflow

You will now implement the forward propagation module in tensorflow. The function takes in a dictionary of parameters and completes the forward pass. The functions you will be using are:

- `tf.add(...,...)` to do an addition
- `tf.matmul(...,...)` to do a matrix multiplication
- `tf.nn.relu(...)` to apply the ReLU activation

**Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`!

```
# GRADED FUNCTION: forward_propagation

def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                  the shapes are given in initialize_parameters

    Returns:
    Z3 -- the output of the last LINEAR unit
    """

    # Retrieve the parameters from the dictionary "parameters"
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    W3 = parameters['W3']
    b3 = parameters['b3']

    ### START CODE HERE ### (approx. 5 lines)
    # Numpy Equivalents:
    Z1 = tf.add(tf.matmul(W1, X), b1)     # Z1 = np.dot(W1, X) + b1
    A1 = tf.nn.relu(Z1)                   # A1 = relu(Z1)
    Z2 = tf.add(tf.matmul(W2, A1), b2)    # Z2 = np.dot(W2, A1) + b2
    A2 = tf.nn.relu(Z2)                   # A2 = relu(Z2)
    Z3 = tf.add(tf.matmul(W3, A2), b3)    # Z3 = np.dot(W3, A2) + b3
    ### END CODE HERE ###

    return Z3

tf.reset_default_graph()

with tf.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    print("Z3 = " + str(Z3))
```

**Expected Output**:

<table>
<tr><td> **Z3** </td><td> Tensor("Add_2:0", shape=(6, ?), dtype=float32) </td></tr>
</table>

You may have noticed that the forward propagation doesn't output any cache.
You will understand why below, when we get to brackpropagation. ### 2.4 Compute cost As seen before, it is very easy to compute the cost using: ```python tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...)) ``` **Question**: Implement the cost function below. - It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you. - Besides, `tf.reduce_mean` basically does the summation over the examples. ``` # GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the cost function """ # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...) logits = tf.transpose(Z3) labels = tf.transpose(Y) ### START CODE HERE ### (1 line of code) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels)) ### END CODE HERE ### return cost tf.reset_default_graph() with tf.Session() as sess: X, Y = create_placeholders(12288, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) cost = compute_cost(Z3, Y) print("cost = " + str(cost)) ``` **Expected Output**: <table> <tr> <td> **cost** </td> <td> Tensor("Mean:0", shape=(), dtype=float32) </td> </tr> </table> ### 2.5 - Backward propagation & parameter updates This is where you become grateful to programming frameworks. All the backpropagation and the parameters update is taken care of in 1 line of code. It is very easy to incorporate this line in the model. After you compute the cost function. You will create an "`optimizer`" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate. For instance, for gradient descent the optimizer would be: ```python optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost) ``` To make the optimization you would do: ```python _ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y}) ``` This computes the backpropagation by passing through the tensorflow graph in the reverse order. From cost to inputs. **Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable). ### 2.6 - Building the model Now, you will bring it all together! **Exercise:** Implement the model. You will be calling the functions you had previously implemented. ``` def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001, num_epochs = 1500, minibatch_size = 32, print_cost = True): """ Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX. 
Arguments: X_train -- training set, of shape (input size = 12288, number of training examples = 1080) Y_train -- test set, of shape (output size = 6, number of training examples = 1080) X_test -- training set, of shape (input size = 12288, number of training examples = 120) Y_test -- test set, of shape (output size = 6, number of test examples = 120) learning_rate -- learning rate of the optimization num_epochs -- number of epochs of the optimization loop minibatch_size -- size of a minibatch print_cost -- True to print the cost every 100 epochs Returns: parameters -- parameters learnt by the model. They can then be used to predict. """ ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables tf.set_random_seed(1) # to keep consistent results seed = 3 # to keep consistent results (n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set) n_y = Y_train.shape[0] # n_y : output size costs = [] # To keep track of the cost # Create Placeholders of shape (n_x, n_y) ### START CODE HERE ### (1 line) X, Y = create_placeholders(n_x, n_y) ### END CODE HERE ### # Initialize parameters ### START CODE HERE ### (1 line) parameters = initialize_parameters() ### END CODE HERE ### # Forward propagation: Build the forward propagation in the tensorflow graph ### START CODE HERE ### (1 line) Z3 = forward_propagation(X, parameters) ### END CODE HERE ### # Cost function: Add cost function to tensorflow graph ### START CODE HERE ### (1 line) cost = compute_cost(Z3, Y) ### END CODE HERE ### # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer. ### START CODE HERE ### (1 line) optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost) ### END CODE HERE ### # Initialize all the variables init = tf.global_variables_initializer() # Start the session to compute the tensorflow graph with tf.Session() as sess: # Run the initialization sess.run(init) # Do the training loop for epoch in range(num_epochs): epoch_cost = 0. # Defines a cost related to an epoch num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set seed = seed + 1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) for minibatch in minibatches: # Select a minibatch (minibatch_X, minibatch_Y) = minibatch # IMPORTANT: The line that runs the graph on a minibatch. # Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y). 
### START CODE HERE ### (1 line) _ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y}) ### END CODE HERE ### epoch_cost += minibatch_cost / num_minibatches # Print the cost every epoch if print_cost == True and epoch % 100 == 0: print ("Cost after epoch %i: %f" % (epoch, epoch_cost)) if print_cost == True and epoch % 5 == 0: costs.append(epoch_cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() # lets save the parameters in a variable parameters = sess.run(parameters) print ("Parameters have been trained!") # Calculate the correct predictions correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y)) # Calculate accuracy on the test set accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train})) print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test})) return parameters ``` Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes! ``` parameters = model(X_train, Y_train, X_test, Y_test) ``` **Expected Output**: <table> <tr> <td> **Train Accuracy** </td> <td> 0.999074 </td> </tr> <tr> <td> **Test Accuracy** </td> <td> 0.716667 </td> </tr> </table> Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy. **Insights**: - Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting. - Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters. ### 2.7 - Test with your own image (optional / ungraded exercise) Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right! ``` import scipy from PIL import Image from scipy import ndimage ## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "thumbs_up.jpg" ## END CODE HERE ## # We preprocess your image to fit your algorithm. fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T my_image_prediction = predict(my_image, parameters) plt.imshow(image) print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction))) ``` You indeed deserved a "thumbs-up" although as you can see the algorithm seems to classify it incorrectly. The reason is that the training set doesn't contain any "thumbs-up", so the model doesn't know how to deal with it! We call that a "mismatched data distribution" and it is one of the various of the next course on "Structuring Machine Learning Projects". 
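As a side note, the preprocessing cell above relies on `scipy.ndimage.imread` and `scipy.misc.imresize`, which have been removed from recent SciPy releases. If that import fails for you, a roughly equivalent sketch using PIL directly is shown below; it reuses `predict` and `parameters` from the cells above, the image file name is just a placeholder, and the extra division by 255 mirrors the scaling applied to the training images.

```python
import numpy as np
from PIL import Image

my_image_name = "thumbs_up.jpg"   # placeholder: put your own file in the "images" folder

# Load the picture, resize it to 64x64 and flatten it into a (12288, 1) column,
# scaled the same way as the training set (divided by 255).
img = Image.open("images/" + my_image_name).convert("RGB").resize((64, 64))
my_image_arr = np.asarray(img).reshape((1, 64 * 64 * 3)).T / 255.

my_image_prediction = predict(my_image_arr, parameters)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
```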
<font color='blue'>
**What you should remember**:
- Tensorflow is a programming framework used in deep learning.
- The two main object classes in tensorflow are Tensors and Operators.
- When you code in tensorflow you have to take the following steps:
    - Create a graph containing Tensors (Variables, Placeholders ...) and Operations (tf.matmul, tf.add, ...)
    - Create a session
    - Initialize the session
    - Run the session to execute the graph
- You can execute the graph multiple times, as you've seen in model().
- The backpropagation and optimization are done automatically when running the session on the "optimizer" object.
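To make those bullet points concrete, here is a small self-contained sketch (TensorFlow 1.x API, the same one used throughout this notebook) that builds a tiny graph, creates and initializes a session, and runs the graph repeatedly with a feed dictionary. It is only an illustration and not part of the graded assignment.

```python
import tensorflow as tf

# 1) Build the computation graph: a placeholder for the input, a variable to learn
x = tf.placeholder(tf.float32, name="x")
w = tf.Variable(0.0, name="w")
loss = tf.square(w * x - 10.0)                        # (w*x - 10)^2
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

# 2) Create a session and 3) initialize the variables
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    # 4) Run the graph as many times as needed, feeding the placeholder each time
    for _ in range(100):
        sess.run(train_op, feed_dict={x: 2.0})
    print("w after training:", sess.run(w))           # approaches 5.0
```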
<a href="https://colab.research.google.com/github/oonid/growth-hacking-with-nlp-sentiment-analysis/blob/master/create_dataset.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Evaluate Amazon Video Games Review Dataset ``` # ndjson to handle newline delimited json !pip install ndjson # update imbalanced-learn lib on colab !pip install --upgrade imbalanced-learn # all imports and related %matplotlib inline import pandas as pd import numpy as np import altair as alt import ndjson from collections import Counter from imblearn.under_sampling import RandomUnderSampler # get dataset, extract from gzip (overwrite), and preview data on file !wget http://deepyeti.ucsd.edu/jianmo/amazon/categoryFilesSmall/Video_Games_5.json.gz !yes y | gunzip Video_Games_5.json.gz !head Video_Games_5.json # load from file-like objects with open('Video_Games_5.json') as f: vg5 = ndjson.load(f) print('data loaded as {} with len {}'.format(type(vg5), len(vg5))) # sample out 2 data vg5[:2] # load list of dict as panda DataFrame df = pd.DataFrame(vg5) df.head() # describe to understand values of column overall (next as ratings) df.describe() # create copy of DataFrame with overall as index, to prepare plotting dfo = df.set_index('overall') dfo.head() # group data by column overall (currently as index) and count the variants dfo.groupby(dfo.index).count() # plot grouped data by overall related to column reviewText (next as reviews) dfo.groupby(dfo.index)['reviewText'].count().plot(kind='bar') # add altair chart based on sample solutions rating_counts = Counter(df.overall.tolist()) chart_data = pd.DataFrame( {'ratings': [str(e) for e in list(rating_counts.keys())], 'counts': list(rating_counts.values())}) chart = alt.Chart(chart_data).mark_bar().encode(x="ratings", y="counts") chart # dataset with only two columns (overall, reviewText) as numpy array X = df[['overall', 'reviewText']].to_numpy() print('dataset X shape: {} type: {}'.format(X.shape, type(X))) # using column overall as label y = df['overall'].to_numpy() print('label y shape: {} type: {}'.format(y.shape, type(y))) ``` # Generating small_corpus ``` # predefined sampling strategy sampling_strategy = {1.0: 1500, 2.0: 500, 3.0: 500, 4.0: 500, 5.0: 1500} random_state = 42 # to get identical results with sample solution rus = RandomUnderSampler(random_state=random_state, sampling_strategy=sampling_strategy) X_res, y_res = rus.fit_resample(X, y) print('initial label: {}'.format(Counter(y))) print('result label: {}'.format(Counter(y_res))) # convert from numpy array back to pandas DataFrame small_corpus = pd.DataFrame({'ratings': X_res[:, 0], 'reviews': X_res[:, 1]}) # set ratings column type as int32 small_corpus['ratings'] = small_corpus['ratings'].astype('int32') # get info of small_corpus DataFrame with total 1500+500+500+500+1500 entries small_corpus.info() small_corpus.head() # export small_corpus to csv (1500+500+500+500+1500), without index small_corpus.to_csv('small_corpus.csv', index=False) ``` # Generating big_corpus ``` random_state = 42 # to get identical results with sample solution np.random.seed(random_state) # get 100.000 on random ratings (1-5) as numpy array random_ratings = np.random.randint(low=1, high=6, size=100000) # create sampling strategy by count total ratings on random_ratings (dataframe) unique, counts = np.unique(random_ratings, return_counts=True) sampling_strategy = {} for k, v in zip(unique, counts): sampling_strategy[k] = v print('sampling_strategy: 
{}'.format(sampling_strategy)) rus = RandomUnderSampler(random_state=random_state, sampling_strategy=sampling_strategy) X_res, y_res = rus.fit_resample(X, y) print('initial label: {}'.format(Counter(y))) print('result label: {}'.format(Counter(y_res))) # convert from numpy array back to pandas DataFrame big_corpus = pd.DataFrame({'ratings': X_res[:, 0], 'reviews': X_res[:, 1]}) # set ratings column type as int32 big_corpus['ratings'] = big_corpus['ratings'].astype('int32') big_corpus.info() big_corpus.head() # export big_corpus to csv (100000) big_corpus.to_csv('big_corpus.csv') ```
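Before moving on, it can be worth sanity-checking the exported files. The short sketch below simply reads both CSVs back and prints their shapes and class balance; the file names are the ones written in the cells above.

```python
import pandas as pd

# Reload the exported corpora and confirm the ratings distribution survived the round trip
for fname in ['small_corpus.csv', 'big_corpus.csv']:
    corpus = pd.read_csv(fname)
    print(fname, corpus.shape)
    print(corpus['ratings'].value_counts().sort_index(), '\n')
```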
# Project Description Another CV2 tutorial this one from https://pythonprogramming.net/loading-images-python-opencv-tutorial/ ``` #http://tsaith.github.io/record-video-with-python-3-opencv-3-on-osx.html import numpy as np import cv2 cap = cv2.VideoCapture(0) # Capture video from camera # Get the width and height of frame width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH) + 0.5) height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT) + 0.5) # Define the codec and create VideoWriter object fourcc = cv2.VideoWriter_fourcc(*'mp4v') # Be sure to use the lower case out = cv2.VideoWriter('output.mp4', fourcc, 20.0, (width, height)) while(cap.isOpened()): ret, frame = cap.read() if ret == True: frame = cv2.flip(frame,0) # write the flipped frame out.write(frame) cv2.imshow('frame',frame) if (cv2.waitKey(1) & 0xFF) == ord('q'): # Hit `q` to exit break else: break # Release everything if job is finished out.release() cap.release() cv2.destroyAllWindows() ``` # Writting stuff on an image ``` import numpy as np import cv2 img = cv2.imread('watch.jpg',cv2.IMREAD_COLOR) cv2.line(img,(0,0),(200,300),(255,255,255),50) cv2.rectangle(img,(500,250),(1000,500),(0,0,255),15) cv2.circle(img,(447,63), 63, (0,255,0), -1) pts = np.array([[100,50],[200,300],[700,200],[500,100]], np.int32) pts = pts.reshape((-1,1,2)) cv2.polylines(img, [pts], True, (0,255,255), 3) font = cv2.FONT_HERSHEY_SIMPLEX cv2.putText(img,'OpenCV Tuts!',(0,130), font, 1, (200,255,155), 2, cv2.LINE_AA) font = cv2.FONT_HERSHEY_SIMPLEX cv2.putText(img,'OpenCV Tuts!',(10,500), font, 6, (200,255,155), 13, cv2.LINE_AA) cv2.imshow('image',img) cv2.waitKey(0) cv2.destroyAllWindows() import cv2 import numpy as np cap = cv2.VideoCapture(0) while(1): # Take each frame _, frame = cap.read() hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV) lower_red = np.array([30,150,50]) upper_red = np.array([255,255,180]) mask = cv2.inRange(hsv, lower_red, upper_red) res = cv2.bitwise_and(frame,frame, mask= mask) laplacian = cv2.Laplacian(frame,cv2.CV_64F) sobelx = cv2.Sobel(frame,cv2.CV_64F,1,0,ksize=5) sobely = cv2.Sobel(frame,cv2.CV_64F,0,1,ksize=5) cv2.imshow('Original',frame) cv2.imshow('Mask',mask) cv2.imshow('laplacian',laplacian) cv2.imshow('sobelx',sobelx) cv2.imshow('sobely',sobely) k = cv2.waitKey(5) & 0xFF if k == 27: break cv2.destroyAllWindows() cap.release() import cv2 import numpy as np cap = cv2.VideoCapture(0) while(1): _, frame = cap.read() hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV) lower_red = np.array([30,150,50]) upper_red = np.array([255,255,180]) mask = cv2.inRange(hsv, lower_red, upper_red) res = cv2.bitwise_and(frame,frame, mask= mask) cv2.imshow('Original',frame) edges = cv2.Canny(frame,100,200) cv2.imshow('Edges',edges) k = cv2.waitKey(5) & 0xFF if k == 27: break cv2.destroyAllWindows() cap.release() import cv2 import numpy as np img_rgb = cv2.imread('opencv-template-matching-python-tutorial.jpg') img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY) template = cv2.imread('opencv-template-for-matching.jpg',0) w, h = template.shape[::-1] res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED) threshold = 0.8 loc = np.where( res >= threshold) for pt in zip(*loc[::-1]): cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0,255,255), 2) cv2.imshow('Detected',img_rgb) import cv2 # Opens the Video file cap= cv2.VideoCapture('IMG_2128.MOV') i=0 while(cap.isOpened()): ret, frame = cap.read() if ret == False: break cv2.imwrite('kang'+str(i)+'.jpg',frame) i+=1 cap.release() cv2.destroyAllWindows() import cv2 # Opens the Video file cap= 
cv2.VideoCapture('IMG_2128.MOV') i=1 while(cap.isOpened()): ret, frame = cap.read() if ret == False: break if i%10 == 0: cv2.imwrite('kang'+str(i)+'.jpg',frame) i+=1 cap.release() cv2.destroyAllWindows() ```
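Going the other way is sometimes useful as well. The sketch below stitches the frames saved above (`kang*.jpg`) back into a short clip with `cv2.VideoWriter`; it assumes at least one frame was written and that all frames share the same size, and the output file name and frame rate are arbitrary.

```python
import glob
import re
import cv2

# Collect the saved frames and sort them by their numeric suffix (kang10.jpg, kang20.jpg, ...)
frames = sorted(glob.glob('kang*.jpg'),
                key=lambda p: int(re.search(r'\d+', p).group()))

first = cv2.imread(frames[0])
height, width = first.shape[:2]

fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter('frames_rebuilt.mp4', fourcc, 2.0, (width, height))
for path in frames:
    out.write(cv2.imread(path))
out.release()
```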
# Hierarchical Clustering Lab In this notebook, we will be using sklearn to conduct hierarchical clustering on the [Iris dataset](https://archive.ics.uci.edu/ml/datasets/iris) which contains 4 dimensions/attributes and 150 samples. Each sample is labeled as one of the three type of Iris flowers. In this exercise, we'll ignore the labeling and cluster based on the attributes, then we'll compare the results of different hierarchical clustering techniques with the original labels to see which one does a better job in this scenario. We'll then proceed to visualize the resulting cluster hierarchies. ## 1. Importing the Iris dataset ``` from sklearn import datasets iris = datasets.load_iris() ``` A look at the first 10 samples in the dataset ``` iris.data[:10] ``` ```iris.target``` contains the labels that indicate which type of Iris flower each sample is ``` iris.target ``` ## 2. Clustering Let's now use sklearn's [```AgglomerativeClustering```](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.AgglomerativeClustering.html) to conduct the heirarchical clustering ``` from sklearn.cluster import AgglomerativeClustering # Hierarchical clustering # Ward is the default linkage algorithm, so we'll start with that ward = AgglomerativeClustering(n_clusters=3) ward_pred = ward.fit_predict(iris.data) ``` Let's also try complete and average linkages **Exercise**: * Conduct hierarchical clustering with complete linkage, store the predicted labels in the variable ```complete_pred``` * Conduct hierarchical clustering with average linkage, store the predicted labels in the variable ```avg_pred``` Note: look at the documentation of [```AgglomerativeClustering```](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.AgglomerativeClustering.html) to find the appropriate value to pass as the ```linkage``` value ``` # Hierarchical clustering using complete linkage # TODO: Create an instance of AgglomerativeClustering with the appropriate parameters complete = AgglomerativeClustering(n_clusters=3, linkage = 'complete') # Fit & predict # TODO: Make AgglomerativeClustering fit the dataset and predict the cluster labels complete_pred = complete.fit_predict(iris.data) # Hierarchical clustering using average linkage # TODO: Create an instance of AgglomerativeClustering with the appropriate parameters avg = AgglomerativeClustering(n_clusters = 3, linkage = 'average') # Fit & predict # TODO: Make AgglomerativeClustering fit the dataset and predict the cluster labels avg_pred = avg.fit_predict(iris.data) ``` To determine which clustering result better matches the original labels of the samples, we can use ```adjusted_rand_score``` which is an *external cluster validation index* which results in a score between -1 and 1, where 1 means two clusterings are identical of how they grouped the samples in a dataset (regardless of what label is assigned to each cluster). Cluster validation indices are discussed later in the course. 
```
from sklearn.metrics import adjusted_rand_score

ward_ar_score = adjusted_rand_score(iris.target, ward_pred)
```

**Exercise**:
* Calculate the Adjusted Rand score of the clusters resulting from complete linkage and average linkage

```
# TODO: Calculate the adjusted Rand score for the complete linkage clustering labels
complete_ar_score = adjusted_rand_score(iris.target, complete_pred)

# TODO: Calculate the adjusted Rand score for the average linkage clustering labels
avg_ar_score = adjusted_rand_score(iris.target, avg_pred)
```

Which algorithm results in the higher Adjusted Rand Score?

```
print( "Scores: \nWard:", ward_ar_score,"\nComplete: ", complete_ar_score, "\nAverage: ", avg_ar_score)
```

## 3. The Effect of Normalization on Clustering

Can we improve on this clustering result? Let's take another look at the dataset

```
iris.data[:15]
```

Looking at this, we can see that the fourth column has smaller values than the rest of the columns, and so its variance counts for less in the clustering process (since clustering is based on distance). Let us [rescale](https://en.wikipedia.org/wiki/Feature_scaling) the dataset so that the distance computation is not dominated by the columns with the largest raw values. sklearn provides a useful utility called ```preprocessing.normalize()``` that we can use here: it rescales each sample (row) to unit length, which for this dataset leaves every value between 0 and 1.

```
from sklearn import preprocessing

normalized_X = preprocessing.normalize(iris.data)
normalized_X[:10]
```

Now all the values lie between 0 and 1. Would clustering the dataset after this transformation lead to a better clustering? (one that better matches the original labels of the samples)

```
ward = AgglomerativeClustering(n_clusters=3)
ward_pred = ward.fit_predict(normalized_X)

complete = AgglomerativeClustering(n_clusters=3, linkage="complete")
complete_pred = complete.fit_predict(normalized_X)

avg = AgglomerativeClustering(n_clusters=3, linkage="average")
avg_pred = avg.fit_predict(normalized_X)

ward_ar_score = adjusted_rand_score(iris.target, ward_pred)
complete_ar_score = adjusted_rand_score(iris.target, complete_pred)
avg_ar_score = adjusted_rand_score(iris.target, avg_pred)

print( "Scores: \nWard:", ward_ar_score,"\nComplete: ", complete_ar_score, "\nAverage: ", avg_ar_score)
```

## 4. Dendrogram visualization with scipy

Let's visualize the highest scoring clustering result. To do that, we'll need to use Scipy's [```linkage```](https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.linkage.html) function to perform the clustering again so we can obtain the linkage matrix it will later use to visualize the hierarchy

```
# Import scipy's linkage function to conduct the clustering
from scipy.cluster.hierarchy import linkage

# Specify the linkage type. Scipy accepts 'ward', 'complete', 'average', as well as other values
# Pick the one that resulted in the highest Adjusted Rand Score
linkage_type = 'ward'

linkage_matrix = linkage(normalized_X, linkage_type)
```

Plot using scipy's [dendrogram](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.cluster.hierarchy.dendrogram.html) function

```
from scipy.cluster.hierarchy import dendrogram
import matplotlib.pyplot as plt
plt.figure(figsize=(22,18))

# plot using 'dendrogram()'
dendrogram(linkage_matrix)

plt.show()
```

## 5. Visualization with Seaborn's ```clustermap```
The [seaborn](http://seaborn.pydata.org/index.html) plotting library for Python can plot a [clustermap](http://seaborn.pydata.org/generated/seaborn.clustermap.html): a detailed dendrogram paired with a heatmap that visualizes the dataset itself. It conducts the clustering as well, so we only need to pass it the dataset and the linkage type we want, and it will use scipy internally to conduct the clustering.

```
import seaborn as sns

sns.clustermap(normalized_X, figsize=(12,18), method=linkage_type, cmap='viridis')

# Expand figsize to a value like (18, 50) if you want the sample labels to be readable
# The drawback is that you'll need more scrolling to observe the dendrogram
plt.show()
```

Looking at the colors of the dimensions, can you observe how they differ between the three types of flowers? You should at least be able to notice how one is vastly different from the other two (in the top third of the image).
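If you also want hard cluster assignments out of the scipy hierarchy (rather than just the pictures), you can cut the tree and score it the same way as before. A small sketch, assuming `linkage_matrix`, `iris` and `adjusted_rand_score` from the cells above:

```python
from scipy.cluster.hierarchy import fcluster

# Cut the dendrogram so that exactly 3 flat clusters remain
scipy_pred = fcluster(linkage_matrix, t=3, criterion='maxclust')

# Compare against the original labels, just like the sklearn results above
print("scipy ward ARI:", adjusted_rand_score(iris.target, scipy_pred))
```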
``` import pandas as pd import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline # import data from the github page of the book data = pd.read_csv('https://raw.githubusercontent.com/Develop-Packt/Exploring-Absenteeism-at-Work/master/data/Absenteeism_at_work.csv', sep=";") # print dimensionality of the data, columns, types and missing values print(f"Data dimension: {data.shape}") for col in data.columns: print(f"Column: {col:35} | type: {str(data[col].dtype):7} | missing values: {data[col].isna().sum():3d}") # compute statistics on numerical features data.describe().T # define encoding dictionaries month_encoding = {1: "January", 2: "February", 3: "March", 4: "April", 5: "May", 6: "June", 7: "July", 8: "August", 9: "September", 10: "October", 11: "November", 12: "December", 0: "Unknown"} dow_encoding = {2: "Monday", 3: "Tuesday", 4: "Wednesday", 5: "Thursday", 6: "Friday"} season_encoding = {1: "Spring", 2: "Summer", 3: "Fall", 4: "Winter"} education_encoding = {1: "high_school", 2: "graduate", 3: "postgraduate", 4: "master_phd"} yes_no_encoding = {0: "No", 1: "Yes"} # backtransform numerical variables to categorical preprocessed_data = data.copy() preprocessed_data["Month of absence"] = preprocessed_data["Month of absence"]\ .apply(lambda x: month_encoding[x]) preprocessed_data["Day of the week"] = preprocessed_data["Day of the week"]\ .apply(lambda x: dow_encoding[x]) preprocessed_data["Seasons"] = preprocessed_data["Seasons"]\ .apply(lambda x: season_encoding[x]) preprocessed_data["Education"] = preprocessed_data["Education"]\ .apply(lambda x: education_encoding[x]) preprocessed_data["Disciplinary failure"] = preprocessed_data["Disciplinary failure"]\ .apply(lambda x: yes_no_encoding[x]) preprocessed_data["Social drinker"] = preprocessed_data["Social drinker"]\ .apply(lambda x: yes_no_encoding[x]) preprocessed_data["Social smoker"] = preprocessed_data["Social smoker"]\ .apply(lambda x: yes_no_encoding[x]) # transform columns preprocessed_data.head().T ``` **Exercise 01: Identifying Disease Reasons for Absence** ``` # define function, which checks if the provided integer value # is contained in the ICD or not def in_icd(val): return "Yes" if val >= 1 and val <= 21 else "No" # add Disease column preprocessed_data["Disease"] = preprocessed_data["Reason for absence"]\ .apply(in_icd) # plot value counts plt.figure(figsize=(10, 8)) sns.countplot(data=preprocessed_data, x='Disease') plt.savefig('figs/disease_plot.png', format='png', dpi=300) ```
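A natural follow-up to Exercise 01 is to check whether disease-related absences also tend to last longer. The sketch below groups the absence duration by the new `Disease` column; it assumes the standard UCI column name `Absenteeism time in hours`, which should match the file loaded above.

```python
# Average and total hours of absence, split by whether the reason is an ICD disease
duration_by_disease = preprocessed_data.groupby("Disease")["Absenteeism time in hours"].agg(
    ["count", "mean", "sum"])
print(duration_by_disease)
```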
## 8.2 创建超链接 超链接指按内容链接,可以从一个文本内容指向文本其他内容或其他文件、网址等。超链接可以分为文本内链接、网页链接以及本地文件链接。LaTeX提供了`hyperref`宏包,可用于生成超链接。在使用时,只需在前导代码中申明宏包即可,即`\usepackage{hyperref}`。 ### 8.2.1 超链接类型 #### 文本内链接 在篇幅较大的文档中,查阅内容会比较繁琐,因此,往往会在目录中使用超链接来进行文本内容的快速高效浏览。可以使用`hyperref`宏包创建文本内超链接。 【**例8-4**】使用`\usepackage{hyperref}`创建一个简单的目录链接文本内容的例子。 ```tex \documentclass{book} \usepackage{blindtext} \usepackage{hyperref} %超链接包 \begin{document} \frontmatter \tableofcontents \clearpage \addcontentsline{toc}{chapter}{Foreword} {\huge {\bf Foreword}} This is foreword. \clearpage \mainmatter \chapter{First Chapter} This is chapter 1. \clearpage \section{First section} \label{second} This is section 1.1. \end{document} ``` 编译后文档如图8.2.1所示。 <p align="center"> <table> <tr> <td><img align="middle" src="graphics/example8_2_1_1.png" width="300"></td> <td><img align="middle" src="graphics/example8_2_1_2.png" width="300"></td> <td><img align="middle" src="graphics/example8_2_1_3.png" width="300"></td> <td><img align="middle" src="graphics/example8_2_1_4.png" width="300"></td> </tr> </table> </p> <center><b>图8.2.4</b> 编译后的文档</center> 在导入 `hyperref` 时必须非常小心,一般而言,它必须是最后一个要导入的包。 #### 网址链接 众所周知,在文档中插入网址之类的文本同样需要用到超链接,同样的,使用`hyperref`宏包可以创建网页超链接。有时我们需要将超链接命名并隐藏网址,这时我们可以使用`href`命令进行插入;有时,我们插入的网址链接太长,但LaTeX不会自动换行,往往会造成格式混乱的问题,这时,我们可以使用`url`工具包,并在该工具包中声明一个参数即可解决这个问题,相关命令为`\usepackage[hyphens]{url}`。 > 参考[Line breaks in URLs](https://latex.org/forum/viewtopic.php?f=44&t=4022)。 【**例8-5**】在LaTeX中使用`hyperref`及`url`工具包插入网页链接并设置自动换行。 ```tex \documentclass[12pt]{article} \usepackage[hyphens]{url} \usepackage{hyperref} \begin{document} This is the website of open-source latex-cookbook repository: \href{https://github.com/xinychen/latex-cookbook}{LaTeX-cookbook} or go to the next url: \url{https://github.com/xinychen/latex-cookbook}. \end{document} ``` 编译后文档如图8.2.3所示。 <p align="center"> <table> <tr> <td><img align="middle" src="graphics/example8_2_2.png" width="300"></td> </tr> </table> </p> <center><b>图8.2.2</b> 编译后的文档</center> #### 本地文件链接 有时,需要将文本与本地文件进行链接,`href`命令也可用于打开本地文件。 【**例8-6**】在LaTeX中使用`href`命令打开本地文件。 ```tex \documentclass[12pt]{article} \usepackage[hyphens]{url} \usepackage{hyperref} \begin{document} This is the text of open-source latex-cookbook repository: \href{run:./LaTeX-cookbook.dox}{LaTeX-cookbook}. \end{document} ``` 编译后文档如图8.2.3所示。 <p align="center"> <table> <tr> <td><img align="middle" src="graphics/example8_2_3.png" width="300"></td> </tr> </table> </p> <center><b>图8.2.3</b> 编译后的文档</center> ### 8.2.2 超链接格式 当然,有时候为了突出超链接,也可以在工具包`hyperref`中设置特定的颜色,设置的命令为`\hypersetup`,一般放在前导代码中,例如`colorlinks = true, linkcolor=blue, urlcolor = blue, filecolor=magenta`。默认设置以单色样式的空间字体打印链接,`\urlstyle{same}`命令将改变这个样式,并以与文本其余部分相同的样式显示链接。 > 参考[Website address](https://latex.org/forum/viewtopic.php?f=44&t=5115)。 【**例8-7**】在LaTeX中使用`hyperref`工具包插入超链接并设置超链接颜色为蓝色。 ```tex \documentclass{book} \usepackage{blindtext} \usepackage{hyperref} %超链接包 \hypersetup{colorlinks = true, %链接将被着色,默认颜色是红色 linkcolor=blue, % 内部链接显示为蓝色 urlcolor = cyan, % 网址链接为青色 filecolor=magenta} % 本地文件链接为洋红色 \urlstyle{same} \begin{document} \frontmatter \tableofcontents \clearpage \addcontentsline{toc}{chapter}{Foreword} {\huge {\bf Foreword}} This is foreword. \clearpage \mainmatter \chapter{First Chapter} This is chapter 1. \clearpage \section{First section} \label{second} This is section 1.1. This is the website of open-source latex-cookbook repository: \href{https://github.com/xinychen/latex-cookbook}{LaTeX-cookbook} or go to the next url: \url{https://github.com/xinychen/latex-cookbook}. 
This is the text of open-source latex-cookbook repository: \href{run:./LaTeX-cookbook.dox}{LaTeX-cookbook} \end{document} ``` 编译后文档如图8.2.4所示。 <p align="center"> <table> <tr> <td><img align="middle" src="graphics/example8_2_4_1.png" width="300"></td> <td><img align="middle" src="graphics/example8_2_4_2.png" width="300"></td> <td><img align="middle" src="graphics/example8_2_4_3.png" width="300"></td> <td><img align="middle" src="graphics/example8_2_4_4.png" width="300"></td> </tr> </table> </p> <center><b>图8.2.4</b> 编译后的文档</center> 【回放】[**8.1 图表和公式的索引**](https://nbviewer.jupyter.org/github/xinychen/latex-cookbook/blob/main/chapter-8/section1.ipynb) 【继续】[**8.3 Bibtex用法**](https://nbviewer.jupyter.org/github/xinychen/latex-cookbook/blob/main/chapter-8/section3.ipynb) ### License <div class="alert alert-block alert-danger"> <b>This work is released under the MIT license.</b> </div>
``` import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt from sklearn.preprocessing import normalize import seaborn as sns # list of models # Commented few models because they produced very big results which interfere visualization models = [ # 'RandomForestRegressor', # 'AdaBoostRegressor', # 'BaggingRegressor', # 'DecisionTreeRegressor', 'DummyRegressor', 'ExtraTreeRegressor', #'ExtraTreesRegressor', #'GaussianProcessRegressor', #'GradientBoostingRegressor', #'HuberRegressor', 'KNeighborsRegressor', #'MLPRegressor', #'PassiveAggressiveRegressor', #'RANSACRegressor', #'SGDRegressor', #'TheilSenRegressor' ] buildingtypes = ['Office', 'PrimClass', 'UnivClass', 'UnivDorm', 'UnivLab'] # Generate different line styles # 24 different different lines will be generated lineStyles = ['-', '--', '-.', ':'] lineColors = ['b', 'g', 'r', 'c', 'm', 'y'] styles = [] for j in range(3): for i in range(5): styles.append(lineColors[i] + lineStyles[(i + j) % 4]) def visualize(arg): for buildingtype in buildingtypes: # Draw lines on single plot plt.style.use('seaborn-whitegrid') plt.figure(figsize=(15,3)) for i in range(len(models)): dataframes = [] data = pd.read_csv('../results/' + models[i] + '_metrics_' + buildingtype + '.csv') data = data.drop(columns=['Unnamed: 0']) data['buidingtype'] = buildingtype dataframes.append(data) result = pd.concat(dataframes) rows = result[result['buidingtype']==buildingtype]['MAPE'] # Single line creator value, = plt.plot(rows, styles[i], label=models[i]) # Draw plot plt.title(buildingtype, loc='left') plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.) plt.ylabel(arg) plt.xlabel('Buildings') plt.show() visualize('MAPE') ``` # Box plot array visualization Based on this: https://stackoverflow.com/questions/41384040/subplot-for-seaborn-boxplot ``` f, axes = plt.subplots(5, 3, figsize=(11,11), sharex='col') plt.style.use('seaborn-whitegrid') for buildingtype in buildingtypes: # Draw lines on single plot MAPE = {} NMBE = {} CVRSME = {} for i in range(len(models)): dataframes = [] data = pd.read_csv('../results/' + models[i] + '_metrics_' + buildingtype + '.csv') data = data.drop(columns=['Unnamed: 0']) data['buidingtype'] = buildingtype dataframes.append(data) result = pd.concat(dataframes) MAPE[models[i]] = result[result['buidingtype']==buildingtype]['MAPE'] NMBE[models[i]] = result[result['buidingtype']==buildingtype]['NMBE'] CVRSME[models[i]] = result[result['buidingtype']==buildingtype]['CVRSME'] MAPE_df = pd.DataFrame(MAPE) MAPE_df = MAPE_df[MAPE_df<100].melt() ax1 = sns.boxplot(data=MAPE_df, x='value', y='variable', ax=axes[buildingtypes.index(buildingtype),0]) ax1.set(ylabel=buildingtype, xlabel="MAPE") NMBE_df = pd.DataFrame(NMBE) NMBE_df = NMBE_df.melt() #[NMBE_df<100] ax2 = sns.boxplot(data=NMBE_df, x='value', y='variable', ax=axes[buildingtypes.index(buildingtype),1]) ax2.set(ylabel="", xlabel="NMBE", yticks=[]) CVRSME_df = pd.DataFrame(CVRSME) CVRSME_df = CVRSME_df.melt() #[NMBE_df<100] ax3 = sns.boxplot(data=CVRSME_df, x='value', y='variable', ax=axes[buildingtypes.index(buildingtype),2]) ax3.set(ylabel="", xlabel="CVRSME", yticks=[]) # sns.boxplot(y="b", x= "a", data=rows, orient='v' ) #, ax=axes[0] # print(rows) # Single line creator # value, = plt.plot(rows, styles[i], label=models[i]) # sns.boxplot(y="b", x= "a", data=df, orient='v' , ax=axes[0]) # sns.boxplot(y="c", x= "a", data=df, orient='v' , ax=axes[1]) ```
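To complement the box plots with a single ranking number, the same CSVs can be aggregated into a median-MAPE table per model and building type. A minimal sketch, reusing the `models`, `buildingtypes` and file-naming scheme defined above:

```python
import pandas as pd

# Median MAPE per (building type, model), read from the same results files as above
rows = {}
for buildingtype in buildingtypes:
    medians = {}
    for model in models:
        data = pd.read_csv('../results/' + model + '_metrics_' + buildingtype + '.csv')
        medians[model] = data['MAPE'].median()
    rows[buildingtype] = medians

summary = pd.DataFrame(rows).T  # building types as rows, models as columns
print(summary.round(2))
```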
``` suppressMessages(library("mc2d")) library("scales") library("ggplot2") library("gridExtra") ``` # Risk Study for REPLACE ME See the [ISO 27005 Risk Cookbook](http://www.businessofsecurity.com/docs/FAIR%20-%20ISO_IEC_27005%20Cookbook.pdf) for a more detailed explanation of this template. # Asset Define the asset or assets at risk # Threat Community Explain the threat community. This should include where they operate, how effective they are, and any additional details that help understand them. ## Threat Capability Define the ability for the threat agent to overcome the controls. The guideline for values here are as follows: |Rating |Value | |----------------------|------| |Very High (Top 2%) |98-100| |High (Top 16%) |84-97 | |Moderate |17-84 | |Low (Bottom 16%) |3-16 | |Very Low (Bottom 2%) |0-2 | ``` tcap.min <- 0 tcap.likely <- 50 tcap.max <- 100 tcap.confidence <- 10 ``` # Controls Define the controls that resist the threat community. Provide any necessary links and descriptions. ## Control Strength Define the ability of the controls in play to overcome the threat agents. |Rating |Value | |----------------------|------| |Very High (Top 2%) |98-100| |High (Top 16%) |84-97 | |Moderate |17-84 | |Low (Bottom 16%) |3-16 | |Very Low (Bottom 2%) |0-2 | ``` cs.min <- 0 cs.likely <- 50 cs.max <- 100 cs.confidence <- 10 ``` # Threat Event Frequency Threat Event Frequency. Number assumes an annual value. Example values are as follows: |Rating |Value | |---------|------| |Very High|> 100 | |High |10-100| |Moderate |1-10 | |Low |> .1 | |Very Low |< .1 | ``` tef <- .25 ``` # Loss Magnitude Define the types of loss that could occur during a loss event for this study. |Primary |ISO/IEC 27005 Direct Operational Impacts | |:--------------|:-------------------------------------------------------------------------| |Productivity |The financial replacement value of lost (part of) asset | |Response |The cost of acquisition, configuration, and installation of the new asset | |Replacement |The cost of suspended operations due to the incident | | |Impact results in an information security breach | |Secondary |ISO/IEC 27005 Indirect Operational Impacts | |:-----------------------|:------------------------------------------------------------------------| |Competitive Advantage |Opportunity cost | |Fines/Judgments |Legal or regulatory actions levied against an organization including bail| |Reputation |Potential misuse of information obtained through a security breach | | |Violation of statutory or regulatory obligations | | |Violation of ethical codes of conduct | ## Probable Loss Set the probable amount for a single loss event. This is a combination in dollars of both the primary and secondary loss factors. ``` loss.probable <- 100000 ``` ## Worst Case Loss Set the worst case amount a single loss event. 
This is a combination in dollars of both the primary and secondary loss factors ``` loss.worstCase <- 1000000 ``` # Qualified risk based on loss tolerance ``` loss.veryHigh <- 10000000 loss.high <- 1000000 loss.moderate <- 100000 loss.low <- 50000 loss.veryLow <- 10000 ``` # Generate distribution of samples ``` sampleSize <- 100000 cs <- rpert(sampleSize, cs.min, cs.likely, cs.max, cs.confidence) tcap <- rpert(sampleSize, tcap.min, tcap.likely, tcap.max, tcap.confidence) csPlot <- ggplot(data.frame(cs), aes(x = cs)) csPlot <- csPlot + geom_histogram(aes(y = ..density..), color="black",fill="white", binwidth=1) csPlot <- csPlot + geom_density(fill="steelblue",alpha=2/3) csPlot <- csPlot + theme_bw() csPlot <- csPlot + labs(title="Control Strength", x="Sample Value", y="Density") csPlot <- csPlot + scale_x_continuous(breaks=seq(0,100, by=10)) tcapPlot <- ggplot(data.frame(tcap), aes(x = tcap)) tcapPlot <- tcapPlot + geom_histogram(aes(y = ..density..), color="black",fill="white", binwidth=1) tcapPlot <- tcapPlot + geom_density(fill="steelblue",alpha=2/3) tcapPlot <- tcapPlot + theme_bw() tcapPlot <- tcapPlot + labs(title="Threat Capability", x="Sample Value", y="Density") tcapPlot <- tcapPlot + scale_x_continuous(breaks=seq(0,100, by=10)) grid.arrange(csPlot, tcapPlot, heights=4:5, ncol=2) ``` # Vulnerability Function ``` CalculateVulnerability <- function() { if (sampleSize < 100) { stop("Sample size needs to be at least 100 to get statistically significant results") } vulnerability <- 0 for (i in 1:sampleSize) { if (tcap[i] > cs[i]) { vulnerability <- vulnerability + 1 } } return(vulnerability / sampleSize) } ``` # Loss Event Frequency Function ``` CalculateLossEventFrequency <- function() { return(CalculateVulnerability() * tef) } ``` # Risk Function ``` CalculateRisk <- function(loss) { if (loss >= loss.veryHigh) { return("Very High") } else if (loss < loss.veryHigh && loss >= loss.high) { return("High") } else if (loss < loss.high && loss >= loss.moderate) { return("Moderate") } else if (loss < loss.moderate && loss >= loss.veryLow) { return("Low") } else { return("Very Low") } } ``` # Annualized Loss Function ``` CalculateAnnualizedLoss <- function(lef, lm) { return(lm * lef) } ``` # Calculate ``` lossEventFrequency <- CalculateLossEventFrequency() worstCaseLoss <- CalculateAnnualizedLoss(lossEventFrequency, loss.worstCase) probableLoss <- CalculateAnnualizedLoss(lossEventFrequency, loss.probable) worstCaseRisk <- CalculateRisk(worstCaseLoss) probableRisk <- CalculateRisk(probableLoss) ``` # Final Results ``` cat("Probable Risk:", probableRisk, dollar_format()(probableLoss), "\n") cat("Worst Case Risk:", worstCaseRisk, dollar_format()(worstCaseLoss), "\n") ``` # Risk Treatments Document any risk treatments that may come out of this study.
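For readers who prefer Python over R, the core of the simulation above can be reproduced with NumPy. The sketch below assumes the modified-PERT parameterization used by `mc2d::rpert` (shape parameter mapped onto a scaled Beta distribution) and the same inputs as this study; it is only an illustration, not a replacement for the R notebook.

```python
import numpy as np

rng = np.random.default_rng(0)

def rpert(n, lo, mode, hi, shape=4.0):
    # Modified-PERT sampling via a scaled Beta distribution (assumed to match mc2d::rpert)
    alpha = 1 + shape * (mode - lo) / (hi - lo)
    beta = 1 + shape * (hi - mode) / (hi - lo)
    return lo + (hi - lo) * rng.beta(alpha, beta, size=n)

n = 100000
tcap = rpert(n, 0, 50, 100, shape=10)   # threat capability samples
cs = rpert(n, 0, 50, 100, shape=10)     # control strength samples

vulnerability = np.mean(tcap > cs)      # fraction of trials where the threat overcomes the controls
tef = 0.25                              # threat event frequency (per year)
lef = vulnerability * tef               # loss event frequency

print("vulnerability ~", round(vulnerability, 3))
print("probable annualized loss ~", round(lef * 100000, 2))
print("worst-case annualized loss ~", round(lef * 1000000, 2))
```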
# Testing ARMA hidden semi-Markov models ``` import numpy as np import seaborn as sns import time from types import SimpleNamespace from bioslds import sources from bioslds.arma import Arma, make_random_arma from bioslds.arma_hsmm import sample_switching_models, ArmaHSMM from bioslds.plotting import FigureManager ``` ## Test `sample_switching_models` ### Generate a sawtooth signal ``` sawtooth = SimpleNamespace( arma1=Arma([1.0], [], bias=0.05, default_source=sources.Constant(0)), arma2=Arma([1.0], [], bias=-0.05, default_source=sources.Constant(0)), usage_seq=np.tile(np.repeat([0, 1], 20), 10), ) sawtooth.n = len(sawtooth.usage_seq) sawtooth.sig = sample_switching_models( [sawtooth.arma1, sawtooth.arma2], sawtooth.usage_seq ) with FigureManager() as (_, ax): ax.plot(sawtooth.sig) ``` ### Generate a noisy step signal ``` rng = np.random.default_rng(1) noisy_step = SimpleNamespace( arma1=Arma([0.8], [], bias=1.0, default_source=sources.GaussianNoise(1, scale=0.1)), arma2=Arma( [0.75], [], bias=-0.5, default_source=sources.GaussianNoise(2, scale=0.1) ), arma3=Arma( [0.85], [], bias=0.2, default_source=sources.GaussianNoise(3, scale=0.1) ), usage_seq=np.repeat(rng.integers(low=0, high=3, size=10), 20), ) noisy_step.n = len(sawtooth.usage_seq) noisy_step.sig = sample_switching_models( [noisy_step.arma1, noisy_step.arma2, noisy_step.arma3], noisy_step.usage_seq ) with FigureManager() as (_, ax): ax.plot(noisy_step.sig, "k") ax.set_xlabel("time step") ax.set_ylabel("signal") ax2 = ax.twinx() ax2.plot(noisy_step.usage_seq, c="C1", ls="--") ax2.set_ylabel("state", color="C1") ax2.tick_params(axis="y", labelcolor="C1") ax2.spines["right"].set_color("C1") ax2.set_yticks([0, 1, 2]) sns.despine(ax=ax2, left=True, right=False, offset=10, bottom=True) ``` ## Test `ArmaHSMM` ### Generate a signal with switching ARs, using minimal dwell time ``` random_switching = SimpleNamespace( arma1=Arma([0.8], [], default_source=sources.GaussianNoise(1)), arma2=Arma([-0.5], [], default_source=sources.GaussianNoise(2)), n=10000, ) random_switching.arma_hsmm = ArmaHSMM( [random_switching.arma1, random_switching.arma2], min_dwell=15, dwell_times=[25, 35], ) ( random_switching.sig, random_switching.u, random_switching.usage_seq, ) = random_switching.arma_hsmm.transform( random_switching.n, return_input=True, return_usage_seq=True ) with FigureManager() as (_, ax): ax.plot(random_switching.sig[:100], "k") ax.set_xlabel("time step") ax.set_ylabel("signal") ax2 = ax.twinx() ax2.plot(random_switching.usage_seq[:100], c="C1", ls="--") ax2.set_ylabel("state", color="C1") ax2.tick_params(axis="y", labelcolor="C1") ax2.spines["right"].set_color("C1") ax2.set_yticks([0, 1]) sns.despine(ax=ax2, left=True, right=False, offset=10, bottom=True) with FigureManager(1, 2) as (_, axs): for i, ax in enumerate(axs): crt_sig = random_switching.sig[random_switching.usage_seq == i] ax.scatter(crt_sig[1:], crt_sig[:-1], alpha=0.05, label="actual") xl = ax.get_xlim() ax.plot( xl, random_switching.arma_hsmm.models[i].a[0] * np.asarray(xl), "k--", label="expected", ) leg_h = ax.legend(frameon=False) for crt_lh in leg_h.legendHandles: crt_lh.set_alpha(1) ax.set_xlabel("$y_t$") ax.set_ylabel("$y_{t-1}$") ax.set_title(f"State {i}") ```
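One more quick check that does not require plotting: the empirical dwell times in `usage_seq` can be compared against the `min_dwell` and `dwell_times` settings used when building the `ArmaHSMM`. A small run-length sketch, assuming `random_switching` from the cells above:

```python
import numpy as np

def run_lengths(seq):
    # Return (state, length) for each maximal run of identical values in seq
    seq = np.asarray(seq)
    change_points = np.flatnonzero(np.diff(seq)) + 1
    starts = np.concatenate(([0], change_points))
    ends = np.concatenate((change_points, [len(seq)]))
    return seq[starts], ends - starts

states, lengths = run_lengths(random_switching.usage_seq)
for s in np.unique(states):
    d = lengths[states == s]
    print(f"state {s}: runs={len(d)}, min dwell={d.min()}, mean dwell={d.mean():.1f}")
```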
github_jupyter
``` from planaritychecker import PlanarityChecker from numpy.random import random, randint import networkx as nx from planarity.planarity_networkx import planarity %matplotlib inline ``` # Check $K_5$ and $K_{3,3}$ without one edge ``` almost_K5 = PlanarityChecker(5) graph_almost_K5 = nx.Graph() graph_almost_K5.add_nodes_from(range(5)) for i in range(5): for j in range(i + 1, 5): if (i != 0 or j != 1): almost_K5.add_edge(i, j) graph_almost_K5.add_edge(i, j) nx.draw(graph_almost_K5) print("almost K5. number of edges: %d, is planar: %d" % (almost_K5.edges_count, almost_K5.is_planar())) almost_K33 = PlanarityChecker(6) graph_almost_K33 = nx.Graph() graph_almost_K33.add_nodes_from(range(6)) for i in range(3): for j in range(3, 6): if i != 1 or j != 4: almost_K33.add_edge(i, j) graph_almost_K33.add_edge(i, j) nx.draw(graph_almost_K33) print("Almost K3,3. number of edges: %d, is planar: %d" % (almost_K33.edges_count, almost_K33.is_planar())) ``` # Check $K_5$ and $K_{3,3}$ ``` K5 = almost_K5 K5.add_edge(0, 1) graph_K5 = graph_almost_K5 graph_K5.add_edge(0, 1) nx.draw(graph_K5) print("K5. number of edges: %d, is planar: %d" % (K5.edges_count, K5.is_planar())) K33 = almost_K33 K33.add_edge(1, 4) graph_K33 = graph_almost_K33 graph_K33.add_edge(1, 4) nx.draw(graph_K33) print("K33. number of edges: %d, is planar: %d" % (K33.edges_count, K33.is_planar())) ``` # Stress test # Generate a lot of graphs with probability of every edge=$p$ and check planarity with PlanarityChecker and planarity library (https://pypi.org/project/planarity/) ``` def generate_graphs(n, p): """Generate Graph and nx.Graph with n vertexes, where p is a probability of edge existance""" G = PlanarityChecker(n) nx_G = nx.Graph() nx_G.add_nodes_from(range(n)) for i in range(n): for j in range(i + 1, n): if random() < p: G.add_edge(i, j) nx_G.add_edge(i, j) return (G, nx_G) n_planar, n_notplanar = 0, 0 for i in range(1000): G, nxG = generate_graphs(100, 0.02) if G.is_planar() != planarity.is_planar(nxG): print("Custom: %d, Library: %d" % (G.is_planar(), planarity.is_planar(nxG))) nx.draw(nxG) break else: if (G.is_planar()): n_planar += 1 else: n_notplanar += 1 print(n_planar, n_notplanar) ``` # It works correctly. Check execution time ``` n = 20000 m = 40000 G = PlanarityChecker(n) edges = set() for i in range(m): a = randint(0, n) b = randint(0, n) while (a, b) in edges or a == b: a = randint(0, n) b = randint(0, n) edges.add((a, b)) for e in edges: G.add_edge(e[0], e[1]) import sys sys.setrecursionlimit(20000) %%time G.is_planar() nx_G = nx.Graph() nx_G.add_edges_from(edges) %%time planarity.is_planar(nx_G) ``` # Not bad for python. (planarity library has implementation on C)
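One cheap pre-check worth noting: a corollary of Euler's formula says that a simple planar graph on $n \ge 3$ vertices has at most $3n - 6$ edges, so very dense graphs can be rejected before running the full algorithm. The `quick_reject` helper below is only an illustrative sketch and is not part of `PlanarityChecker`.
```
def quick_reject(n_vertices, n_edges):
    """Return True if the edge count alone already rules out planarity."""
    if n_vertices < 3:
        return False  # graphs with fewer than 3 vertices are always planar
    return n_edges > 3 * n_vertices - 6

# the 20000-vertex / 40000-edge graph above stays below the 3n - 6 = 59994 bound,
# so the full planarity test is still needed for it
print(quick_reject(20000, 40000))
```
For random graphs as sparse as the ones in the stress test the pre-check rarely fires, but it costs nothing and avoids the expensive routine on obviously non-planar inputs.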
github_jupyter
``` from __future__ import print_function import os from netCDF4 import Dataset import requests from lxml import etree import matplotlib.pyplot as plt from owslib.wps import WebProcessingService, ComplexDataInput verify_ssl = True if 'DISABLE_VERIFY_SSL' not in os.environ else False def parseStatus(execute): o = requests.get(execute.statusLocation, verify=verify_ssl) t = etree.fromstring(o.content) ref = t.getchildren()[-1].getchildren()[-1].getchildren()[-1].get('{http://www.w3.org/1999/xlink}href') return ref # catalogue WPS url wpsURL = 'https://pavics.ouranos.ca/twitcher/ows/proxy/catalog/pywps' # Connection wpsCatalogue = WebProcessingService(url=wpsURL, verify=verify_ssl) for process in wpsCatalogue.processes: print ('%s \t : %s \n' %(process.identifier, process.abstract)) wpsURL = 'https://pavics.ouranos.ca/twitcher/ows/proxy/flyingpigeon/wps' wpsFP = WebProcessingService(wpsURL, verify=verify_ssl) print(wpsFP.identification.title) for process in wpsFP.processes: print ('%s \t : %s \n' %(process.identifier, process.abstract)) proc_name = 'pavicsearch' constraintString = 'variable:tasmax' maxfiles = '1000000' myinputs = [('constraints', constraintString),('type','File'), ('limit',maxfiles)] execution = wpsCatalogue.execute(identifier=proc_name, inputs=myinputs) print(execution.status) print(execution.processOutputs[-1].reference) proc_name = 'pavicsearch' process = wpsCatalogue.describeprocess(proc_name) # get process info for i in process.dataInputs: print('inputs :', i.identifier, ' : ', i.abstract) for i in process.processOutputs: print('outputs :', i.identifier, ' : ', i.abstract) proc_name = 'subset_bbox' process = wpsFP.describeprocess(identifier=proc_name) print(process.title,' : ',process.abstract,'\n') for i in process.dataInputs: print('inputs :', i.identifier, ' : ', i.abstract) for i in process.processOutputs: print('outputs :', i.identifier, ' : ', i.abstract) # NBVAL_IGNORE_OUTPUT # ignore output of this cell because different PAVICS host will have different quantity of netCDF files ref = parseStatus(execution) r = requests.get(ref, verify=verify_ssl) list_nc = r.json() print('Numer of files found :',len(list_nc), '\n') print("\n".join(list_nc[1:15]),'\n...') nrcan_nc = [i for i in list_nc if 'nrcan' in i and ('1991' in i or '1992' in i or '1993' in i)] # sort the filtered list nrcan_nc.sort() print('Number of files :', "%s\n" % len(nrcan_nc), "\n".join(nrcan_nc)) nc_test = Dataset(nrcan_nc[0]) print(nc_test) myinputs = [] # To keep things reasonably quick : subset jan-april for i in nrcan_nc: myinputs.append(('resource', i)) myinputs.append(('lon0', '-80.0')) myinputs.append(('lon1', '-70.0')) myinputs.append(('lat0', '44.0')) myinputs.append(('lat1', '50')) print(myinputs) execution = wpsFP.execute(identifier=proc_name, inputs=myinputs) print(execution.status) print(execution.processOutputs[-1].reference) print(execution.statusLocation) ```
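A natural follow-up is to retrieve the subsetted file and verify the bounding box. The sketch below reuses the `parseStatus` helper defined at the top of the notebook and assumes the asynchronous job has completed, that the status document references a single netCDF output, and that the NRCAN files expose `lat`/`lon` coordinate variables (these names are an assumption).
```
# retrieve the reference to the subsetted file from the status document
ref = parseStatus(execution)
print(ref)

# download it and open it locally to check the bounding box
r = requests.get(ref, verify=verify_ssl)
with open('subset_bbox_output.nc', 'wb') as f:
    f.write(r.content)

nc_sub = Dataset('subset_bbox_output.nc')
print(nc_sub.variables.keys())
# 'lat' / 'lon' variable names are assumed from the NRCAN gridded files
print(nc_sub.variables['lat'][:].min(), nc_sub.variables['lat'][:].max())
print(nc_sub.variables['lon'][:].min(), nc_sub.variables['lon'][:].max())
```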
github_jupyter
This notebook is part of the $\omega radlib$ documentation: https://docs.wradlib.org. Copyright (c) $\omega radlib$ developers. Distributed under the MIT License. See LICENSE.txt for more info. # Supported radar data formats The binary encoding of many radar products is a major obstacle for many potential radar users. Often, decoder software is not easily available. In case formats are documented, the implementation of decoders is a major programming effort. This tutorial provides an overview of the data formats currently supported by $\omega radlib$. We seek to continuously enhance the range of supported formats, so this document is only a snapshot. If you need a specific file format to be supported by $\omega radlib$, please [raise an issue](https://github.com/wradlib/wradlib/issues/new) of type *enhancement*. You can provide support by adding documents which help to decode the format, e.g. format reference documents or software code in other languages for decoding the format. At the moment, *supported format* means that the radar format can be read and further processed by wradlib. Normally, wradlib will return an array of data values and a dictionary of metadata - if the file contains any. wradlib does not support encoding to any specific file formats, yet! This might change in the future, but it is not a priority. However, you can use Python's netCDF4 or h5py packages to encode the results of your analysis to standard self-describing file formats such as netCDF or hdf5. In the following, we will provide an overview of file formats which can be currently read by $\omega radlib$. Reading weather radar files is done via the [wradlib.io](https://docs.wradlib.org/en/latest/io.html) module. There you will find a complete function reference. ``` import wradlib as wrl import warnings warnings.filterwarnings('ignore') import matplotlib.pyplot as pl import numpy as np try: get_ipython().magic("matplotlib inline") except: pl.ion() ``` ## German Weather Service: DX format The German Weather Service uses the DX file format to encode local radar sweeps. DX data are in polar coordinates. The naming convention is as follows: <pre>raa00-dx_&lt;location-id&gt;-&lt;YYMMDDHHMM&gt;-&lt;location-abreviation&gt;---bin</pre> or <pre>raa00-dx_&lt;location-id&gt;-&lt;YYYYMMDDHHMM&gt;-&lt;location-abreviation&gt;---bin</pre> [Read and plot DX radar data from DWD](wradlib_reading_dx.ipynb) provides an extensive introduction into working with DX data. For now, we would just like to know how to read the data: ``` fpath = 'dx/raa00-dx_10908-0806021655-fbg---bin.gz' f = wrl.util.get_wradlib_data_file(fpath) data, metadata = wrl.io.read_dx(f) ``` Here, ``data`` is a two dimensional array of shape (number of azimuth angles, number of range gates). This means that the number of rows of the array corresponds to the number of azimuth angles of the radar sweep while the number of columns corresponds to the number of range gates per ray. ``` print(data.shape) print(metadata.keys()) fig = pl.figure(figsize=(10, 10)) ax, im = wrl.vis.plot_ppi(data, fig=fig, proj='cg') ``` ## German Weather Service: RADOLAN (quantitative) composit The quantitative composite format of the DWD (German Weather Service) was established in the course of the [RADOLAN project](https://www.dwd.de/DE/leistungen/radolan/radolan.html). Most quantitative composite products from the DWD are distributed in this format, e.g. the R-series (RX, RY, RH, RW, ...), the S-series (SQ, SH, SF, ...), and the E-series (European quantitative composite, e.g. 
EZ, EH, EB). Please see the [composite format description](https://www.dwd.de/DE/leistungen/radolan/radolan_info/radolan_radvor_op_komposit_format_pdf.pdf?__blob=publicationFile&v=5) for a full reference and a full table of products (unfortunately only in German language). An extensive section covering many RADOLAN aspects is here: [RADOLAN](../radolan.ipynb) Currently, the RADOLAN composites have a spatial resolution of 1km x 1km, with the national composits (R- and S-series) being 900 x 900 grids, and the European composits 1500 x 1400 grids. The projection is [polar-stereographic](../radolan/radolan_grid.ipynb#Polar-Stereographic-Projection). The products can be read by the following function: ``` fpath = 'radolan/misc/raa01-rw_10000-1408102050-dwd---bin.gz' f = wrl.util.get_wradlib_data_file(fpath) data, metadata = wrl.io.read_radolan_composite(f) ``` Here, ``data`` is a two dimensional integer array of shape (number of rows, number of columns). Different product types might need different levels of postprocessing, e.g. if the product contains rain rates or accumulations, you will normally have to divide data by factor 10. ``metadata`` is again a dictionary which provides metadata from the files header section, e.g. using the keys *producttype*, *datetime*, *intervalseconds*, *nodataflag*. ``` print(data.shape) print(metadata.keys()) ``` Masking the NoData (or missing) values can be done by: ``` maskeddata = np.ma.masked_equal(data, metadata["nodataflag"]) fig = pl.figure(figsize=(10, 8)) # get coordinates radolan_grid_xy = wrl.georef.get_radolan_grid(900, 900) x = radolan_grid_xy[:, :, 0] y = radolan_grid_xy[:, :, 1] # create quick plot with colorbar and title pl.figure(figsize=(10, 8)) pl.pcolormesh(x, y, maskeddata) ``` ## HDF5 ### OPERA HDF5 (ODIM_H5) [HDF5](https://www.hdfgroup.org/HDF5/) is a data model, library, and file format for storing and managing data. The [OPERA 3 program](http://www.eumetnet.eu/opera) developed a convention (or information model) on how to store and exchange radar data in hdf5 format. It is based on the work of [COST Action 717](https://e-services.cost.eu/files/domain_files/METEO/Action_717/final_report/final_report-717.pdf) and is used e.g. in real-time operations in the Nordic European countries. The OPERA Data and Information Model (ODIM) is documented e.g. in this [report](https://www.eol.ucar.edu/system/files/OPERA_2008_03_WP2.1b_ODIM_H5_v2.1.pdf). Make use of these documents in order to understand the organization of OPERA hdf5 files! <div class="alert alert-warning"> **Note** <br> Since $\omega radlib$ version 1.3 an [OdimH5](https://docs.wradlib.org/en/stable/generated/wradlib.io.xarray.OdimH5.html) reader based on [Xarray](http://xarray.pydata.org/en/stable/), [netcdf4](https://unidata.github.io/netcdf4-python/) and [h5py](https://www.h5py.org/) is available. Please read the more indepth notebook [wradlib_xarray_radial_odim](wradlib_xarray_radial_odim.ipynb). A second implementation based on [netcdf4](https://unidata.github.io/netcdf4-python/), [h5py](https://www.h5py.org/), [h5netcdf](https://github.com/shoyer/h5netcdf) and [Xarray](http://xarray.pydata.org/en/stable/) claiming multiple data files and presenting them in a simple structure is available from $\omega radlib$ version 1.6. See the notebook [wradlib_odim_multi_file_dataset](wradlib_odim_multi_file_dataset.ipynb). </div> The hierarchical nature of HDF5 can be described as being similar to directories, files, and links on a hard-drive. 
Actual metadata are stored as so-called *attributes*, and these attributes are organized together in so-called *groups*. Binary data are stored as so-called *datasets*. As for ODIM_H5, the ``root`` (or top level) group contains three groups of metadata: these are called ``what`` (object, information model version, and date/time information), ``where`` (geographical information), and ``how`` (quality and optional/recommended metadata). For a very simple product, e.g. a CAPPI, the data is organized in a group called ``dataset1`` which contains another group called ``data1`` where the actual binary data are found in ``data``. In analogy with a file system on a hard-disk, the HDF5 file containing this simple product is organized like this: ``` / /what /where /how /dataset1 /dataset1/data1 /dataset1/data1/data ``` The philosophy behind the $\omega radlib$ interface to OPERA's data model is very straightforward: $\omega radlib$ simply translates the complete file structure to *one* dictionary and returns this dictionary to the user. Thus, the potential complexity of the stored data is kept and it is left to the user how to proceed with this data. The keys of the output dictionary are strings that correspond to the "directory trees" shown above. Each key ending with ``/data`` points to a Dataset (i.e. a numpy array of data). Each key ending with ``/what``, ``/where`` or ``/how`` points to another dictionary of metadata. The entire output can be obtained by: ``` fpath = 'hdf5/knmi_polar_volume.h5' f = wrl.util.get_wradlib_data_file(fpath) fcontent = wrl.io.read_opera_hdf5(f) ``` The user should inspect the output obtained from his or her hdf5 file in order to see how access those items which should be further processed. In order to get a readable overview of the output dictionary, one can use the pretty printing module: ``` # which keyswords can be used to access the content? print(fcontent.keys()) # print the entire content including values of data and metadata # (numpy arrays will not be entirely printed) print(fcontent['dataset1/data1/data']) ``` Please note that in order to experiment with such datasets, you can download hdf5 sample data from the [OPERA](http://eumetnet.eu/activities/observations-programme/current-activities/opera/) or use the example data provided with the [wradlib-data](https://github.com/wradlib/wradlib-data/) repository. ``` fig = pl.figure(figsize=(10, 10)) im = wrl.vis.plot_ppi(fcontent['dataset1/data1/data'], fig=fig, proj='cg') ``` ### GAMIC HDF5 GAMIC refers to the commercial [GAMIC Enigma MURAN software](https://www.gamic.com) which exports data in hdf5 format. The concept is quite similar to the above [OPERA HDF5 (ODIM_H5)](#OPERA-HDF5-(ODIM_H5)) format. Such a file (typical ending: *.mvol*) can be read by: ``` fpath = 'hdf5/2014-08-10--182000.ppi.mvol' f = wrl.util.get_wradlib_data_file(fpath) data, metadata = wrl.io.read_gamic_hdf5(f) ``` While metadata represents the usual dictionary of metadata, the data variable is a dictionary which might contain several numpy arrays with the keywords of the dictionary indicating different moments. ``` print(metadata.keys()) print(metadata['VOL']) print(metadata['SCAN0'].keys()) print(data['SCAN0'].keys()) print(data['SCAN0']['PHIDP'].keys()) print(data['SCAN0']['PHIDP']['data'].shape) fig = pl.figure(figsize=(10, 10)) im = wrl.vis.plot_ppi(data['SCAN0']['ZH']['data'], fig=fig, proj='cg') ``` ### Generic HDF5 This is a generic hdf5 reader, which will read any hdf5 structure. 
``` fpath = 'hdf5/2014-08-10--182000.ppi.mvol' f = wrl.util.get_wradlib_data_file(fpath) fcontent = wrl.io.read_generic_hdf5(f) print(fcontent.keys()) print(fcontent['where']) print(fcontent['how']) print(fcontent['scan0/moment_3'].keys()) print(fcontent['scan0/moment_3']['attrs']) print(fcontent['scan0/moment_3']['data'].shape) fig = pl.figure(figsize=(10, 10)) im = wrl.vis.plot_ppi(fcontent['scan0/moment_3']['data'], fig=fig, proj='cg') ``` ## NetCDF The NetCDF format also claims to be self-describing. However, as for all such formats, the developers of netCDF also admit that "[...] the mere use of netCDF is not sufficient to make data self-describing and meaningful to both humans and machines [...]" (see [here](https://www.unidata.ucar.edu/software/netcdf/documentation/historic/netcdf/Conventions.html). Different radar operators or data distributors will use different naming conventions and data hierarchies (i.e. "data models") that the reading program might need to know about. $\omega radlib$ provides two solutions to address this challenge. The first one ignores the concept of data models and just pulls all data and metadata from a NetCDF file ([wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html). The second is designed for a specific data model used by the EDGE software ([wradlib.io.read_edge_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_edge_netcdf.html)). <div class="alert alert-warning"> **Note** <br> Since $\omega radlib$ version 1.3 an [Cf/Radial](https://docs.wradlib.org/en/stable/generated/wradlib.io.xarray.CfRadial.html) reader for CF versions 1.X and 2 based on [Xarray](http://xarray.pydata.org/en/stable/) and [netcdf4](https://unidata.github.io/netcdf4-python/) is available. Please read the more indepth notebook [wradlib_xarray_radial_odim](wradlib_xarray_radial_odim.ipynb). </div> ### Generic NetCDF reader (includes CfRadial) $\omega radlib$ provides a function that will virtually read any NetCDF file irrespective of the data model: [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html). It is built upon Python's [netcdf4](https://unidata.github.io/netcdf4-python/) library. [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html) will return only one object, a dictionary, that contains all the contents of the NetCDF file corresponding to the original file structure. This includes all the metadata, as well as the so called "dimensions" (describing the dimensions of the actual data arrays) and the "variables" which will contains the actual data. Users can use this dictionary at will in order to query data and metadata; however, they should make sure to consider the documentation of the corresponding data model. [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html) has been shown to work with a lot of different data models, most notably **CfRadial** (see [here](https://www.ral.ucar.edu/projects/titan/docs/radial_formats/cfradial.html) for details). 
A typical call to [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html) would look like: ``` fpath = 'netcdf/example_cfradial_ppi.nc' f = wrl.util.get_wradlib_data_file(fpath) outdict = wrl.io.read_generic_netcdf(f) for key in outdict.keys(): print(key) ``` Please see [this example notebook](wradlib_generic_netcdf_example.ipynb) to get started. ### EDGE NetCDF EDGE is a commercial software for radar control and data analysis provided by the Enterprise Electronics Corporation. It allows for netCDF data export. The resulting files can be read by [wradlib.io.read_generic_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_generic_netcdf.html), but $\omega radlib$ also provides a specific function, [wradlib.io.read_edge_netcdf()](https://docs.wradlib.org/en/latest/generated/wradlib.io.netcdf.read_edge_netcdf.html) to return metadata and data as seperate objects: ``` fpath = 'netcdf/edge_netcdf.nc' f = wrl.util.get_wradlib_data_file(fpath) data, metadata = wrl.io.read_edge_netcdf(f) print(data.shape) print(metadata.keys()) ``` ## Gematronik Rainbow Rainbow refers to the commercial [RAINBOW®5 APPLICATION SOFTWARE](http://www.de.selex-es.com/capabilities/meteorology/products/components/rainbow5) which exports data in an XML flavour, which due to binary data blobs violates XML standard. Gematronik provided python code for implementing this reader in $\omega radlib$, which is very much appreciated. The philosophy behind the $\omega radlib$ interface to Gematroniks data model is very straightforward: $\omega radlib$ simply translates the complete xml file structure to *one* dictionary and returns this dictionary to the user. Thus, the potential complexity of the stored data is kept and it is left to the user how to proceed with this data. The keys of the output dictionary are strings that correspond to the "xml nodes" and "xml attributes". Each ``data`` key points to a Dataset (i.e. a numpy array of data). Such a file (typical ending: *.vol* or *.azi*) can be read by: ``` fpath = 'rainbow/2013070308340000dBuZ.azi' f = wrl.util.get_wradlib_data_file(fpath) fcontent = wrl.io.read_rainbow(f) ``` The user should inspect the output obtained from his or her Rainbow file in order to see how access those items which should be further processed. In order to get a readable overview of the output dictionary, one can use the pretty printing module: ``` # which keyswords can be used to access the content? print(fcontent.keys()) # print the entire content including values of data and metadata # (numpy arrays will not be entirely printed) print(fcontent['volume']['sensorinfo']) ``` You can check this [example notebook](wradlib_load_rainbow_example.ipynb) for getting a first impression. ## Vaisala Sigmet IRIS [IRIS](https://www.vaisala.com/en/products/instruments-sensors-and-other-measurement-devices/weather-radar-products/iris-focus) refers to the commercial Vaisala Sigmet **I**nteractive **R**adar **I**nformation **S**ystem. The Vaisala Sigmet Digital Receivers export data in a [well documented](ftp://ftp.sigmet.com/outgoing/manuals/IRIS_Programmers_Manual.pdf) binary format. The philosophy behind the $\omega radlib$ interface to the IRIS data model is very straightforward: $\omega radlib$ simply translates the complete binary file structure to *one* dictionary and returns this dictionary to the user. Thus, the potential complexity of the stored data is kept and it is left to the user how to proceed with this data. 
The keys of the output dictionary are strings that correspond to the Sigmet Data Structures. Each ``data`` key points to a Dataset (i.e. a numpy array of data). Such a file (typical ending: *.RAWXXXX) can be read by: ``` fpath = 'sigmet/cor-main131125105503.RAW2049' f = wrl.util.get_wradlib_data_file(fpath) fcontent = wrl.io.read_iris(f) # which keywords can be used to access the content? print(fcontent.keys()) # print the entire content including values of data and # metadata of the first sweep # (numpy arrays will not be entirely printed) print(fcontent['data'][1].keys()) print() print(fcontent['data'][1]['ingest_data_hdrs'].keys()) print(fcontent['data'][1]['ingest_data_hdrs']['DB_DBZ']) print() print(fcontent['data'][1]['sweep_data'].keys()) print(fcontent['data'][1]['sweep_data']['DB_DBZ']) fig = pl.figure(figsize=(10, 10)) swp = fcontent['data'][1]['sweep_data'] ax, im = wrl.vis.plot_ppi(swp["DB_DBZ"]['data'], fig=fig, proj='cg') ``` ## OPERA BUFR **WARNING** $\omega radlib$ does currently not support the BUFR format! The Binary Universal Form for the Representation of meteorological data (BUFR) is a binary data format maintained by the World Meteorological Organization (WMO). The BUFR format was adopted by [OPERA](http://eumetnet.eu/activities/observations-programme/current-activities/opera/) for the representation of weather radar data. A BUFR file consists of a set of *descriptors* which contain all the relevant metadata and a data section. The *descriptors* are identified as a tuple of three integers. The meaning of these tupels is described in the so-called BUFR tables. There are generic BUFR tables provided by the WMO, but it is also possible to define so called *local tables* - which was done by the OPERA consortium for the purpose of radar data representation. If you want to use BUFR files together with $\omega radlib$, we recommend that you check out the [OPERA webpage](http://eumetnet.eu/activities/observations-programme/current-activities/opera/) where you will find software for BUFR decoding. In particular, you might want to check out [this tool](http://eumetnet.eu/wp-content/uploads/2017/04/bufr_opera_mf.zip) which seems to support the conversion of OPERA BUFR files to ODIM_H5 (which is supported by $\omega radlib$). However, you have to build it yourself. It would be great if someone could add a tutorial on how to use OPERA BUFR software together with $\omega radlib$!
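Finally, as noted at the start of this overview, $\omega radlib$ reads all of these formats but does not write them; generic packages such as h5py or netCDF4 are enough to store your own processing results. The sketch below is purely illustrative: it mimics the ODIM-like group layout shown earlier but is not a compliant ODIM_H5 writer.
```
import h5py
import numpy as np

# placeholder for some processed sweep (360 azimuths x 128 range bins)
processed = np.zeros((360, 128), dtype=np.float32)

with h5py.File('processed_sweep.h5', 'w') as f:
    # intermediate groups are created automatically from the path
    f.create_dataset('dataset1/data1/data', data=processed, compression='gzip')
    # free-form attributes; a real ODIM_H5 file needs the metadata
    # mandated by the OPERA information model
    f['dataset1/data1'].attrs['quantity'] = 'DBZH'
    f['dataset1'].attrs['product'] = 'illustrative example'
```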
github_jupyter
# Research ## Imports ``` import pandas as pd import pandas_datareader as dr from pandas_datareader import data as web import matplotlib.pyplot as plt from matplotlib import style import numpy as np import datetime import mplfinance as mpl import plotly.graph_objects as go import plotly import yfinance as yf ``` ## Data Import ``` df = pd.read_csv('data/data2.csv', index_col='Symbol') ``` ## Sorting Data ``` df isInfoTech = df['Sector']== 'Information Technology' print(isInfoTech.head()) df_InfoTech = df[isInfoTech] df_InfoTech ``` ## IBM INTEL NVIDIA ``` #looking at IBM,INTEL,NVIDIA, start = datetime.datetime(2017,1,1) end = datetime.datetime(2021,6,22) ibm = yf.download("IBM",start, end) intel = yf.download("INTC",start, end) nvidia = yf.download("NVDA",start, end) trch = yf.download("TRCH",start, end) ibm.to_csv('IBM_STOCK.csv') #ibm stock intel.to_csv('INTC_STOCK.csv') nvidia.to_csv('NVDA_STOCK.csv') trch.to_csv('TRCH_STOCK.csv') ibm.head() trch.tail() intel.head() nvidia.head() ibm['Open'].plot(label='IBM',figsize=(15,7)) intel['Open'].plot(label='Intel') nvidia['Open'].plot(label='Nvidia') plt.legend() plt.ylabel('Stock Price') plt.title('Stock Prices of IBM,Intel and Nvidia') ``` ## Volumes ``` ibm['Volume'].plot(label='IBM',figsize=(15,7)) intel['Volume'].plot(label='Intel') nvidia['Volume'].plot(label='Nvidia') plt.ylabel('Volume Traded') plt.title('Volumes of IBM, Intel and Nvidia') plt.legend() ``` ## Total Traded / ~Market Cap ``` ibm['Total Traded'] = ibm['Open'] * ibm['Volume'] intel['Total Traded'] = intel['Open'] * intel['Volume'] nvidia['Total Traded'] = nvidia['Open'] * nvidia['Volume'] ibm['Total Traded'].plot(label=('IBM'),figsize=(15,7)) intel['Total Traded'].plot(label=('Intel')) nvidia['Total Traded'].plot(label=('Nvidia')) plt.ylabel('Total Traded') plt.legend() plt.title('Total Traded for IBM, Intel, and Nvidia') ``` ## 50 and 200 Day Rolling EMA ``` intel['Open'].plot(figsize=(15,7)) intel['MA50']=intel['Open'].rolling(50).mean() intel['MA50'].plot(label='MA50') intel['MA200']=intel['Open'].rolling(200).mean() intel['MA200'].plot(label='MA200') plt.legend() plt.title('Intel Open, 50EMA, 200EMA') ibm['Open'].plot(figsize=(15,7)) ibm['MA50']=ibm['Open'].rolling(50).mean() ibm['MA50'].plot(label='MA50') ibm['MA200']=ibm['Open'].rolling(200).mean() ibm['MA200'].plot(label='MA200') plt.legend() plt.title('IBM Open, 50EMA, 200EMA') nvidia['Open'].plot(figsize=(12,7)) nvidia['MA50']=nvidia['Open'].rolling(50).mean() nvidia['MA50'].plot(label='MA50') nvidia['MA200']=nvidia['Open'].rolling(200).mean() nvidia['MA200'].plot(label='MA200') plt.legend() plt.title('Nvidia Open, 50EMA, 200EMA') trch['Open'].plot(figsize=(10,7)) trch['MA50']=trch['Open'].rolling(50).mean() trch['MA50'].plot(label='MA50') trch['MA200']=trch['Open'].rolling(200).mean() trch['MA200'].plot(label='MA200') plt.legend() plt.title('Torchlight Open, 50EMA, 200EMA') ``` ## Time Series Analysis AutoCorrelation ``` def autocorr_daily(intel): returns = intel.pct_change() autocorrelation = returns['Adj Close'].autocorr() return autocorrelation autocorr_daily(intel) autocorr_daily(ibm) autocorr_daily(nvidia) autocorr_daily(trch) ``` ## Scatter Matrix Based off Open Price ``` from pandas.plotting import scatter_matrix tech_comp = pd.concat([ibm['Open'],intel['Open'],nvidia['Open']],axis =1) tech_comp.columns = ['IBM Open','Intel Open','Nvidia Open'] scatter_matrix(tech_comp,figsize=(8,8),hist_kwds={'bins':50}) ``` CandleStick Analysis ## CandleStick Analysis ``` candleIntel = intel.iloc[100:160] 
mpl.plot(candleIntel,type='candle',volume=True) candleIBM = ibm.iloc[100:160] mpl.plot(candleIBM,type='candle',volume=True) candleNvidia = nvidia.iloc[100:160] mpl.plot(candleNvidia,type='candle',volume=True) ``` ## Monte Carlo Stock Price Predictor ``` monte_end = datetime.datetime.now() monte_start = monte_end - datetime.timedelta(days=300) prices = yf.download("NVDA",monte_start,monte_end)['Close'] returns = prices.pct_change() meanReturns = returns.mean() last_price = prices[-1] num_sims = 100 num_days = 300 sim_df = pd.DataFrame() for x in range(num_sims): count = 0 daily_volatility = returns.std() price_series = [] price = last_price * (1 + np.random.normal(0,daily_volatility)) price_series.append(price) for y in range(num_days): if count == 299: break price = price_series[count] * (1 + np.random.normal(0,daily_volatility)) price_series.append(price) count += 1 sim_df[x] = pd.Series(price_series) fig = plt.figure() fig.suptitle('Monte Carlo Sim NVDA') plt.plot(sim_df) plt.axhline(y = last_price, color = 'lime',linestyle = '-') plt.xlabel('Days') plt.ylabel('Price') plt.show() import plotly_express as px fig2 = px.line(sim_df) fig2.show() pd.set_option('display.max_columns',100) sim_df sim_df.drop(index=1) price_series ```
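The simulation loop above builds each path one day at a time. An equivalent vectorised sketch (assuming `last_price`, `daily_volatility`, `num_sims` and `num_days` are still defined from the cells above) generates all paths at once with a cumulative product of random daily returns:
```
import numpy as np
import pandas as pd

# matrix of gross daily returns: one row per day, one column per simulated path
gross_returns = 1 + np.random.normal(0, daily_volatility, size=(num_days, num_sims))

# each path is the last observed price times the running product of its returns
paths = last_price * np.cumprod(gross_returns, axis=0)
sim_df_vec = pd.DataFrame(paths)

sim_df_vec.plot(legend=False, title='Vectorised Monte Carlo Sim NVDA', figsize=(10, 6))
```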
github_jupyter
<h1>SUBSET SELECTION</h1>
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy
import sklearn
import seaborn as sns
import xlrd
import time
import statsmodels.api as sm

data=pd.read_excel('Data/Mini Project EFSA.xlsx')
data.rename(columns={'sex \n(0=M, 1=F)':'sex'}, inplace=True)
data

from funzioni import forward
```
<h2>The data are the original columns</h2>
```
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Prepare the data
y = data.response
weights = data.SD
X = data.drop(columns=["response","SD"])

# Extract the endpoint from the matrix so that we get 2 categorical variables encoding the 3 endpoints
endpoint1 = X['endpoint'] == 1
endpoint2 = X['endpoint'] == 2
X["endpoint1"] = endpoint1.astype("int")
X["endpoint2"] = endpoint2.astype("int")
X = X.drop(columns=["endpoint"])
#X["ones"] = np.ones((X.shape[0],1))

poly = PolynomialFeatures(2)
X_poly = poly.fit_transform(X)
cols = poly.get_feature_names(X.columns)
X = pd.DataFrame(X_poly, columns=cols)
X
```
<h1>2 - Use subset selection to estimate separate models for the 3 endpoints using gender as a categorical variable</h1>
<h1>3 - Use subset selection to estimate a single model using gender and endpoint as categorical variables</h1>
<h2>Forward selection with linear predictors only</h2>
```
models_fwd = pd.DataFrame(columns=["RSS", "model","number_of_predictors"])

tic = time.time()
predictors = []
for i in range(1,len(X.columns)+1):
    models_fwd.loc[i] = forward(y,X,predictors,weights)
    predictors = models_fwd.loc[i]["model"].model.exog_names
toc = time.time()
print("Total elapsed time:", (toc-tic), "seconds.")

display(models_fwd)
models_fwd.plot(x='number_of_predictors', y='RSS')

for i in range(0,models_fwd.shape[0]):
    print(models_fwd.iloc[i]["model"].model.exog_names)
    print()

res = models_fwd.iloc[5]["model"].model.fit()
print(res.summary())
```
<h2>Comparing these models with objective criteria</h2>
```
for i in range(1, models_fwd.shape[0]):
    model = models_fwd.loc[i,"model"]
    models_fwd.loc[i,"aic"] = model.aic
    models_fwd.loc[i,"bic"] = model.bic
    models_fwd.loc[i,"mse"] = model.mse_total
    models_fwd.loc[i,"adj_rsquare"] = model.rsquared_adj
models_fwd

# Criteria to minimize
for criteria in ["bic","aic"]:
    print("The criteria is: " + criteria)
    row = models_fwd.loc[models_fwd[criteria].argmin()]
    modelFeatures = row["model"].model.exog_names
    if "intercept" not in modelFeatures:
        modelFeatures.append("intercept")
    criteriaValue = row[criteria]
    degreesOfFreedom = row["model"].model.df_model
    print("Features: "+str(modelFeatures))
    print("Criteria value: "+str(criteriaValue))
    print("Degrees of freedom: "+str(degreesOfFreedom+1))
    print()

# Criteria to maximize
for criteria in ["adj_rsquare"]:
    print("The criteria is: " + criteria)
    row = models_fwd.loc[models_fwd[criteria].argmax()]
    modelFeatures = row["model"].model.exog_names
    if "intercept" not in modelFeatures:
        modelFeatures.append("intercept")
    criteriaValue = row[criteria]
    degreesOfFreedom = row["model"].model.df_model
    print("Features: "+str(modelFeatures))
    print("Criteria value: "+str(criteriaValue))
    print("Degrees of freedom: "+str(degreesOfFreedom+1))
    print()

from funzioni import CarloCrecco
CarloCrecco()
```
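The `forward` helper is imported from the local `funzioni` module, which is not shown in this notebook. For reference, the sketch below shows what a single forward-selection step presumably does; the use of `statsmodels` WLS with the given weights and RSS as the selection criterion is an assumption, and the actual implementation may differ.
```
import pandas as pd
import statsmodels.api as sm

def forward_step(y, X, predictors, weights):
    """Sketch of one forward-selection step (assumed behaviour of funzioni.forward):
    try every column not yet selected, fit a weighted least-squares model,
    and keep the candidate that gives the lowest residual sum of squares."""
    results = []
    for col in (c for c in X.columns if c not in predictors):
        fitted = sm.WLS(y, X[predictors + [col]], weights=weights).fit()
        results.append({"RSS": fitted.ssr,
                        "model": fitted,
                        "number_of_predictors": len(predictors) + 1})
    best = min(results, key=lambda r: r["RSS"])
    return pd.Series(best)
```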
github_jupyter
<div align="right"><i>COM418 - Computers and Music</i></div> <div align="right"><a href="https://people.epfl.ch/paolo.prandoni">Paolo Prandoni</a>, <a href="https://www.epfl.ch/labs/lcav/">LCAV, EPFL</a></div> <p style="font-size: 30pt; font-weight: bold; color: #B51F1F;">Non-Harmonic Distortion in a Quantized Sinusoid <br> (Tsividis' Paradox)</p> ``` %matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.signal as sp import scipy.special as ss from scipy.io import wavfile import IPython import ipywidgets as widgets plt.rcParams["figure.figsize"] = (14,4) # helper functions def play_sound(SF, s, volume=1): # play a sound with a volume factor #x = np.copy(s) * volume return IPython.display.Audio(volume * s, rate=SF, normalize=False) def multiplay(SF, clips, title=None, volume=1): outs = [widgets.Output() for c in clips] for ix, clip in enumerate(clips): with outs[ix]: print(title[ix] if title is not None else "") display(IPython.display.Audio(volume * clip, rate=SF, normalize=False)) return widgets.HBox(outs) def stem(x, color='tab:blue'): # stem with chosen color markerline, stemlines, baseline = plt.stem(x, use_line_collection=True, basefmt='k'); markerline.set_color(color) stemlines.set_color(color) ``` # Quantization in A/D conversion ## The classic A/D converter * $x(t)$ bandlimited to $F_s/2$ * sample at $F_s$ Hz * uniform quantization with $M$ levels <center> <img src="img/sbq.png" style="width: 1200px;"/> </center> ## Uniform scalar quantization * $M$-level uniform scalar quantizer: $q: \mathbb{R} \rightarrow \{\hat{x}_0, \ldots, \hat{x}_{M-1}\}$ * non-overload region: $[-1,1]$ * quantization step: $\Delta = 2/M$ ``` def quantize(x, M): if M == 0: return x elif M % 2 == 0: # using a mid-riser quantizer M = M / 2 k = np.floor(x * M) k = np.maximum(np.minimum(k, M-1), -M) return (k + 0.5) / M else: # using a deadzone quantizer k = np.round(np.abs(x) * M / 2) k = np.minimum((M - 1) / 2, k) return (np.sign(x) * k / M * 2 ) x = np.arange(-1, 1, .01) for ix, M in enumerate([2, 3, 8]): plt.subplot(1, 3, ix+1) plt.plot(x,x); plt.plot(x, quantize(x, M), '.'); ``` ## High-resolution hypothesis <center> <img src="img/linearized.png" style="width: 1200px;"/> </center> * $e[n]$ white noise uncorrelated with $x[n]$ * $\sigma_e[n] = \Delta^2/12$ * $\mathrm{SNR} = 6M~\mathrm{dB}$ # Tsividis' paradox <center><img src="img/sbq.png" style="width: 800px;"/></center> * sampling and quantization are memoryless: they can be swapped * let's swap them: <center> <img src="img/sbq.png" style="width: 800px;"/> <img src="img/qbs.png" style="width: 800px;"/> </center> <center> but $\mathcal{Q}$ discontinuous so $\hat{x}(t)$ no longer bandlimited $~~\Longrightarrow~~$ aliasing! 
</center> # Harmonic vs non-harmonic distortion * $x(t)$ periodic with period $T = 1/f_0$ * instantaneous distortion function $r(\cdot)$ The signal $r(x(t))$ will incur: * **harmonic distortion** if the spectral content at integer multiples of $f_0$ is modified <br />(typical of "natural" saturation/clipping) * **non-harmonic distortion** if spectral content appear elsewhere <br />(typical of aliasing) In practice: * harmonic distortion: bearable, if we really have to * non-harmonic distortion: unbearable because totally unnatural ## Total Harmonic Distortion (THD) THD quantifies harmonic distortion for sinusoidal inputs: $x(t) = \sin(2\pi f_0 t)$ Express $r(x(t))$ via its Fourier **series** since periodicity is preserved: $\displaystyle r\left(x(t)\right) = \sum_{k=-\infty}^{\infty} c_k\, e^{-j2\pi f_0 k t}$ $$ \mathrm{THD} = \sqrt{\frac{\sum_{k > 1} |c_k|^2}{|c_1|^2}} $$ Example: * $r(x) = \mathrm{sgn}(x)$, from sinusoid to square wave (two-level quantization) * $\displaystyle \mathrm{sgn}\left(\sin(2\pi f_0 t)\right) = \frac{4}{\pi}\sum_{k = 1}^{\infty}\frac{1}{2k-1}\sin(2\pi(2k-1) f_0 t)$ $$ \mathrm{THD} = \sqrt{\sum_{k = 2}^{\infty}\left(\frac{1}{2k-1}\right)^2} = \sqrt{\frac{\pi^2}{8}-1} \approx 0.48. $$ **Exercise:** prove the result ## Non-harmonic distortion: aliasing Example as before, but in discrete time: * $F_s > 2f_0$ * $\omega_0 = f_0/Fs < \pi$ * $\displaystyle \mathrm{sgn}\left(\sin(\omega_0 n)\right) = \frac{4}{\pi}\sum_{k = 1}^{\infty}\frac{1}{2k-1}\sin((2k-1) \omega_0 n)$ * frequencies for $k > (1 + \pi/\omega_0) / 2$ will be aliased! ## Harmonic vs non-harmonic distortion: example progressively harder clipping vs progressively coarser quantization ``` sf, f0, M = 8000, 440, 9 # one second per clipping level w = 2 * np.pi * f0 / sf * np.arange(0, M * sf) x = np.sin(w) x_c, x_q = np.zeros(len(w)), np.zeros(len(w)) for n, level in enumerate(range(M, 1, -1)): s = slice(n * sf, (n + 1) * sf) # progessively harder clipping x_c[s] = np.clip(np.sin(w[s]), -level/M, level/M) * M / level # progressively coarser quantization x_q[s] = quantize(np.sin(w[s]), 2 ** level) multiplay(sf, (x_c, x_q), ('clipping', 'quantization'), volume=0.3) ``` ## Aside: non-harmonic distortion due to intermodulation When more than a single sinusoid is considered, things get complicated quickly * $r(x) = \sum_{n=0}^{\infty} a_n \, x^n$ (Taylor series expansion) * $\sin^n \alpha = \gamma_0 + \sum_{k=1}^{n} \gamma_k \sin k\alpha$ * $\sin \alpha \sin \beta = \mu_0 \sin(\alpha + \beta) + \mu_1 \sin(\alpha - \beta)$ $$ r\left(\sin(2\pi f_0 t) + \sin(2\pi f_1 t)\right) = \ldots = \sum_{k_0, k_1 = -\infty}^{\infty} b_{k_0, k_1} \sin(2\pi (k_0 f_0 + k_1 f_1) t) $$ ``` for n, level in enumerate(range(M, 1, -1)): s = slice(n * sf, (n + 1) * sf) x_c[s] = np.clip((np.sin(w[s]) + np.sin(1.5 * w[s])) / 2, -level/M, level/M) * M / level play_sound(sf, x_c, volume=0.3) ``` # Ravel's Bolero ## An impressive dynamic range <center> <img width="800" src="img/bolero_diff.jpg"> </center> ``` clips = {} for name in ['boleroA', 'boleroM', 'boleroZ']: sf, audio = wavfile.read('snd/' + name + '.wav') clips['sf'], clips[name] = sf, audio / 32767.0 multiplay(clips['sf'], [clips['boleroA'], clips['boleroZ']], ['beginning, full res', 'ending, full res']) ``` <center> <img width="1200" src="img/bolero_wav.png"> </center> * live performances have an dynamic range of 100dBs or more * 16-bit audio covers about 96dBs * ... 
but vinyl is no better: about 70dB dynamic range ## Aside: oreloB <img width="480" style="float: right;" src="img/orelob.jpg"> Bolero is much louder at the end but vinyls suffer from _end of side_ distortion: * rotational speed constant, but inner grooves shorter * reading speed gets slower * recorded wavelengths become shorter<br/> and comparable to stylus size * groove slope gets too steep for tracking Solution: oreloB, a vinyl that plays backwards ## Quantizing the Bolero <img width="600" style="float: right;" src="img/bolero_wav.png"> * clearly the beginning spans a much smaller<br />number of quantization levels than the end * the high-resolution hypothesis may not hold ``` levels=[2 ** 16, 2 ** 8] multiplay(clips['sf'], [quantize(clips['boleroM'], m) for m in levels], [f'middle, {m}-level quantization' for m in levels]) levels=[2 ** 16, 2 ** 8] multiplay(clips['sf'], [quantize(clips['boleroA'], m) for m in levels], [f'beginning, {m}-level quantization' for m in levels]) ``` # Numerical Experiments ## Sampling a sine wave with rational normalized frequency (the opening flute in the Bolero is close to a pure sinusoid) * conventional setup: sampling followed by quantization * $x(t) = \sin(2\pi f_0 t)$, sampled at $F_s$ and $f_0 = \frac{A}{B}F_s$ with $A$ and $B$ coprime <br/> * $x[n] = \sin\left(2\pi\frac{A}{B}n\right)$ * $x[n]$ will be periodic with period $B$ and it will span $A$ cycles over $B$ samples * natural Fourier representation: DFS $\mathbf{X}\in \mathbb{C}^B$ * single nonzero coefficient $X[A]$ ``` def quantized_sinusoid(A, B, M=0, initial_phase=1): # add an initial phase non commensurable with pi to eliminate quantization of zero values x = np.sin(initial_phase + 2 * np.pi * ((A * np.arange(0, B)) % B) / B) qx = quantize(x, M) return { 'original' : x, 'quantized' : qx, # square magnitude of the normalized DFS for positive frequencies 'DFS' : (np.abs(np.fft.fft(qx))[:int(np.ceil(B/2))] / B ) ** 2 } stem(quantized_sinusoid(3, 17)['DFS']) ``` ## Introducing quantization * $\mathbf{x} \rightarrow \hat{\mathbf{x}}$ * $\hat{\mathbf{x}}$ still periodic with a period of $B$ samples Distortion: * harmonic distortion will affects the DFS coefficient whose index is a multiple of $A$ * non-harmonic distortion will affect the other coefficients First note in the Bolero is a $C_5$, i.e. 523.25Hz. At $F_s=44.1$KHz we can pick $B=257$ and $A=3$. 
``` def find_nhd(A, dfs, full=False): # zero out harmonic components to highlight non-harmonic content N = int(np.ceil(len(dfs) / 2)) if full else len(dfs) nhd = np.copy(dfs[:N]) nhd[::A] = 0 return max(nhd), nhd def show_nhd(A=3, B=257, M=2): s = quantized_sinusoid(A, B, int(M)) peak, nhd = find_nhd(A, s['DFS']) plt.subplot(1, 2, 1) plt.plot(s['original']); plt.plot(s['quantized']); plt.title('signal') plt.subplot(1, 2, 2) stem(s['DFS']) plt.title('DFS') plt.figure() stem(nhd) plt.ylim(0, 0.0002) plt.title('non-harmonic components, max=' + str(peak)) display(widgets.interactive(show_nhd, M=widgets.Dropdown(options=['2', '3', '4', '128' ]), A=(1, 11), B=widgets.fixed(257))) ``` ## Searching for the worst case * try to get a sense for how bad non-harmonic distortion can get * let's iterate over all non-reducible $A/B$ ratios between $0$ and $1/2$ **Farey sequence** of order $N$ is the sequence of _non-reducible_ fractions in the unit interval with denominator smaller or equal than $N$ ``` def farey_sequence(n): """Build the order-N Farey sequence up to 1/2.""" farey = [] (a, b, c, d) = (0, 1, 1, n) while (c <= n): k = (n + b) // d (a, b, c, d) = (c, d, k * c - a, k * d - b) farey.append((a, b)) if a/b >= 0.5: break return farey for (a, b) in farey_sequence(50): plt.plot(b, a, 'o', color=plt.cm.tab20b(a % 20)) def find_max_nhd(N, M=2, parametric=False): max_value = (0, 0, 0) for (A, B) in farey_sequence(N): peak, _ = find_nhd(A, quantized_sinusoid(A, B, M)['DFS']) plt.plot(B if parametric else (A / B), peak, 'o', color=plt.cm.tab20b(A % 20)) if peak > max_value[0]: max_value = (peak, A, B) plt.title(f'max value is {max_value[0]}, frequency {max_value[1]}/{max_value[2]}') ``` ## Non-harmonic distortion for Farey ratios Maximum square magnitude of non-harmonic DFS coefficient as a function of $B$ and parametrized in $A$ ![title](img/nhd.png) ``` find_max_nhd(100, 2, parametric=True) find_max_nhd(100, 3, parametric=True) find_max_nhd(100, 32768, parametric=True) ``` Let's also look at the non-parametrized plots. The reason for the step-ladder patterns will be hopefully clear by the end. 
``` find_max_nhd(150, 2) find_max_nhd(150, 3) ``` # Theoretical Analysis ## Some DSP archeology Here is a really interesting paper from 1947 <center> <img width="1200" src="img/cpg_title.jpg"> </center> For context, in 1947 this was happening <br /><br /> <center> <img width="800" src="img/transistor.jpg"> </center> My second favorite quote of the paper: <center> <img width="600" src="img/cpg_quote2.jpg"> </center> My favorite quote of all time: <br /><br /><br /> <center> <img width="600" src="img/cpg_quote.jpg"> </center> ### Quantization before sampling <center> <img src="img/qbs.png" style="width: 800px;"/> </center> ``` t = np.arange(0, 2 * np.pi, 0.001) plt.plot(t, quantize(np.sin(t), 15)); ``` <img width="300" style="float: right; margin: 10px;" src="img/clavier.jpg"> Fundamental idea: * decompose this piecewise-constant periodic <br /> waveform as the sum of $N$ pairs of rectangular steps <br /> of appropriate width * express $\hat{x}(t)$ using a Fourier series expansion: <br /><br /> $$ \hat{x}(t) = \sum_{h=1}^{N} \sum_{k=0}^{\infty} \frac{4}{\pi N (2k+1)} \cos\left[(2k+1)\arcsin\left(\frac{2h-1}{2N}\right) \right]\sin((2k+1)t) $$ ``` def quantized_sinusoid_fs(N, terms=1000): t = np.arange(0, 2 * np.pi, 0.001) x = np.zeros(len(t)) for h in range(1, N): for k in range(0, terms): x = x + np.cos((2 * k + 1) * np.arcsin((2 * h - 1) / N / 2)) * np.sin((2 * k + 1) * t) / (2 * k + 1) x = x * 4 / np.pi / N return t, x plt.plot(*quantized_sinusoid_fs(8)); ``` ### Fundamental intuition: * $q(\sin(t))$ **contains harmonics at all odd multiples of the fundamental frequency** * quantization of a continuous-time sine wave produces only harmonic distortion * NHD is given by spectral lines beyond the Nyquist frequency aliased by the sampler ## More recent times Moving on to Robert Gray's 1990 paper ["Quantization Noise Spectra"](https://ieeexplore.ieee.org/document/59924). ### The normalized quantization error * consider the expression for the _normalized quantization error_ $$ \eta(x) = \frac{q(x) - x}{\Delta} = \frac{q(x) - x}{2/M} \quad \in [-0.5, 0.5]. $$ * $\eta(x)$ is a **periodic** function with period $M/2$ ``` x = np.arange(-1, 1, .001) for ix, M in enumerate([2, 3, 8]): plt.subplot(1, 3, ix+1) e = (quantize(x, M) - x) / (2 / M) plt.plot(x, e); plt.plot(x, e, '.'); ``` * $\eta(x)$ can be expressed as a Fourier Series $$ \eta(x) = \sum_{k=1}^{\infty} \frac{(-1)^{kM}}{\pi k}\sin\left(\pi k M x\right) $$ * $(-1)^{kM}$ is identically one for mid-riser quantizers and alternates in sign for deadzone quantizers. ``` def nqe_fs(x, M, terms=1000): e = np.zeros(len(x)) s = [1, -1 if M % 2 == 1 else 1] for k in range(1, terms): e = e + s[k % 2] * np.sin(np.pi * k * x * M) / (np.pi * k) return x, e for ix, M in enumerate([2, 3, 8]): plt.subplot(1, 3, ix+1) plt.plot(*nqe_fs(np.arange(-1, 1, .01), M)) ``` ### Quantization noise for a sinusoidal input * back to sampling followed by quantization * $x[n] = \sin(\omega_0 n + \theta)$ with $0 \le \omega_0 < 2\pi$. * $\eta[n] = \eta(\sin(\omega_0 n + \theta))$ and we are interested in computing its spectrum. * using complex exponentials for the Fourier series: $$ \eta(x) = \sum_{k \neq 0} \frac{(-1)^{kM}}{j2\pi k}e^{j\pi k M x}. $$ Now we need to replace $x$ by $\sin(\omega_0 n + \theta)$ and we end up with terms of the form $e^{j \alpha \sin \beta}$; these can be expanded in terms of Bessel functions using the so-called Jacobi-Anger formula: $$ e^{j \alpha \sin \omega} = \sum_{m=-\infty}^{\infty} J_m(\alpha)e^{j\omega m}. 
$$ Bessel functions are even or odd according to whether their order is even or odd, so: $$ \begin{align*} \eta[n] = \eta(\sin(\omega_0 n + \theta)) &= \sum_{k \neq 0} \frac{(-1)^{kM}}{j2\pi k}e^{j\pi k M \sin(\omega_0 n + \theta)} \\ &= \sum_{k \neq 0} \frac{(-1)^{kM}}{j2\pi k} \sum_{m=-\infty}^{\infty} J_m(\pi k M)e^{j (2m+1)\theta} e^{j (2m+1)\omega_0 n} \\ &= \sum_{m=-\infty}^{\infty} \left[ e^{j (2m+1)\theta} \sum_{k = 1}^{\infty} \frac{(-1)^{kM}}{j\pi k}J_{2m+1}(\pi k M) \right] e^{j (2m+1)\omega_0 n} \\ \\ &= \sum_{\varphi \in \Omega(\omega_0)} b(\varphi) e^{j \varphi n} \end{align*} $$ $$ \eta[n] = \sum_{\varphi \in \Omega(\omega_0)} b(\varphi) e^{j \varphi n} $$ * $\Omega(\omega_0) = \{(2m+1)\omega_0 \mod 2\pi\}_{m \in \mathbb{Z}}$, i.e., all the odd multiples of the fundamental frequency aliased over the $[0, 2\pi]$ interval; * for each frequency $\varphi \in \Omega(\omega_0)$: * $I(\varphi) = \{m \in \mathbb{Z} | (2m+1)\omega_0 \equiv \varphi \mod 2\pi\}$ * $\displaystyle b(\varphi) = \sum_{m \in I(\varphi)} \left[ e^{j (2m+1)\theta} \sum_{k = 1}^{\infty} \frac{(-1)^{kM}}{j\pi k}J_{2m+1}(\pi k M) \right]$ ### PSD of the error $$ P_{\omega_0}(e^{j\omega}) = \sum_{\varphi \in \Omega(\omega_0)} |b(\varphi)|^2 \delta(\omega - \varphi). $$ ### Case 1: rational normalized frequency Assume $\omega_0 = 2\pi(A/B)$, with $A$ and $B$ coprime, as in the numerical experiments * the set $\Omega(\omega_0)$ is finite: <br /> $\displaystyle\Omega\left(2\pi\frac{A}{B}\right) = \left\{\frac{2i\pi}{B}\right\}_i, \quad \begin{cases} i = 0, 1, 2, \ldots, B-1 & \mbox{if $A$ or $B$ even} \\ i = 1, 3, 5, \ldots, B-1 & \mbox{if $A$ and $B$ odd} \end{cases}$ * $\displaystyle I\left(\frac{2i\pi}{B}\right) = \{i[A]^{-1}_{B} + pB\}_{p \in \mathbb{Z}}$ The quantization error's PSD: * contains a finite number of spectral lines at multiples of $2\pi/B$ * the power associated to each line $|b(2i\pi/B)|^2$ should correspond to the square magnitude of the $i$-th coefficient of the $B$-point DFS of the error signal. The following function computes an approximation of the coefficients $|b(2i\pi/B)|^2$ for $\omega_0 = 2\pi(A/B)$, scaled to represent the non-normalized quantization error: ``` def nqe_sin_psd(A, B, M, phase=1): s = [1, -1 if M % 2 == 1 else 1] b = np.zeros(B, dtype=complex) m_lim, k_lim = max(1500, 2 * B), 600 for m in range(-m_lim, m_lim): c = 0 for k in range(1, k_lim): c += s[k % 2] * ss.jv(2 * m + 1, np.pi * k * M) / k c /= 1j * np.pi b[((2 * m + 1) * A) % B] += c * np.exp(1j * phase * (2 * m + 1)) # undo error normalization to obtain the real error PSD b = np.abs(b * (2 / M)) ** 2 print('Max NHD (theory): ', find_nhd(A, b, full=True)[0]) return b def nqe_sin_dfs(A, B, M, phase=1): s = quantized_sinusoid(A, B, M, phase) ne = (s['quantized'] - s['original']) b = np.abs(np.fft.fft(ne / B)) ** 2 print('Max NHD (FFT): ', find_nhd(A, b, full=True)[0]) return b P = (3, 8, 2) stem(nqe_sin_psd(*P), 'tab:green') stem(nqe_sin_dfs(*P), 'tab:red') P = (5, 14, 3) stem(nqe_sin_psd(*P), 'tab:green') stem(nqe_sin_dfs(*P), 'tab:red') ``` ### Case 2: irrational normalized frequency Assume $\omega_0$ not a rational multiple of $2\pi$ * the normalized frequency $\nu = \omega_0/(2\pi)$ will be an irrational number in $[0, 1)$ * the set of _normalized_ frequencies $\Omega'(\nu) = \{(2m+1)\nu \mod 1\}_{m \in \mathbb{Z}} = \{ \langle (2m+1)\nu \rangle\}_{m \in \mathbb{Z}}$ Weil's Equidistribution theorem shows that $\Omega'(\nu)$ cover the entire $[0, 1]$ interval _uniformly_. 
```
P = (150, 1021, 2)
stem(nqe_sin_psd(*P), 'tab:green')
stem(nqe_sin_dfs(*P), 'tab:red')
```
# Back to the non-harmonic distortion patterns

Recall the plot of the maximum non-harmonic distortion as a function of normalized frequency and its curious "stepladder" pattern:
```
find_max_nhd(150, 2)
```
Consider the non-normalized quantization error for a sinusoid of frequency $\omega_0 = 2\pi\nu$, with $0 < \nu < 1/2$:

$$
\begin{align*}
\frac{2}{M}\, \eta(\sin(2\pi\nu n))&= \sum_{m=-\infty}^{\infty} \left[ \frac{2}{M}\sum_{k = 1}^{\infty} \frac{(-1)^{kM}}{j\pi k}J_{2m+1}(\pi k M) \right] e^{j 2\pi(2m+1)\nu n} \\
&= \sum_{m=-\infty}^{\infty} c_M(m)\, e^{j 2\pi(2m+1)\nu n};
\end{align*}
$$

* for $(2m+1)\nu < 1/2$ the PSD lines are harmonically related to the fundamental
* for $(2m+1)\nu > 1/2$ we have aliasing and potentially non-harmonic distortion

$$
\frac{2}{M}\, \eta(\sin(2\pi\nu n)) = \sum_{m=-\infty}^{\infty} c_M(m)\, e^{j 2\pi(2m+1)\nu n}
$$

* the coefficients $c_M(m)$ depend only on the number of quantization levels $M$
* $|c_M(m)|^2$ decreases rather quickly with $m$:
```
def c_m(N, M=2):
    k_lim = 600000
    s = [1, -1 if M % 2 == 1 else 1]
    c = np.zeros(N, dtype=complex)
    for m in range(0, N):
        for k in range(1, k_lim):
            c[m] += s[k % 2] * ss.jv(2 * m + 1, np.pi * k * M) / k
        c[m] /= 1j * np.pi
    return np.abs(c * (2 / M)) ** 2

c2 = c_m(20, 2)
stem(c2)
```
* the max NHD is dominated by the first aliased component:<br />max NHD is $|c_M(m_0)|^2$ where $m_0$ is the minimum integer for which $(2m_0+1)\nu > 1/2$.
* for $\nu > 1/6$, NHD $\approx |c_M(1)|^2$
* for $1/10 < \nu < 1/6$, NHD $\approx |c_M(2)|^2$
* ...
```
find_max_nhd(150, 2)
for m in range(1, 5):
    plt.plot([0.5/(2*m+1), 0.5/(2*m+1)], [0, 0.015], color=plt.cm.tab10(m))
    plt.plot([0, 0.5], [c2[m], c2[m]], color=plt.cm.tab10(m))
```
What about $M=3$?

* $c_3(2) \approx 0$
* $c_3(m)$ non-monotonic
* NHD approx the same for $1/18 < \nu < 1/6$.
```
c3 = c_m(20, 3)
stem(c3)

find_max_nhd(150, 3)
for m in range(1, 5):
    plt.plot([0.5/(2*m+1), 0.5/(2*m+1)], [0, 0.015], color=plt.cm.tab10(m))
    plt.plot([0, 0.5], [c3[m], c3[m]], color=plt.cm.tab10(m))
```
# Conclusion

Does all of this matter? Yes and no:

* it's important to understand the consequences of quantization
* **dithering** techniques solve most of the problems we've seen here
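As a closing illustration of that last bullet, the sketch below reuses the `quantize` and `stem` helpers from this notebook and adds rectangular dither spanning one quantization step before a coarse 4-level quantizer; the isolated non-harmonic lines are largely traded for an approximately flat noise floor.
```
A, B, M = 3, 257, 4
x = np.sin(1 + 2 * np.pi * A / B * np.arange(B))

# rectangular dither spanning one quantization step (Delta = 2 / M)
rng = np.random.default_rng(0)
dither = rng.uniform(-1 / M, 1 / M, size=B)

plain = quantize(x, M)
dithered = quantize(x + dither, M)

plt.subplot(1, 2, 1)
stem((np.abs(np.fft.fft(plain)) / B)[:B // 2] ** 2)
plt.title('no dither')
plt.subplot(1, 2, 2)
stem((np.abs(np.fft.fft(dithered)) / B)[:B // 2] ** 2)
plt.title('rectangular dither')
```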
github_jupyter
# Large dataset testing --- Checking if the new large dataset class, which lazily loads batch files instead of diving a giant pre-loaded one, works well to train my models. ## Importing the necessary packages ``` import os # os handles directory/workspace changes import comet_ml # Comet.ml can log training metrics, parameters, do version control and parameter optimization import torch # PyTorch to create and apply deep learning models # import modin.pandas as pd # Optimized distributed version of Pandas import pandas as pd # Pandas to load and handle the data import numpy as np # NumPy to handle numeric and NaN operations import getpass # Get password or similar private inputs from ipywidgets import interact # Display selectors and sliders os.chdir('..') import data_utils as du # Data science and machine learning relevant methods os.chdir('notebooks/') du.set_random_seed(42) # Debugging packages import pixiedust # Debugging in Jupyter Notebook cells # Path to the parquet dataset files data_path = 'dummy_data/' # Path to the code files project_path = '' import Models # Machine learning models import utils # Context specific (in this case, for the eICU data) methods du.set_pandas_library(lib='pandas') ``` ## Initializing variables Comet ML settings: ``` comet_ml_project_name = input('Comet ML project name:') comet_ml_workspace = input('Comet ML workspace:') comet_ml_api_key = getpass.getpass('Comet ML API key') ``` Dataset parameters: ``` dataset_mode = None # The mode in which we'll use the data, either one hot encoded or pre-embedded ml_core = None # The core machine learning type we'll use; either traditional ML or DL use_delta_ts = None # Indicates if we'll use time variation info time_window_h = None # Number of hours on which we want to predict mortality already_embedded = None # Indicates if categorical features are already embedded when fetching a batch @interact def get_dataset_mode(data_mode=['one hot encoded', 'learn embedding', 'pre-embedded'], ml_or_dl=['deep learning', 'machine learning'], use_delta=[False, 'normalized', 'raw'], window_h=(0, 96, 24)): global dataset_mode, ml_core, use_delta_ts, time_window_h, already_embedded dataset_mode, ml_core, use_delta_ts, time_window_h = data_mode, ml_or_dl, use_delta, window_h already_embedded = dataset_mode == 'embedded' id_column = 'patientunitstayid' # Name of the sequence ID column ts_column = 'ts' # Name of the timestamp column label_column = 'label' # Name of the label column n_ids = 6 # Total number of sequences n_inputs = 9 # Number of input features n_outputs = 1 # Number of outputs padding_value = 999999 # Padding value used to fill in sequences up to the maximum sequence length ``` Data types: ``` dtype_dict = dict(patientunitstayid='uint', ts='uint', int_col='Int32', float_col='float32', cat_1_bool_1='UInt8', cat_1_bool_2='UInt8', cat_2_bool_1='UInt8', cat_3_bool_1='UInt8', cat_3_bool_2='UInt8', cat_3_bool_3='UInt8', cat_3_bool_4='UInt8', death_ts='Int32') ``` One hot encoding columns categorization: ``` cat_feat_ohe = dict(cat_1=['cat_1_bool_1', 'cat_1_bool_2'], cat_2=['cat_2_bool_1'], cat_3=['cat_3_bool_1', 'cat_3_bool_2', 'cat_3_bool_3', 'cat_3_bool_4']) cat_feat_ohe list(cat_feat_ohe.keys()) ``` Training parameters: ``` test_train_ratio = 0.25 # Percentage of the data which will be used as a test set validation_ratio = 1/3 # Percentage of the data from the training set which is used for validation purposes batch_size = 2 # Number of unit stays in a mini batch n_epochs = 1 # Number of epochs lr = 0.001 # Learning rate ``` 
Testing parameters: ``` metrics = ['loss', 'accuracy', 'AUC', 'AUC_weighted'] ``` ## Creating large dummy data Create each individual column as a NumPy array: ``` patientunitstayid_col = np.concatenate([np.repeat(1, 25), np.repeat(2, 17), np.repeat(3, 56), np.repeat(4, 138), np.repeat(5, 2000), np.repeat(6, 4000), np.repeat(7, 6000), np.repeat(8, 100000)]) patientunitstayid_col ts_col = np.concatenate([np.arange(25), np.arange(17), np.arange(56), np.arange(138), np.arange(2000), np.arange(4000), np.arange(6000), np.arange(100000)]) ts_col int_col = np.random.randint(0, 50, size=(112236)) np.random.shuffle(int_col) int_col float_col = np.random.uniform(3, 15, size=(112236)) np.random.shuffle(float_col) float_col cat_1_bool_1 = np.concatenate([np.random.randint(0, 2, size=(112236))]) np.random.shuffle(cat_1_bool_1) cat_1_bool_1 cat_1_bool_2 = np.concatenate([np.random.randint(0, 2, size=(112236))]) np.random.shuffle(cat_1_bool_2) cat_1_bool_2 cat_2_bool_1 = np.concatenate([np.random.randint(0, 2, size=(112236))]) np.random.shuffle(cat_2_bool_1) cat_2_bool_1 cat_3_bool_1 = np.concatenate([np.random.randint(0, 2, size=(112236))]) np.random.shuffle(cat_3_bool_1) cat_3_bool_1 cat_3_bool_2 = np.concatenate([np.random.randint(0, 2, size=(112236))]) np.random.shuffle(cat_3_bool_2) cat_3_bool_2 cat_3_bool_3 = np.concatenate([np.random.randint(0, 2, size=(112236))]) np.random.shuffle(cat_3_bool_3) cat_3_bool_3 cat_3_bool_4 = np.concatenate([np.random.randint(0, 2, size=(112236))]) np.random.shuffle(cat_3_bool_4) cat_3_bool_4 death_ts = np.concatenate([np.random.randint(0, 1000, size=(22236)), np.repeat(np.nan, 90000)]) np.random.shuffle(death_ts) death_ts data = np.column_stack([patientunitstayid_col, ts_col, int_col, float_col, cat_1_bool_1, cat_1_bool_2, cat_2_bool_1, cat_3_bool_1, cat_3_bool_2, cat_3_bool_3, cat_3_bool_4, death_ts]) data ``` Create a pandas dataframe with all the columns: ``` data_df = pd.DataFrame(data, columns=['patientunitstayid', 'ts', 'int_col', 'float_col', 'cat_1_bool_1', 'cat_1_bool_2', 'cat_2_bool_1', 'cat_3_bool_1', 'cat_3_bool_2', 'cat_3_bool_3', 'cat_3_bool_4', 'death_ts']) data_df data_df.dtypes data_df = du.utils.convert_dtypes(data_df, dtypes=dtype_dict, inplace=True) data_df.dtypes ``` Save in batch files: ``` du.data_processing.save_chunked_data(data_df, file_name='dmy_large_data', batch_size=1, id_column=id_column, data_path=data_path) pd.read_feather(f'{data_path}dmy_large_data_2.ftr') ``` ## Defining the dataset object ``` dataset = du.datasets.Large_Dataset(files_name='dmy_large_data', process_pipeline=utils.eICU_process_pipeline, id_column=id_column, initial_analysis=utils.eICU_initial_analysis, files_path=data_path, dataset_mode=dataset_mode, ml_core=ml_core, use_delta_ts=use_delta_ts, time_window_h=time_window_h, total_length=100000, padding_value=padding_value, cat_feat_ohe=cat_feat_ohe, dtype_dict=dtype_dict) # Make sure that we discard the ID, timestamp and label columns if n_inputs != dataset.n_inputs: n_inputs = dataset.n_inputs print(f'Changed the number of inputs to {n_inputs}') else: n_inputs if dataset_mode == 'learn embedding': embed_features = dataset.embed_features n_embeddings = dataset.n_embeddings else: embed_features = None n_embeddings = None print(f'Embedding features: {embed_features}') print(f'Number of embeddings: {n_embeddings}') dataset.__len__() dataset.bool_feat ``` ## Separating into train and validation sets ``` (train_dataloader, val_dataloader, test_dataloader, train_indeces, val_indeces, test_indeces) = 
du.machine_learning.create_train_sets(dataset,
                                      test_train_ratio=test_train_ratio,
                                      validation_ratio=validation_ratio,
                                      batch_size=batch_size,
                                      get_indices=True,
                                      num_workers=2)

if ml_core == 'deep learning':
    # Ignore the indeces, we only care about the dataloaders when using neural networks
    del train_indeces
    del val_indeces
    del test_indeces
else:
    # Get the full arrays of each set
    train_features, train_labels = dataset.X[train_indeces], dataset.y[train_indeces]
    val_features, val_labels = dataset.X[val_indeces], dataset.y[val_indeces]
    test_features, test_labels = dataset.X[test_indeces], dataset.y[test_indeces]
    # Ignore the dataloaders, we only care about the full arrays when using scikit-learn or XGBoost
    del train_dataloader
    del val_dataloader
    del test_dataloader

if ml_core == 'deep learning':
    print(next(iter(train_dataloader))[0])
else:
    print(train_features[:32])

next(iter(train_dataloader))[0].shape

if ml_core == 'deep learning':
    print(next(iter(val_dataloader))[0])
else:
    print(val_features[:32])

if ml_core == 'deep learning':
    print(next(iter(test_dataloader))[0])
else:
    print(test_features[:32])

next(iter(test_dataloader))[0].shape
```

## Training models

### Vanilla RNN

#### Creating the model

Model parameters:

```
n_hidden = 10                              # Number of hidden units
n_layers = 3                               # Number of RNN layers
p_dropout = 0.2                            # Probability of dropout
embedding_dim = [3, 2, 4]                  # List of embedding dimensions

if use_delta_ts == 'normalized':
    # Count the delta_ts column as another feature, only ignore ID, timestamp and label columns
    n_inputs = dataset.n_inputs + 1
elif use_delta_ts == 'raw':
    raise Exception('ERROR: When using a model of type Vanilla RNN, we can\'t use raw delta_ts. Please either normalize it (use_delta_ts = "normalized") or discard it (use_delta_ts = False).')
```

Instantiating the model:

```
model = Models.VanillaRNN(n_inputs, n_hidden, n_outputs, n_layers, p_dropout,
                          embed_features=embed_features, n_embeddings=n_embeddings,
                          embedding_dim=embedding_dim, total_length=100000)
model
```

Define the name that will be given to the models that will be saved:

```
model_name = 'rnn'
if dataset_mode == 'pre-embedded':
    model_name = model_name + '_pre_embedded'
elif dataset_mode == 'learn embedding':
    model_name = model_name + '_with_embedding'
elif dataset_mode == 'one hot encoded':
    model_name = model_name + '_one_hot_encoded'
if use_delta_ts is not False:
    model_name = model_name + '_delta_ts'
model_name
```

#### Training and testing the model

```
next(model.parameters())

model = du.deep_learning.train(model, train_dataloader, val_dataloader, test_dataloader,
                               dataset=dataset, padding_value=padding_value,
                               batch_size=batch_size, n_epochs=n_epochs, lr=lr,
                               models_path=f'{project_path}models/', model_name=model_name,
                               ModelClass=Models.VanillaRNN, is_custom=False, do_test=True,
                               metrics=metrics, log_comet_ml=False,
                               already_embedded=already_embedded)

next(model.parameters())
```

#### Hyperparameter optimization

```
config_name = input('Hyperparameter optimization configuration file name:')

val_loss_min, exp_name_min = du.machine_learning.optimize_hyperparameters(Models.VanillaRNN,
                                                                           train_dataloader=train_dataloader,
                                                                           val_dataloader=val_dataloader,
                                                                           test_dataloader=test_dataloader,
                                                                           dataset=dataset,
                                                                           config_name=config_name,
                                                                           comet_ml_api_key=comet_ml_api_key,
                                                                           comet_ml_project_name=comet_ml_project_name,
                                                                           comet_ml_workspace=comet_ml_workspace,
                                                                           n_inputs=n_inputs,
                                                                           id_column=id_column,
                                                                           inst_column=ts_column,
                                                                           id_columns_idx=[0, 1],
                                                                           n_outputs=n_outputs,
                                                                           model_type='multivariate_rnn',
                                                                           is_custom=False,
                                                                           models_path='models/',
model_name=model_name, array_param='embedding_dim', metrics=metrics, config_path=f'{project_path}notebooks/sandbox/', var_seq=True, clip_value=0.5, padding_value=padding_value, batch_size=batch_size, n_epochs=n_epochs, lr=lr, comet_ml_save_model=True, embed_features=embed_features, n_embeddings=n_embeddings) exp_name_min ```
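Since the hyperparameter optimization above depends on a Comet ML configuration file whose format isn't shown here, a rough manual alternative is sketched below as an illustration only. It loops over a few arbitrary hyperparameter combinations (assumed values) and reuses the same `du.deep_learning.train` call from the training cell, leaving the comparison of validation metrics to the training logs; it is not the mechanism used by `optimize_hyperparameters`.

```
# Hypothetical manual sweep (deep learning path only); hyperparameter values are arbitrary examples
for n_hidden_try, n_layers_try, lr_try in [(10, 2, 0.001), (20, 3, 0.001), (10, 3, 0.0001)]:
    candidate = Models.VanillaRNN(n_inputs, n_hidden_try, n_outputs, n_layers_try, p_dropout,
                                  embed_features=embed_features, n_embeddings=n_embeddings,
                                  embedding_dim=embedding_dim, total_length=100000)
    candidate = du.deep_learning.train(candidate, train_dataloader, val_dataloader, test_dataloader,
                                       dataset=dataset, padding_value=padding_value,
                                       batch_size=batch_size, n_epochs=n_epochs, lr=lr_try,
                                       models_path=f'{project_path}models/',
                                       model_name=f'{model_name}_h{n_hidden_try}_l{n_layers_try}',
                                       ModelClass=Models.VanillaRNN, is_custom=False, do_test=True,
                                       metrics=metrics, log_comet_ml=False,
                                       already_embedded=already_embedded)
```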
**QC of ETL starting with GDC release 24 clinical tables**

This notebook focuses on the QC of program **MMRF**, data_category clinical. This program has a total of five clinical tables present in this release. The tables are listed below:

---

- `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF`
- `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat`
- `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist`
- `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow`
- `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow__mol_test`

##QC table checklist

Multiple one-to-many tables are present. QC list:

**1. Check schema**

Are all the fields labeled? Is there a table description? Do the field labels make sense for all fields? Are the labels correct?

**2. Look at table row number and size**

Do these metrics make sense?

**3. Scroll through table manually**

See if anything stands out - empty columns, etc. The BigQuery table search user interface is useful for this test run. The test tier points to the isb-etl-open. [ISB-CGC BigQuery table search test tier](https://isb-cgc-test.appspot.com/bq_meta_search/)

Run a manual check in the console with the steps mentioned in step 1.

*Note from developer: There are some columns which are sparsely populated (so they might look empty if you’re just scrolling through the table in the GUI), but there should be at least one non-null entry for every column in every table.*

**4. Number of case_id versus BigQuery metadata table**

**5. Check for any duplicate rows present in the table**

**6. Verify case_id count of table against master rel_clinical_data table**

##Reference material

* [NextGenETL](https://github.com/isb-cgc/NextGenETL) GitHub repository
* [ETL QC SOP draft](https://docs.google.com/document/d/1Wskf3BxJLkMjhIXD62B6_TG9h5KRcSp8jSAGqcCP1lQ/edit)

##Before you begin

You need to load the BigQuery module, authenticate yourself, create a client variable, and load the necessary libraries.

```
from google.colab import auth
try:
    auth.authenticate_user()
    print('You have been successfully authenticated!')
except:
    print('You have not been authenticated.')

from google.cloud import bigquery
try:
    project_id = 'isb-project-zero' # Update your_project_number with your project number
    client = bigquery.Client(project=project_id)
    print('BigQuery client successfully initialized')
except:
    print('Failed')

# Install pypika to build a Query
!pip install pypika

# Import from PyPika
from pypika import Query, Table, Field, Order
import pandas
```

## READY TO BEGIN TESTING

##Clin MMRF

**Testing Full ID** `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF`

[Table location](https://console.cloud.google.com/bigquery?authuser=1&folder=&organizationId=&project=isb-project-zero&p=isb-project-zero&d=GDC_Clinical_Data&t=rel24_clin_MMRF&page=table)

Source : GDC API

Release version : v24

###test 1 - schema verification

**1. Check schema**

Are all the fields labeled? Is there a table description? Do the field labels make sense for all fields? Are the labels correct?

Google documentation column descriptions for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#column_field_paths_view).

Google documentation table options for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#options_table).
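Because the same schema queries are repeated below for each of the five MMRF tables, a small helper like the sketch below could run the table-options lookup for all of them at once. It is only an optional convenience written as a plain SQL string against the same `INFORMATION_SCHEMA.TABLE_OPTIONS` view that the following cells query with PyPika.

```
# Optional helper (sketch): fetch table options for every rel24 MMRF clinical table in one loop
mmrf_tables = ['rel24_clin_MMRF', 'rel24_clin_MMRF_diag__treat', 'rel24_clin_MMRF_fam_hist',
               'rel24_clin_MMRF_follow', 'rel24_clin_MMRF_follow__mol_test']

def table_options(table_name):
    sql = f"""
        SELECT table_name, option_name, option_type, option_value
        FROM `isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS
        WHERE table_name = '{table_name}'
    """
    return client.query(sql).to_dataframe()

for name in mmrf_tables:
    print(name, '- options found:', len(table_options(name)))
```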
``` #return all table information for rel24_clin_MMRF clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLES') clin_query = Query.from_(clin_table) \ .select(' table_catalog, table_schema, table_name, table_type ') \ .where(clin_table.table_name=='rel24_clin_MMRF') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() clin.head() #return all table information for rel24_clin_MMRF clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS') clin_query = Query.from_(clin_table) \ .select(' table_name, option_name, option_type, option_value ') \ .where(clin_table.table_name=='rel24_clin_MMRF') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows for i in range(len(clin)): print(clin['option_name'][i] + '\n') print('\t' + clin['option_value'][i] + '\n') print('\t' + clin['option_type'][i] + '\n') else: print('QC of friendly name, table description and labels --- FAILED') #check for empty schemas in dataset rel24_clin_MMRF clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS') clin_query = Query.from_(clin_table) \ .select(' table_name, option_name, option_type, option_value ') \ .where(clin_table.table_name=='rel24_clin_MMRF') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows print("Are there any empty cells in the table schema?") clin.empty ``` FIELD Descriptions pulled example below ``` #list of field descriptions for table rel24_clin_MMRF clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS') clin_query = Query.from_(clin_table) \ .select('table_name, column_name, description') \ .where(clin_table.table_name=='rel24_clin_MMRF') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows for i in range(len(clin)): print(clin['table_name'][i] + '\n') print('\t' + clin['column_name'][i] + '\n') print('\t' + clin['description'][i] + '\n') # check for empty schemas in dataset rel24_clin_MMRF clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS') clin_query = Query.from_(clin_table) \ .select('table_name, column_name, description') \ .where(clin_table.table_name=='rel24_clin_MMRF') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows print("Are there any empty cells in the table schema?") print(clin) ``` ###test 2 - row number verification **2. Look at table row number and size** Do these metrics make sense? ``` %%bigquery --project isb-project-zero SELECT COUNT(submitter_id) FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF` %%bigquery --project isb-project-zero SELECT COUNT(case_id) FROM `isb-project-zero.GDC_Clinical_Data.rel23_clin_MMRF` %%bigquery --project isb-project-zero SELECT * FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF` ``` ###test 3 - manual verification **3. Scroll through table manually** See if anything stands out - empty columns, etc. The BigQuery table search user interface is useful in for this test run. The test tier points to the isb-etl-open. ISB-CGC BigQuery table search [test tier](https://isb-cgc-test.appspot.com/bq_meta_search/). 
BigQuery console [isb-project-zero](https://console.cloud.google.com/bigquery?authuser=1&folder=&organizationId=&project=isb-project-zero&p=isb-project-zero&d=GDC_Clinical_Data&t=rel24_clin_MMRF&page=table). Run a manual check in the console with the steps mentioned in step 1 Are all the fields labeled? Is there a table description? Do the field labels make sense for all fields? Are the labels correct? *Note from developer: There are some columns which are sparsely populated (so they might look empty if you’re just scrolling through the table in the GUI), but there should be at least one non-null entry for every column in every table.* ###test 4 - case_gdc_id file metadata table count verification **4. Number of case_id versus BigQuery metadata table** ``` # clinical case_id counts table reuslts below # Query below will display the number of cases presents in this table. clin_table = Table('`isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF`') clin_query = Query.from_(clin_table) \ .select(' DISTINCT case_id, count(*) as count') \ .groupby('case_id') clin_query_clean = str(clin_query).replace('"', "") #print(clin_query_clean) clin = client.query(clin_query_clean).to_dataframe() print('number of case from submitter_id = ' + str(len(clin.index))) # GDC file metadata table case_gdc_id count for clinical below %%bigquery --project isb-project-zero SELECT case_gdc_id, program_name FROM `isb-project-zero.GDC_metadata.rel24_caseData` where program_name = 'MMRF' group by case_gdc_id, program_name %%bigquery --project isb-project-zero SELECT distinct case_id, count(case_id) as count FROM `isb-project-zero.GDC_metadata.rel24_fileData_current` as active, `isb-project-zero.GDC_Clinical_Data.rel24_clinical_data` as clinical WHERE program_name = 'MMRF' AND active.case_gdc_id = clinical.case_id group by case_id order by count ``` ###test 5 - duplication verifcation **5. Check for any duplicate rows present in the table** ``` %%bigquery --project isb-project-zero SELECT count(submitter_id) AS count FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF` GROUP BY submitter_id, case_id, diag__treat__count, fam_hist__count, follow__count, primary_site, disease_type, index_date, demo__demographic_id, demo__gender, demo__race, demo__ethnicity, demo__vital_status, demo__days_to_birth, demo__age_at_index, demo__days_to_death, demo__cause_of_death, demo__state, demo__created_datetime, demo__updated_datetime, diag__diagnosis_id, diag__primary_diagnosis, diag__days_to_last_known_disease_status, diag__progression_or_recurrence, diag__site_of_resection_or_biopsy, diag__age_at_diagnosis, diag__days_to_last_follow_up, diag__tumor_grade, diag__last_known_disease_status, diag__morphology, diag__tumor_stage, diag__iss_stage, diag__tissue_or_organ_of_origin, diag__state, diag__created_datetime, diag__updated_datetime, state, created_datetime, updated_datetime ORDER BY count DESC LIMIT 10 ``` ###test 6 - case_id master clinical data table count verifcation **6. 
Verify case_id count of table against master rel_clinical_data table** ``` # case_id count from the program MMRF clinical table %%bigquery --project isb-project-zero select distinct case_id, count(case_id) as count from `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF` group by case_id order by count # case_id count from the master clinical table %%bigquery --project isb-project-zero SELECT distinct case_id, count(case_id) as count FROM `isb-project-zero.GDC_metadata.rel24_fileData_current` as active, `isb-project-zero.GDC_Clinical_Data.rel24_clinical_data` as clinical WHERE program_name = 'MMRF' AND active.case_gdc_id = clinical.case_id group by case_id order by count ``` ##Clin MMRF_diag__treat **Testing Full ID** `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat` [Table location](https://console.cloud.google.com/bigquery?authuser=1&folder=&organizationId=&project=isb-project-zero&p=isb-project-zero&d=GDC_Clinical_Data&t=rel24_clin_MMRF_diag__treat&page=table) Source : GDC API Release version : v24 ###test 1 - schema verification **1. Check schema** Are all the fields labeled? Is there a table description? Do the field labels make sense for all fields Are the labels correct Google documentation column descriptions for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#column_field_paths_view). Google documentation table options for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#options_table). ``` #return all table information for rel24_clin_MMRF_diag__treat clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLES') clin_query = Query.from_(clin_table) \ .select(' table_catalog, table_schema, table_name, table_type ') \ .where(clin_table.table_name=='rel24_clin_MMRF_diag__treat') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() clin.head() #return all table information for rel24_clin_MMRF_diag__treat clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS') clin_query = Query.from_(clin_table) \ .select(' table_name, option_name, option_type, option_value ') \ .where(clin_table.table_name=='rel24_clin_MMRF_diag__treat') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows for i in range(len(clin)): print(clin['option_name'][i] + '\n') print('\t' + clin['option_value'][i] + '\n') print('\t' + clin['option_type'][i] + '\n') else: print('QC of friendly name, table description and labels --- FAILED') #check for empty schemas in dataset rel24_clin_MMRF_diag__treat clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS') clin_query = Query.from_(clin_table) \ .select(' table_name, option_name, option_type, option_value ') \ .where(clin_table.table_name=='rel24_clin_MMRF_diag__treat') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows print("Are there any empty cells in the table schema?") clin.empty ``` FIELD Descriptions pulled example below ``` #list of field descriptions for table rel24_clin_MMRF_diag__treat clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS') clin_query = Query.from_(clin_table) \ .select('table_name, column_name, description') \ .where(clin_table.table_name=='rel24_clin_MMRF_diag__treat') \ clin_query_clean = 
str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows for i in range(len(clin)): print(clin['table_name'][i] + '\n') print('\t' + clin['column_name'][i] + '\n') print('\t' + clin['description'][i] + '\n') # check for empty schemas in dataset rel24_clin_MMRF_diag__treat clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS') clin_query = Query.from_(clin_table) \ .select('table_name, column_name, description') \ .where(clin_table.table_name=='rel24_clin_MMRF_diag__treat') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() print(clin) ``` ###test 2 - row number verification **2. Look at table row number and size** Do these metrics make sense? ``` %%bigquery --project isb-project-zero SELECT COUNT(case_id) FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat` %%bigquery --project isb-project-zero SELECT COUNT(case_id) FROM `isb-project-zero.GDC_Clinical_Data.rel23_clin_MMRF_diag__treat` %%bigquery --project isb-project-zero SELECT * FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat` ``` ###test 3 - manual verification **3. Scroll through table manually** See if anything stands out - empty columns, etc. The BigQuery table search user interface is useful in for this test run. The test tier points to the isb-etl-open. ISB-CGC BigQuery table search [test tier](https://isb-cgc-test.appspot.com/bq_meta_search/). BigQuery console [isb-project-zero](https://console.cloud.google.com/bigquery?authuser=1&folder=&organizationId=&project=isb-project-zero&p=isb-project-zero&d=GDC_Clinical_Data&t=rel24_clin_MMRF_diag__treat&page=table). Run a manual check in the console with the steps mentioned in step 1. Are all the fields labeled? Is there a table description? Do the field labels make sense for all fields? Are the labels correct? ###test 4 - case_gdc_id file metadata table count verification **4. Number of case_id versus BigQuery metadata table** ``` # clinical case_id counts table reuslts below # Query below will display the number of cases presents in this table. clin_table = Table('`isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat`') clin_query = Query.from_(clin_table) \ .select(' DISTINCT case_id, count(*) as count') \ .groupby('case_id') clin_query_clean = str(clin_query).replace('"', "") #print(clin_query_clean) clin = client.query(clin_query_clean).to_dataframe() print('number of case from submitter_id = ' + str(len(clin.index))) # GDC file metadata table case_gdc_id count for clinical below %%bigquery --project isb-project-zero SELECT case_gdc_id, program_name FROM `isb-project-zero.GDC_metadata.rel24_caseData` where program_name = 'MMRF' group by case_gdc_id, program_name %%bigquery --project isb-project-zero SELECT distinct case_id, count(case_id) as count FROM `isb-project-zero.GDC_metadata.rel24_caseData` as active, `isb-project-zero.GDC_Clinical_Data.rel24_clinical_data` as clinical WHERE program_name = 'MMRF' AND active.case_gdc_id = clinical.case_id group by case_id order by count ``` ###test 5 - duplication verifcation **5. 
Check for any duplicate rows present in the table** ``` %%bigquery --project isb-project-zero SELECT count(case_id) AS count FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat` group by diag__treat__treatment_id, diag__diagnosis_id, case_id, diag__treat__days_to_treatment_start, diag__treat__treatment_type, diag__treat__treatment_or_therapy, diag__treat__therapeutic_agents, diag__treat__days_to_treatment_end, diag__treat__regimen_or_line_of_therapy, diag__treat__state, diag__treat__created_datetime, diag__treat__updated_datetime ORDER BY count DESC LIMIT 10 ``` ###test 6 - case_id master clinical data table count verifcation **6. Verify case_id count of table against master rel_clinical_data table** ``` # case_id count from the program MMRF clinical table %%bigquery --project isb-project-zero select distinct case_id, count(case_id) as count from `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF` group by case_id order by count # case_id count from the program MMRF_diag__treat clinical table %%bigquery --project isb-project-zero select distinct case_id, count(case_id) as count from `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_diag__treat` group by case_id order by count ``` ##Clin MMRF_fam_hist **Testing Full ID** `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist` [Table location](https://console.cloud.google.com/bigquery?project=high-transit-276919&authuser=2&p=isb-project-zero&d=GDC_metadata&t=rel24_fam_histpage=table) Source : GDC API Release version : v24 ###test 1 - schema verification **1. Check schema** Are all the fields labeled? Is there a table description? Do the field labels make sense for all fields Are the labels correct Google documentation column descriptions for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#column_field_paths_view). Google documentation table options for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#options_table). 
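As an optional complement to eyeballing the printed descriptions in the next cells, the sketch below queries `INFORMATION_SCHEMA.COLUMN_FIELD_PATHS` directly for any rel24_clin_MMRF_fam_hist column whose description is missing or empty; it is an extra convenience, not part of the original checklist.

```
# Optional check (sketch): list fam_hist columns that have a NULL or empty description
sql = """
    SELECT column_name
    FROM `isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS
    WHERE table_name = 'rel24_clin_MMRF_fam_hist'
      AND (description IS NULL OR description = '')
"""
missing_desc = client.query(sql).to_dataframe()
print('Columns without a description:', len(missing_desc))
missing_desc
```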
``` #return all table information for rel24_clin_MMRF_fam_hist clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLES') clin_query = Query.from_(clin_table) \ .select(' table_catalog, table_schema, table_name, table_type ') \ .where(clin_table.table_name=='rel24_clin_MMRF_fam_hist') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() clin.head() #return all table information for rel24_clin_MMRF_fam_hist clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS') clin_query = Query.from_(clin_table) \ .select(' table_name, option_name, option_type, option_value ') \ .where(clin_table.table_name=='rel24_clin_MMRF_fam_hist') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows for i in range(len(clin)): print(clin['option_name'][i] + '\n') print('\t' + clin['option_value'][i] + '\n') print('\t' + clin['option_type'][i] + '\n') else: print('QC of friendly name, table description and labels --- FAILED') #check for empty schemas in dataset rel24_clin_MMRF_fam_hist clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS') clin_query = Query.from_(clin_table) \ .select(' table_name, option_name, option_type, option_value ') \ .where(clin_table.table_name=='rel24_clin_MMRF_fam_hist') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows print("Are there any empty cells in the table schema?") clin.empty ``` FIELD Descriptions pulled example below ``` #list of field descriptions for table rel24_clin_MMRF_fam_hist clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS') clin_query = Query.from_(clin_table) \ .select('table_name, column_name, description') \ .where(clin_table.table_name=='rel24_clin_MMRF_fam_hist') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows for i in range(len(clin)): print(clin['table_name'][i] + '\n') print('\t' + clin['column_name'][i] + '\n') print('\t' + clin['description'][i] + '\n') # check for empty schemas in dataset rel24_clin_MMRF_fam_hist clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS') clin_query = Query.from_(clin_table) \ .select('table_name, column_name, description') \ .where(clin_table.table_name=='rel24_clin_MMRF_fam_hist') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() print(clin) ``` ###test 2 - row number verification **2. Look at table row number and size** Do these metrics make sense? ``` %%bigquery --project isb-project-zero SELECT COUNT(case_id) FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist` %%bigquery --project isb-project-zero SELECT COUNT(case_id) FROM `isb-project-zero.GDC_Clinical_Data.rel23_clin_MMRF_fam_hist` %%bigquery --project isb-project-zero SELECT * FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist` ``` ###test 3 - manual verification **3. Scroll through table manually** See if anything stands out - empty columns, etc. The BigQuery table search user interface is useful in for this test run. The test tier points to the isb-etl-open. ISB-CGC BigQuery table search [test tier](https://isb-cgc-test.appspot.com/bq_meta_search/). 
BigQuery console [isb-project-zero](https://console.cloud.google.com/bigquery?project=high-transit-276919&authuser=2&p=isb-project-zero&d=GDC_metadata&t=rel24_clin_MMRF_fam_hist&page=table). Run a manual check in the console with the steps mentioned in step 1 Are all the fields labeled? Is there a table description? Do the field labels make sense for all fields? Are the labels correct? ###test 4 - case_gdc_id file metadata table count verification **4. Number of case_id versus BigQuery metadata table** ``` # clinical case_id counts table reuslts below # Query below will display the number of cases presents in this table. clin_table = Table('`isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist`') clin_query = Query.from_(clin_table) \ .select(' DISTINCT case_id, count(*) as count') \ .groupby('case_id') clin_query_clean = str(clin_query).replace('"', "") #print(clin_query_clean) clin = client.query(clin_query_clean).to_dataframe() print('number of case from submitter_id = ' + str(len(clin.index))) # GDC file metadata table case_gdc_id count for clinical below %%bigquery --project isb-project-zero SELECT case_gdc_id, program_name FROM `isb-project-zero.GDC_metadata.rel24_caseData` where program_name = 'MMRF' group by case_gdc_id, program_name ``` ###test 5 - duplication verifcation **5. Check for any duplicate rows present in the table** ``` %%bigquery --project isb-project-zero SELECT count(case_id) AS count FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist` group by fam_hist__family_history_id, case_id, fam_hist__relative_with_cancer_history, fam_hist__relationship_primary_diagnosis, fam_hist__relationship_type, fam_hist__relationship_gender, fam_hist__state, fam_hist__created_datetime, fam_hist__updated_datetime ORDER BY count DESC LIMIT 10 ``` ###test 6 - case_id master clinical data table count verifcation **6. Verify case_id count of table against master rel_clinical_data table** ``` # case_id count from the program MMRF clinical table %%bigquery --project isb-project-zero select distinct case_id, count(case_id) as count from `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF` group by case_id order by count # case_id count from the program MMRF_fam_hist clinical table %%bigquery --project isb-project-zero select distinct case_id, count(case_id) as count from `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist` group by case_id order by count ``` ##Clin MMRF_follow **Testing Full ID** `isb-project-zero.GDC_Clinical_Data.rel23_clin_MMRF_follow` [Table location](https://console.cloud.google.com/bigquery?project=high-transit-276919&authuser=2&p=isb-project-zero&d=GDC_metadata&t=rel23_fileData_legacy&page=table) Source : GDC API Release version : v24 ###test 1 - schema verification **1. Check schema** Are all the fields labeled? Is there a table description? Do the field labels make sense for all fields Are the labels correct Google documentation column descriptions for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#column_field_paths_view). Google documentation table options for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#options_table). 
``` #return all table information for rel24_clin_MMRF_follow clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLES') clin_query = Query.from_(clin_table) \ .select(' table_catalog, table_schema, table_name, table_type ') \ .where(clin_table.table_name=='rel24_clin_MMRF_follow') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() clin.head() #return all table information for rel24_clin_MMRF_follow clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS') clin_query = Query.from_(clin_table) \ .select(' table_name, option_name, option_type, option_value ') \ .where(clin_table.table_name=='rel24_clin_MMRF_follow') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows for i in range(len(clin)): print(clin['option_name'][i] + '\n') print('\t' + clin['option_value'][i] + '\n') print('\t' + clin['option_type'][i] + '\n') else: print('QC of friendly name, table description and labels --- FAILED') #check for empty schemas in dataset rel24_clin_MMRF_follow clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS') clin_query = Query.from_(clin_table) \ .select(' table_name, option_name, option_type, option_value ') \ .where(clin_table.table_name=='rel24_clin_MMRF_follow') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows print("Are there any empty cells in the table schema?") clin.empty ``` FIELD Descriptions pulled example below ``` #list of field descriptions for table rel24_clin_MMRF_follow clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS') clin_query = Query.from_(clin_table) \ .select('table_name, column_name, description') \ .where(clin_table.table_name=='rel24_clin_MMRF_follow') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows for i in range(len(clin)): print(clin['table_name'][i] + '\n') print('\t' + clin['column_name'][i] + '\n') print('\t' + clin['description'][i] + '\n') # check for empty schemas in dataset rel24_clin_MMRF_follow clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS') clin_query = Query.from_(clin_table) \ .select('table_name, column_name, description') \ .where(clin_table.table_name=='rel24_clin_MMRF_follow') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() print(clin) ``` ###test 2 - row number verification **2. Look at table row number and size** Do these metrics make sense? ``` %%bigquery --project isb-project-zero SELECT COUNT(case_id) FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow` %%bigquery --project isb-project-zero SELECT COUNT(case_id) FROM `isb-project-zero.GDC_Clinical_Data.rel23_clin_MMRF_follow` %%bigquery --project isb-project-zero SELECT * FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_fam_hist` ``` ###test 3 - manual verification **3. Scroll through table manually** See if anything stands out - empty columns, etc. The BigQuery table search user interface is useful in for this test run. The test tier points to the isb-etl-open. ISB-CGC BigQuery table search [test tier](https://isb-cgc-test.appspot.com/bq_meta_search/). 
BigQuery console [isb-project-zero](https://console.cloud.google.com/bigquery?project=high-transit-276919&authuser=2&p=isb-project-zero&d=GDC_metadata&t=rel24_clin_MMRF_fam_hist&page=table). Run a manual check in the console with the steps mentioned in step 1 Are all the fields labeled? Is there a table description? Do the field labels make sense for all fields? Are the labels correct? ###test 4 - case_gdc_id file metadata table count verification **4. Number of case_id versus BigQuery metadata table** ``` # clinical case_id counts table reuslts below # Query below will display the number of cases presents in this table. clin_table = Table('`isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow`') clin_query = Query.from_(clin_table) \ .select(' DISTINCT case_id, count(*) as count') \ .groupby('case_id') clin_query_clean = str(clin_query).replace('"', "") #print(clin_query_clean) clin = client.query(clin_query_clean).to_dataframe() print('number of case from submitter_id = ' + str(len(clin.index))) # GDC file metadata table case_gdc_id count for clinical below %%bigquery --project isb-project-zero SELECT case_gdc_id, program_name FROM `isb-project-zero.GDC_metadata.rel24_caseData` where program_name = 'MMRF' group by case_gdc_id, program_name ``` ###test 5 - duplication verifcation **5. Check for any duplicate rows present in the table** ``` %%bigquery --project isb-project-zero SELECT count(case_id) AS count FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow` group by follow__follow_up_id, case_id, follow__mol_test__count, follow__days_to_follow_up, follow__height, follow__weight, follow__ecog_performance_status, follow__state, follow__created_datetime, follow__updated_datetime ORDER BY count DESC LIMIT 10 ``` ###test 6 - case_id master clinical data table count verifcation **6. Verify case_id count of table against master rel_clinical_data table** ``` # case_id count from the program MMRF clinical table %%bigquery --project isb-project-zero select distinct case_id, count(case_id) as count from `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF` group by case_id order by count # case_id count from the program MMRF_follow clinical table %%bigquery --project isb-project-zero select distinct case_id, count(case_id) as count from `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow` group by case_id order by count ``` ##Clin MMRF_follow__mol_test **Testing Full ID** `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow__mol_test` [Table location](https://console.cloud.google.com/bigquery?project=high-transit-276919&authuser=2&p=isb-project-zero&d=GDC_metadata&t=rel23_fileData_slide2caseIDmap&page=table) Source : GDC API Release version : v24 ###test 1 - schema verification **1. Check schema** Are all the fields labeled? Is there a table description? Do the field labels make sense for all fields Are the labels correct Google documentation column descriptions for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#column_field_paths_view). Google documentation table options for [reference](https://cloud.google.com/bigquery/docs/information-schema-tables#options_table). 
``` #return all table information for rel24_clin_MMRF_follow__mol_test clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLES') clin_query = Query.from_(clin_table) \ .select(' table_catalog, table_schema, table_name, table_type ') \ .where(clin_table.table_name=='rel24_clin_MMRF_follow__mol_test') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() clin.head() #return all table information for rel24_clin_MMRF_follow__mol_test clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS') clin_query = Query.from_(clin_table) \ .select(' table_name, option_name, option_type, option_value ') \ .where(clin_table.table_name=='rel24_clin_MMRF_follow__mol_test') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows for i in range(len(clin)): print(clin['option_name'][i] + '\n') print('\t' + clin['option_value'][i] + '\n') print('\t' + clin['option_type'][i] + '\n') else: print('QC of friendly name, table description and labels --- FAILED') #check for empty schemas in dataset rel24_clin_MMRF_follow__mol_test clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.TABLE_OPTIONS') clin_query = Query.from_(clin_table) \ .select(' table_name, option_name, option_type, option_value ') \ .where(clin_table.table_name=='rel24_clin_MMRF_follow__mol_test') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows print("Are there any empty cells in the table schema?") clin.empty ``` FIELD Descriptions pulled example below ``` #list of field descriptions for table rel24_clin_MMRF_follow__mol_test clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS') clin_query = Query.from_(clin_table) \ .select('table_name, column_name, description') \ .where(clin_table.table_name=='rel24_clin_MMRF_follow__mol_test') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() pandas.options.display.max_rows for i in range(len(clin)): print(clin['table_name'][i] + '\n') print('\t' + clin['column_name'][i] + '\n') print('\t' + clin['description'][i] + '\n') # check for empty schemas in dataset rel24_clin_MMRF_follow__mol_test clin_table = Table('`isb-project-zero`.GDC_Clinical_Data.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS') clin_query = Query.from_(clin_table) \ .select('table_name, column_name, description') \ .where(clin_table.table_name=='rel24_clin_MMRF_follow__mol_test') \ clin_query_clean = str(clin_query).replace('"', "") clin = client.query(clin_query_clean).to_dataframe() print(clin) ``` ###test 2 - row number verification **2. Look at table row number and size** Do these metrics make sense? ``` %%bigquery --project isb-project-zero SELECT COUNT(case_id) FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow__mol_test` %%bigquery --project isb-project-zero SELECT COUNT(case_id) FROM `isb-project-zero.GDC_Clinical_Data.rel23_clin_MMRF_follow__mol_test` %%bigquery --project isb-project-zero SELECT * FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow__mol_test` ``` ###test 3 - manual verification **3. Scroll through table manually** See if anything stands out - empty columns, etc. The BigQuery table search user interface is useful in for this test run. The test tier points to the isb-etl-open. 
ISB-CGC BigQuery table search [test tier](https://isb-cgc-test.appspot.com/bq_meta_search/). BigQuery console [isb-project-zero](https://console.cloud.google.com/bigquery?project=high-transit-276919&authuser=2&p=isb-project-zero&d=GDC_metadata&t=rel24_clin_MMRF_follow__mol_test&page=table). Run a manual check in the console with the steps mentioned in step 1 Are all the fields labeled? Is there a table description? Do the field labels make sense for all fields? Are the labels correct? ###test 4 - case_gdc_id file metadata table count verification **4. Number of case_id versus BigQuery metadata table** ``` # clinical case_id counts table reuslts below # Query below will display the number of cases presents in this table. clin_table = Table('`isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow__mol_test`') clin_query = Query.from_(clin_table) \ .select(' DISTINCT case_id, count(*) as count') \ .groupby('case_id') clin_query_clean = str(clin_query).replace('"', "") #print(clin_query_clean) clin = client.query(clin_query_clean).to_dataframe() print('number of case from submitter_id = ' + str(len(clin.index))) # GDC file metadata table case_gdc_id count for clinical below %%bigquery --project isb-project-zero SELECT case_gdc_id, program_name FROM `isb-project-zero.GDC_metadata.rel24_caseData` where program_name = 'MMRF' group by case_gdc_id, program_name ``` ###test 5 - duplication verifcation **5. Check for any duplicate rows present in the table** ``` %%bigquery --project isb-project-zero SELECT count(case_id) AS count FROM `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow__mol_test` group by follow__mol_test__molecular_test_id, follow__follow_up_id, case_id, follow__mol_test__biospecimen_type, follow__mol_test__laboratory_test, follow__mol_test__test_result, follow__mol_test__test_units, follow__mol_test__test_value, follow__mol_test__molecular_analysis_method, follow__mol_test__gene_symbol, follow__mol_test__state, follow__mol_test__created_datetime, follow__mol_test__updated_datetime ORDER BY count DESC LIMIT 10 ``` ###test 6 - case_id master clinical data table count verifcation **6. Verify case_id count of table against master rel_clinical_data table** ``` # case_id count from the program MMRF clinical table %%bigquery --project isb-project-zero select distinct case_id, count(case_id) as count from `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF` group by case_id order by count # case_id count from the program MMRF_follow clinical table %%bigquery --project isb-project-zero select distinct case_id, count(case_id) as count from `isb-project-zero.GDC_Clinical_Data.rel24_clin_MMRF_follow__mol_test` group by case_id order by count ```
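As a possible shortcut for test 5 that avoids hand-listing every column in each GROUP BY, the sketch below hashes whole rows with BigQuery's `TO_JSON_STRING` and reports duplicated row patterns for each of the five tables; it is offered as an optional generalization, not a replacement for the checks above.

```
# Optional generic duplicate-row check (sketch) for each rel24 MMRF clinical table
mmrf_tables = ['rel24_clin_MMRF', 'rel24_clin_MMRF_diag__treat', 'rel24_clin_MMRF_fam_hist',
               'rel24_clin_MMRF_follow', 'rel24_clin_MMRF_follow__mol_test']

for name in mmrf_tables:
    sql = f"""
        SELECT TO_JSON_STRING(t) AS row_json, COUNT(*) AS n
        FROM `isb-project-zero.GDC_Clinical_Data.{name}` AS t
        GROUP BY row_json
        HAVING n > 1
        ORDER BY n DESC
        LIMIT 10
    """
    dups = client.query(sql).to_dataframe()
    print(f'{name}: {len(dups)} duplicated row patterns found')
```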
``` #Libraries import warnings warnings.filterwarnings('ignore') import pandas as pd import numpy as np import os import re import json import string import matplotlib.pyplot as plt %matplotlib inline import plotly.express as px import plotly.graph_objects as go from tqdm.autonotebook import tqdm from functools import partial import torch import random from sklearn.model_selection import train_test_split !pip install transformers from transformers import BertTokenizer, BertModel #import spacy gpu_info = !nvidia-smi gpu_info = '\n'.join(gpu_info) if gpu_info.find('failed') >= 0: print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, ') print('and then re-execute this cell.') else: print(gpu_info) print(f'GPU available: {torch.cuda.is_available()}') random.seed(10) print(torch.cuda.is_available()) if torch.cuda.is_available(): device = torch.device("cuda") else: device = torch.device("cpu") print("Using device:", device) ``` ## Vocabulary This is useful only for the decoder; we get the vocab from the complete data ``` df = pd.read_csv("data.csv") df = df.sample(frac=1, random_state=100).reset_index(drop=True) df.head() # df = df.iloc[0:10,:] text = [] for i in range(len(df)): t = df.loc[i][6] text.append((t, df.loc[i][5])) df.head() pad_word = "<pad>" bos_word = "<s>" eos_word = "</s>" unk_word = "<unk>" pad_id = 0 bos_id = 1 eos_id = 2 unk_id = 3 def normalize_sentence(s): s = re.sub(r"([.!?])", r" \1", s) s = re.sub(r"[^a-zA-Z.!?]+", r" ", s) s = re.sub(r"\s+", r" ", s).strip() return s class Vocabulary: def __init__(self): self.word_to_id = {pad_word: pad_id, bos_word: bos_id, eos_word:eos_id, unk_word: unk_id} self.word_count = {} self.id_to_word = {pad_id: pad_word, bos_id: bos_word, eos_id: eos_word, unk_id: unk_word} self.num_words = 4 def get_ids_from_sentence(self, sentence): sentence = normalize_sentence(sentence) sent_ids = [bos_id] + [self.word_to_id[word] if word in self.word_to_id \ else unk_id for word in sentence.split()] + \ [eos_id] return sent_ids def tokenized_sentence(self, sentence): sent_ids = self.get_ids_from_sentence(sentence) return [self.id_to_word[word_id] for word_id in sent_ids] def decode_sentence_from_ids(self, sent_ids): words = list() for i, word_id in enumerate(sent_ids): if word_id in [bos_id, eos_id, pad_id]: # Skip these words continue else: words.append(self.id_to_word[word_id]) return ' '.join(words) def add_words_from_sentence(self, sentence): sentence = normalize_sentence(sentence) for word in sentence.split(): if word not in self.word_to_id: # add this word to the vocabulary self.word_to_id[word] = self.num_words self.id_to_word[self.num_words] = word self.word_count[word] = 1 self.num_words += 1 else: # update the word count self.word_count[word] += 1 vocab = Vocabulary() for src, tgt in text: vocab.add_words_from_sentence(src) vocab.add_words_from_sentence(tgt) print(f"Total words in the vocabulary = {vocab.num_words}") ``` ## Create chunks for each publication ``` # Every publication input will be mapped into a variable numbers of chunks (split by sentence) that are less than chunk_max_len # These can then be batched by encoding strings, then padding them chunk_max_len = 512 publication_ids = df['Id'] dataset_label = df['cleaned_label'] chunked_text = [[]] * len(df.index) # publication id x chunks - left in string format for flexibility in encoding chunk_labels = [[]] * len(df.index) # publication id x chunk - if label in chunk, True else False for i in range(len(df.index)): chunked_text[i] = [] chunk_labels[i] = [] 
chunk = '' for s in df['text'][i].split('.'): # print(s) new_chunk = chunk + s.strip() if len(s)>0 and s[-1]!='.': new_chunk += '. ' if len(new_chunk.split(' ')) > chunk_max_len: # labels_per_chunk[i].append(True if df['dataset_label'][i] in chunk else False) chunk_labels[i].append(1 if df['dataset_label'][i] in chunk else 0) chunked_text[i].append(chunk) chunk = s else: chunk = new_chunk # labels_per_chunk[i].append(True if df['dataset_label'][i] in chunk else False) chunk_labels[i].append(1 if df['dataset_label'][i] in chunk else 0) chunked_text[i].append(chunk) print(len(chunked_text[0]), chunked_text[0]) print(dataset_label[0]) ``` ## Create dataset For each publication, it will return a tensor with all the chunks inside Therefore, each pass of our bi-LSTM will work with one single publication (with all the chunks inside that publication) ``` from transformers import BertModel, BertTokenizerFast bert_model = BertModel.from_pretrained('bert-base-uncased').to(device) bert_model.eval() tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') from torch.nn.utils.rnn import pad_sequence from torch.utils.data import Dataset, DataLoader class ChunkedDataset(Dataset): """ @author: Alexander Rodriguez """ def __init__(self, publication_ids, chunked_text, chunk_labels, dataset_label, device, tokenizer, bert_model): """ Args: chunked_text: list of str, contains all the chunks chunk_labels: list booleans, contain whether or not the label is in the chunks dataset_label: string, same label for all chunks in the publication device: cpu or cuda """ self.publication_ids = publication_ids self.chunked_text = chunked_text self.chunk_labels = chunk_labels self.dataset_label = dataset_label self.tokenizer = tokenizer self.device = device self.bert_model = bert_model def __len__(self): return len(self.publication_ids) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() return {"publication_ids":self.publication_ids[idx], "chunked_text":self.chunked_text[idx], "chunk_labels":self.chunk_labels[idx], "dataset_label":self.dataset_label[idx]} def collate_fn(data): """Creates mini-batch tensors for several publications Return: A dictionary for each chunk (read below) Each training observation will represent one chunk, therefore we have: input_ids: the word ids from the Bert tokenizer tensor shape (max_input_sequence_length,batch_size) input_tensor: the Bert word embeddings for the sequence (chunk) tensor shape (max_input_sequence_length,batch_size,bert_dim) attention_mask: useful for knowing where the sequence ends Each chunk has two labels: chunk_labels: (list of 0/1) whether or not the chunk contains the label output_ids: the ids that have to be predicted for the target sequence tensor shape (max_output_sequence_length,batch_size) Sequences are padded to the maximum length of mini-batch sequences (dynamic padding). 
""" chunked_text = []; chunk_labels = []; dataset_label = [] for publication in data: # for chunk in publication: chunked_text += [chunk for chunk in publication["chunked_text"] ] chunk_labels += [chunk for chunk in publication["chunk_labels"] ] # our dataset_label have to be repeated dataset_label += [publication["dataset_label"] for _ in publication["chunk_labels"] ] with torch.no_grad(): # needed for memory t = tokenizer(chunked_text, padding=True, truncation=True, return_tensors="pt").to(device) outputs = bert_model(**t) bert_input_word_embeddings = outputs[0].permute(1,0,2) del outputs torch.cuda.empty_cache() input_ids = t['input_ids'].permute(1,0) attention_mask = t['attention_mask'] def encode(tgt): tgt_ids = vocab.get_ids_from_sentence(tgt) return tgt_ids # We will pre-tokenize the dataset labels (output) and save in id lists for later use output_ids = [encode(tgt) for tgt in dataset_label] output_ids = [torch.LongTensor(e) for e in output_ids] output_ids = pad_sequence(output_ids,padding_value=pad_id).to(device) # "chunked_text":chunked_text, # "dataset_label":dataset_label, return {"input_ids":input_ids, "chunk_labels":chunk_labels, \ "output_ids":output_ids, "input_tensor":bert_input_word_embeddings, \ 'attention_mask':attention_mask} # do not use, this is only for debugging # data = pd.read_csv("data.csv") # with torch.no_grad(): # t = tokenizer(data['text'].tolist()[0:16], padding=True, truncation=True, return_tensors="pt").to(device) # outputs = bert_model(**t) # encoded_layers = outputs[0] # del outputs # torch.cuda.empty_cache() ``` ## Seq2seq model Uses Bert word embeddings Makes two predictions for each chunk ``` import torch.nn as nn class Seq2seq(nn.Module): def __init__(self, vocab, bert_dim = 300, emb_dim = 300, hidden_dim = 300, num_layers = 2, dropout=0.1): super().__init__() """ @author: Alexander Rodriguez bert_dim: dimension of Bert embeddings emb_dim: dimension of our word embedding (used in decoder) hidden_dim: dimension of our GRU hidden states """ self.bert_dim = bert_dim self.num_words = vocab.num_words self.emb_dim = emb_dim self.hidden_dim = hidden_dim self.num_layers = num_layers # neural layers self.embedding_layer = nn.Linear(1,self.emb_dim) self.encoder = nn.GRU( self.bert_dim,self.hidden_dim,self.num_layers,bidirectional=True,dropout=dropout ) self.linear_hidden = nn.Linear(self.hidden_dim,self.hidden_dim) self.decoder = nn.GRU( self.emb_dim,self.hidden_dim,self.num_layers,bidirectional=False,dropout=dropout ) self.output_layer = nn.Linear(self.hidden_dim,self.num_words) self.classifier = nn.Linear(self.hidden_dim, 1) self.attn_softmax = nn.Softmax(1) def encode(self, input_embeddings, attention_mask): """Encode the source batch using a bidirectional GRU encoder. Args: input_embeddings: Bert embeddings with shape (max_input_sequence_length, batch_size,bert_dim), e.g. torch.Size([512, 16, 768]) attention_mask: attention mask obtained from Bert tokenizer Returns: A tuple with three elements: encoder_output: The output hidden representation of the encoder with shape (max_input_sequence_length, batch_size, hidden_size). Can be obtained by adding the hidden representations of both directions of the encoder bidirectional GRU. encoder_mask: A boolean tensor with shape (max_input_sequence_length, batch_size) indicating which encoder outputs correspond to padding tokens. Its elements should be True at positions corresponding to padding tokens and False elsewhere. 
encoder_hidden: The final hidden states of the bidirectional GRU (after a suitable projection) that will be used to initialize the decoder. This should be a tensor h_n with shape (num_layers, batch_size, hidden_size). Note that the hidden state returned by the bi-GRU cannot be used directly. Its initial dimension is twice the required size because it contains state from two directions. """ batch_size = input_embeddings.shape[1] dtype = torch.float # gru pass encoder_output, encoder_hidden = self.encoder(input_embeddings) # seq_len first # sum embeddings from the two GRUs encoder_output = encoder_output[:,:,:self.hidden_dim] + encoder_output[:,:,self.hidden_dim:] # hidden embedding encoder_hidden = encoder_hidden.view(self.num_layers, 2, batch_size, self.hidden_dim) encoder_hidden = encoder_hidden.sum(1) # sum over bi-directional, keep number of layers encoder_hidden = self.linear_hidden(encoder_hidden) encoder_mask = attention_mask.permute(1,0) return encoder_output, encoder_mask, encoder_hidden def decode(self, decoder_input, last_hidden, encoder_output, encoder_mask, use_classifier=False): """Run the decoder GRU for one decoding step from the last hidden state. Args: decoder_input: An integer tensor with shape (1, batch_size) containing the subword indices for the current decoder input. last_hidden: A pair of tensors h_{t-1} representing the last hidden state of the decoder, each with shape (num_layers, batch_size, hidden_size). For the first decoding step the last_hidden will be encoder's final hidden representation. encoder_output: The output of the encoder with shape (max_src_sequence_length, batch_size, hidden_size). encoder_mask: The output mask from the encoder with shape (max_src_sequence_length, batch_size). Encoder outputs at positions with a True value correspond to padding tokens and should be ignored. use_classifier: (boolean) Whether or not we should classify Returns: A tuple with three elements: logits: A tensor with shape (batch_size, vocab_size) containing unnormalized scores for the next-word predictions at each position. decoder_hidden: tensor h_n with the same shape as last_hidden representing the updated decoder state after processing the decoder input. attention_weights: This will be implemented later in the attention model, but in order to maintain compatible type signatures, we also include it here. This can be None or any other placeholder value. 
""" # shared layer dtype = torch.float input = decoder_input.type(dtype) input = self.embedding_layer(input.permute(1,0).unsqueeze(2)) # attention weights max_src_sequence_length = encoder_output.shape[0] batch_size = encoder_output.shape[1] decoder_output, decoder_hidden = self.decoder(input.permute(1,0,2),last_hidden) # use the decoder output to get attention weights via dot-product attention_weights = torch.empty((batch_size,max_src_sequence_length),device=device,dtype=dtype) # function for batch dot product taken from https://discuss.pytorch.org/t/dot-product-batch-wise/9746/12 def bdot(a, b): B = a.shape[0] S = a.shape[1] return torch.bmm(a.view(B, 1, S), b.view(B, S, 1)).reshape(-1) for i in range(max_src_sequence_length): attention_weights[:,i] = bdot(decoder_output.squeeze(0),encoder_output[i,:,:]) # softmax attention_weights = self.attn_softmax(attention_weights) # get context vector context = torch.mul(encoder_output.permute(1,0,2), attention_weights.unsqueeze(2)) context = context.sum(1) decoder_output = decoder_output.squeeze(0) + context # gru pass logits = self.output_layer(decoder_output) # use the attention context as input to the classifier along with # hidden states from encoder if use_classifier: out_classifier = self.classifier(last_hidden[0] + last_hidden[1] + context) else: out_classifier = torch.tensor(0.).to(device) return logits, decoder_hidden, attention_weights, out_classifier def compute_loss(self, input_tensor, attention_mask, target_seq, target_binary): """Run the model on the source and compute the loss on the target. Args: input_tensor & attention_mask: Coming from Bert, directly go to encoder See encoder documentation for details target_seq: An integer tensor with shape (max_target_sequence_length, batch_size) containing subword indices for the target sentences. target_binary: Binary indicator for the chunk, indicates if the label is in that chunk (it's a list) NOTE: this is used as a mask for the sequence loss Returns: A scalar float tensor representing cross-entropy loss on the current batch divided by the number of target tokens in the batch. Many of the target tokens will be pad tokens. You should mask the loss from these tokens using appropriate mask on the target tokens loss. 
""" # loss criterion, ignoring pad id tokens criterion = nn.CrossEntropyLoss(ignore_index=pad_id,reduction='none') criterion_classification = nn.BCEWithLogitsLoss(reduction='sum') # call encoder encoder_output, encoder_mask, encoder_hidden = self.encode(input_tensor, attention_mask) # decoder max_target_sequence_length = target_seq.shape[0] last_hidden = encoder_hidden total_loss = torch.tensor(0.).to(device) target_binary = torch.tensor(target_binary,dtype=torch.float).to(device) for i in range(max_target_sequence_length-1): decoder_input = target_seq[[i],] # do a forward pass over classifier only for the first use_classifier = True if i==0 else False logits, decoder_hidden, attention_weights, out_classifier = self.decode(decoder_input, last_hidden, encoder_output, encoder_mask, use_classifier) # target_binary serves as a mask for the loss # we only care about the predicted sequence when we should total_loss += (criterion(logits,target_seq[i+1,]) * target_binary).sum() # get classification loss only for the first one (which is where out_classifier is meaningful) if use_classifier: class_loss = criterion_classification(out_classifier.view(-1),target_binary) # now we have to make last_hidden to be hidden embedding of gru last_hidden = decoder_hidden # denominator of loss total_target_tokens = torch.sum(target_seq != pad_id).cpu() return total_loss/total_target_tokens + class_loss import tqdm def train(model, data_loader, num_epochs, model_file, learning_rate=0.0001): """Train the model for given number of epochs and save the trained model in the final model_file. """ decoder_learning_ratio = 5.0 encoder_parameter_names = ['embedding_layer','encoder','linear_hidden'] encoder_named_params = list(filter(lambda kv: any(key in kv[0] for key in encoder_parameter_names), model.named_parameters())) decoder_named_params = list(filter(lambda kv: not any(key in kv[0] for key in encoder_parameter_names), model.named_parameters())) encoder_params = [e[1] for e in encoder_named_params] decoder_params = [e[1] for e in decoder_named_params] optimizer = torch.optim.AdamW([{'params': encoder_params}, {'params': decoder_params, 'lr': learning_rate * decoder_learning_ratio}], lr=learning_rate) clip = 50.0 for epoch in tqdm.notebook.trange(num_epochs, desc="training", unit="epoch"): # print(f"Total training instances = {len(train_dataset)}") # print(f"train_data_loader = {len(train_data_loader)} {1180 > len(train_data_loader)/20}") with tqdm.notebook.tqdm( data_loader, desc="epoch {}".format(epoch + 1), unit="batch", total=len(data_loader)) as batch_iterator: model.train() total_loss = 0.0 for i, batch_data in enumerate(batch_iterator, start=1): input_tensor = batch_data["input_tensor"] attention_mask = batch_data["attention_mask"] output_ids = batch_data["output_ids"] target_binary = batch_data["chunk_labels"] optimizer.zero_grad() loss = model.compute_loss(input_tensor, attention_mask, output_ids,target_binary) total_loss += loss.item() loss.backward() # Gradient clipping before taking the step _ = nn.utils.clip_grad_norm_(model.parameters(), clip) optimizer.step() batch_iterator.set_postfix(mean_loss=total_loss / i, current_loss=loss.item()) # Save the model after training torch.save(model.state_dict(), model_file) # Create the DataLoader for all publications dataset = ChunkedDataset(publication_ids[0:2000], chunked_text[0:2000], chunk_labels[0:2000], dataset_label[0:2000], device, tokenizer, bert_model) batch_size = 4 # this means it's 4 publications per batch ---too large may not fit in GPU memory 
data_loader = DataLoader(dataset=dataset, batch_size=batch_size, shuffle=True, collate_fn=collate_fn) # You are welcome to adjust these parameters based on your model implementation. num_epochs = 10 model = Seq2seq(vocab,bert_dim=768,emb_dim=256,hidden_dim=256,num_layers=2).to(device) train(model, data_loader, num_epochs, "bert_word_seq2seq_model_2.pt") # Download the trained model to local for future use x = next(iter(data_loader)) print(x["output_ids"]) ``` ## Evaluation This come is from Alex Wang, I haven't checked it. Load model ``` model = Seq2seq(vocab,bert_dim=768,emb_dim=256,hidden_dim=256,num_layers=2).to(device) model.load_state_dict(torch.load("bert_word_seq2seq_model_2.pt")) print(chunked_text[0]) print(sent) def predict_greedy(model, sentence, max_length=100): """Make predictions for the given input using greedy inference. Args: model: A sequence-to-sequence model. sentence: A input string. max_length: The maximum length at which to truncate outputs in order to avoid non-terminating inference. Returns: Model's predicted greedy response for the input, represented as string. """ # You should make only one call to model.encode() at the start of the function, # and make only one call to model.decode() per inference step. with torch.no_grad(): # needed for memory t = tokenizer(sentence, padding=True, truncation=True, return_tensors="pt").to(device) outputs = bert_model(**t) bert_input_word_embeddings = outputs[0].permute(1,0,2) del outputs torch.cuda.empty_cache() input_ids = t['input_ids'].permute(1,0) attention_mask = t['attention_mask'] model.eval() model.encode(bert_input_word_embeddings,attention_mask) encoder_output, encoder_mask, encoder_hidden = model.encode(bert_input_word_embeddings, attention_mask) last_hidden = encoder_hidden start = bos_id sent = [start] i = 0 while start != eos_id and i < 100: use_classifier = True if i==0 else False start = torch.unsqueeze(torch.tensor(start).cuda(), 0) logits, decoder_hidden, attention_weights, out_classifier = model.decode(torch.unsqueeze(torch.tensor(start).cuda(), 0), last_hidden, encoder_output, encoder_mask, use_classifier) start = torch.argmax(logits[0], 0) last_hidden = decoder_hidden sent.append(start.item()) i += 1 if use_classifier: if out_classifier < -1: return False sent = vocab.decode_sentence_from_ids(sent) return sent #predictions = [] #for i in range(100): # temp = [] # for j in range(len(chunked_text[i])): # a = predict_greedy(model, chunked_text[i][j]) # temp.append(a) # predictions.append(temp) # print(dataset_label[i]) # print(temp) score = 0 def jaccard(str1, str2): a = set(str1.lower().split()) b = set(str2.lower().split()) c = a.intersection(b) return float(len(c)) / (len(a) + len(b) - len(c)) predictions[40] def predict_beam(model, sentence, k=3, max_length=100, thresh=-9999): """Make predictions for the given inputs using beam search. Args: model: A sequence-to-sequence model. sentence: An input sentence, represented as string. k: The size of the beam. max_length: The maximum length at which to truncate outputs in order to avoid non-terminating inference. Returns: A list of k beam predictions. Each element in the list should be a string corresponding to one of the top k predictions for the corresponding input, sorted in descending order by its final score. """ # Implementation tip: once an eos_token has been generated for any beam, # remove its subsequent predictions from that beam by adding a small negative # number like -1e9 to the appropriate logits. 
This will ensure that the # candidates are removed from the beam, as its probability will be very close # to 0. Using this method, uou will be able to reuse the beam of an already # finished candidate # Implementation tip: while you are encouraged to keep your tensor dimensions # constant for simplicity (aside from the sequence length), some special care # will need to be taken on the first iteration to ensure that your beam # doesn't fill up with k identical copies of the same candidate. # You are welcome to tweak alpha alpha = 0.9 with torch.no_grad(): # needed for memory t = tokenizer(sentence, padding=True, truncation=True, return_tensors="pt").to(device) outputs = bert_model(**t) bert_input_word_embeddings = outputs[0].permute(1,0,2) del outputs torch.cuda.empty_cache() input_ids = t['input_ids'].permute(1,0) attention_mask = t['attention_mask'] model.eval() model.encode(bert_input_word_embeddings,attention_mask) encoder_output, encoder_mask, encoder_hidden = model.encode(bert_input_word_embeddings, attention_mask) last_hidden = encoder_hidden start = bos_id sent = [start] i = 0 start = bos_id beams = [] start = torch.unsqueeze(torch.tensor(start).cuda(), 0) logits, decoder_hidden, attention_weights, out_classifier = model.decode(torch.unsqueeze(torch.tensor(start).cuda(), 0), last_hidden, encoder_output, encoder_mask, 1) if out_classifier < -2: return False out = torch.log_softmax(logits[0], 0) values, start = torch.topk(out, k, 0) for i in range(len(values)): # Each beam contains the log probs at its first index and the hidden states at its last index beams.append([values[i], start[i].item(), decoder_hidden]) generation = [] i = 0 while i < k: curr = [] for j in beams: start = torch.unsqueeze(torch.tensor(j[-2]).cuda(), 0) logits, decoder_hidden, attention_weights, out_classifier = model.decode(torch.unsqueeze(torch.tensor(start).cuda(), 0), j[-1], encoder_output, encoder_mask, 0) out = torch.log_softmax(logits[0], 0) values, start = torch.topk(out, k, 0) for z in range(len(values)): temp = j.copy() temp[0] = values[z] + temp[0] temp.insert(-1, start[z].item()) temp[-1] = decoder_hidden curr.append(temp) curr = sorted(curr,reverse=True, key=lambda x: x[0]) curr = curr[0:k - i] beams = [] for j in curr: if j[-2] == eos_id or len(j) > 20: generation.append(j[:-1]) i +=1 else: beams.append(j) final = [] generation = sorted(generation, reverse=True, key=lambda x: x[0]/(len(x)-1)**alpha) #for i in generation: # if i[0].item() > thresh: final.append(vocab.decode_sentence_from_ids(generation[0][1:]).lower()) return final predictions = [] for i in range(2000): temp = [] for j in chunked_text[i]: x = predict_beam(model, j) if x: temp.append(x[0]) predictions.append(temp) print(len(predictions)) score = 0 for i in range(2000): for j in predictions[i]: found = False if jaccard(df.loc[i][5], j) > 0.5: score += 1 found = True break print("max accuracy") print(score/2000) print(df.loc[5][5]) testing = {} for i in range(0, len(predictions)): if publication_ids[i] not in testing.keys(): pred = predictions[i] testing[publication_ids[i]] = (pred, [df.loc[i][5]]) else: testing[publication_ids[i]][1].append(df.loc[i][5]) print(len(testing.keys())) tp = 0 fp = 0 fn = 0 for i in testing.values(): prediction = set(i[0]) cop = prediction.copy() true_pred = i[1].copy() check = False #check exact match first for j in prediction: if j in true_pred: tp += 1 true_pred.remove(j) cop.remove(j) #then check rest for jaccard score for j in cop: found = False removal = 0 for k in true_pred: if jaccard(j, k) >= 0.5: 
found = True removal = k break if found: tp += 1 true_pred.remove(removal) else: fp += 1 fn += len(true_pred) ``` TRAINING PERFORMANCE ``` print("training performance") print("micro F score") print(fp) print(fn) print(tp/(tp + 1/2*(fp+fn))) print("accuracy") print(tp/(tp+fn)) print(len(df)) predictions = [] for i in range(2000, 3000): temp = [] for j in chunked_text[i]: x = predict_beam(model, j) if x: temp.append(x[0]) predictions.append(temp) print(predictions) ``` Checking Classifer Accuracy ``` len(chunked_text) count = 0 for i in predictions: if not i: count += 1 print(count) testing = {} for i in range(0, len(predictions)): if publication_ids[2000+i] not in testing.keys(): pred = predictions[i] print(pred) print(df.loc[2000+i][5]) testing[publication_ids[2000+i]] = (pred, [df.loc[2000+i][5]]) else: testing[publication_ids[2000+i]][1].append(df.loc[2000+i][5]) tp = 0 fp = 0 fn = 0 for i in testing.values(): prediction = i[0] cop = set(prediction.copy()) true_pred = i[1].copy() check = False #check exact match first for j in prediction: if j in true_pred: tp += 1 true_pred.remove(j) cop.remove(j) #then check rest for jaccard score for j in cop: found = False removal = 0 for k in true_pred: if jaccard(j, k) >= 0.5: found = True removal = k break if found: tp += 1 true_pred.remove(removal) else: fp += 1 fn += len(true_pred) ``` Testing Performance ``` print("testing performance") print("micro F score") print(fp) print(fn) print(tp/(tp + 1/2*(fp+fn))) print("accuracy") print(tp/(tp+fn)) ```
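
The micro F score printed above comes straight from the aggregated `tp`/`fp`/`fn` counts, and the "accuracy" line is `tp/(tp+fn)`, i.e. recall over the true labels. As a small self-contained sanity check (a sketch added here, not part of the original notebook), the same quantities can be computed with helpers like these:

```
def micro_f1(tp, fp, fn):
    """Micro-averaged F1 from aggregate true-positive / false-positive / false-negative counts."""
    denom = tp + 0.5 * (fp + fn)
    return tp / denom if denom > 0 else 0.0

def recall(tp, fn):
    """tp / (tp + fn): the quantity the notebook prints under "accuracy"."""
    return tp / (tp + fn) if (tp + fn) > 0 else 0.0

# Hypothetical counts, for illustration only:
print(micro_f1(tp=120, fp=30, fn=50))  # 0.75
print(recall(tp=120, fn=50))           # ~0.706
```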
``` #!/usr/bin/env python3 # -*- coding: utf-8 -*- #@author: Kelle Clark, Andrew Florian, Xinyu Xiong #Created on Tue Feb 4 10:05:49 2020 #CSCI 6040 Project 1 Text Generation #PHASE 3: Smoothing the Language Models for the Corpus #Various folders of .txt files were created in the CSCI6040 Team Project 1 folder #to be used for testing our application during develpment #/Short Test Data # has 3 .txt files each about 4KB #/Med test Data # has 2 .txt files one of 119KB (Tragedy of Macbeth) and 6.5MB (big) #/Grande test Data (the 18-document-gutenburg-copus but with 19? files cleaned using the #boilerplate.ipynb -author Andrew Florian and resulting files #shared on Canvas in Project 1 discussion forum) # has 19 .txt files with a total of 11.8MB #we needed the help of a few packages...import all those at once import langid import itertools import mmap import nltk import numpy import os import pandas import random import re import string import sys from collections import Counter from math import log10 from matplotlib.pyplot import yscale, xscale, title, plot from nltk.tokenize import word_tokenize, sent_tokenize from nltk.tokenize import RegexpTokenizer from nltk.corpus import stopwords #from keras.models import Sequential #from keras.layers import Dense, Dropout, LSTM #from keras.utils import np_utils #from keras.callbacks import ModelCheckpoint #**** from phase 1 reading in the tokenized corpus def tokensByFiles(folderpath): textfiles = [f for f in os.listdir(folderpath) if '.txt' in f] tokenfilelist =[] for f in textfiles: rawcorpus = [] substring = '' file = open(folderpath+"/"+f,'rt', encoding='utf-8', errors='replace') print (f" Reading from: '{f}' . . .") rawcorpus.append(file.read() .replace('. . .','.') .replace('!',' .') # substitue space period for ! mark to have a simple token to end a sentence .replace('"',' ') .replace('#',' ') .replace('$',' ') .replace('%',' ') .replace('&',' ') .replace('\\',' ') .replace('\' ',' ') # only remove ' if it has a space before or after meaning it is used as a quote .replace(' \'',' ') # but leave it in if it is inside a word as a contraction .replace('\- ',' ') # only remove - if it has a space before or after meaning it is to be left in the .replace(' \-',' ') # word e.g. C-A-T .replace('(',' ') .replace('\n', ' ') .replace(')',' ') .replace('*',' ') .replace('+',' ') .replace(',',' ') .replace('. ',' ') .replace('/',' ') .replace(':',' ') .replace(';',' ') .replace('<',' ') .replace('=',' ') .replace('>',' ') .replace('?',' .') # substitue space period for ? 
mark to have a simple token to end a sentence .replace('@',' ') .replace('[',' ') .replace('\\',' ') .replace(']',' ') .replace('^',' ') .replace('_',' ') # remove all unwanted punctuation .replace('`',' ') .replace('{',' ') .replace('|',' ') .replace('}',' ') .replace('~',' ') .replace('0',' ') # remove all digits .replace('1',' ') .replace('2',' ') .replace('3',' ') .replace('4',' ') .replace('5',' ') .replace('6',' ') .replace('7',' ') .replace('8',' ') .replace('9',' ')) file.close() substring = substring + rawcorpus[0] #print(f"the language of file "+f+" is {nltk.language(substring)}") print(f"the estimated language of the file {f} is {langid.classify(substring)}") #tokens=substring.split() tokens = word_tokenize(substring) tokens = [w.lower() for w in tokens] tokenfilelist.append(tokens) return tokenfilelist #we have the different files tokenized, in the variable tokenfilelist #method below creates one corpus from the string of tokens in each file def createOneCorpus(inlist): temp = " " for i in range(len(inlist)): for w in inlist[i]: temp = temp + w + " " return temp def printcorpus(instring): if len(instring) > 500: print(f"The first & last 50 tokens of this corpus are:\n {instring[:50]} \t ... {instring[-50:]}\n") else: print(f"The tokens in the corpus are: \n {instring} \n") #ngrams returns a dictionary # enumerate ngrams code copied from Eisentein and CSCI6040 ipynb # returns the ngram from instring and n def ngrams(instring, n): outset = {} for i in range(len(instring) - n + 1): g = ' '.join(instring[i:i+n]) outset.setdefault(g, 0) outset[g] += 1 return outset #**** from phase 1 reading in the .txt files and creating the tokenized corpus pathname = 'Test Data/short test data' #pathname = 'your choice of path here' #read in the corups file by file tokenfilelist = tokensByFiles(pathname) #print(tokenfilelist) tokencorpus = createOneCorpus(tokenfilelist) #printcorpus(tokencorpus) tokens = tokencorpus.split() #**** from phase 2 creating the four different language models using ngrams: #unigram prob. model using prob(x) = (frequency of x in corpus)/(total in corpus) def createUnigramModel(instring): n = 1 outset = word_tokenize(instring) totalpossible = len(outset) sumofprob = 0 anoutcome = ngrams(outset,n) probmodel = anoutcome for keyword in anoutcome: probmodel[keyword] = (anoutcome[keyword]) / totalpossible sumofprob = sumofprob + probmodel[keyword] print(f"The sum of all the probabiities of unigrams needs to be 1 and it is {sumofprob}\n") return probmodel #create the unigram model unigrammodel = createUnigramModel(tokencorpus) pandas.set_option("display.max_rows", 10) unidataframe = pandas.DataFrame.from_dict(unigrammodel, orient = 'index', columns = ['prob.']) print('Number of rows in Unigram Prob. Model : ', len(unidataframe.index)) print(unidataframe) #Attempt to try and plot the unigram language model using first a Counter object COUNT = Counter(unigrammodel) greatestprob = 0 bigword = '' for w in COUNT.keys(): if COUNT[w] >= greatestprob: bigword = w greatestprob = COUNT[w] print(f"the unigram of greatest freq is: {bigword} \n") M = COUNT[bigword] yscale('log'); xscale('log'); title('Frequency of n-th most frequent word and 1/n line.') ##RAN INTO SOME ISSUES GETTING THE GRAPH TO PRINT THE RANK ORDER OF THE WORDS... ##BUT WHAT I THINK THIS IS SHOWING IS THAT IF WE WANT TO SMOOTH THE PROB. MODEL FOR ##UNIGRAMS, WE COULD USE PROB. 
M/i for the ith rankend term and M is the frequency of the ##MOST COMMON UNIGRAM plot([c for (w,c) in COUNT.most_common()]) plot([M/i for i in range(1, len(COUNT)+1)]); #method to create the bigram model def createBigramModel(instring): n = 2 outset = word_tokenize(instring) totalpossible = len(outset) anoutcome = ngrams(outset,n) previousoutcome = ngrams(outset,n-1) sumofprob = 0 probmodel = anoutcome for keyword in anoutcome: listword = keyword.split() prob1 = (previousoutcome[listword[0]]) / totalpossible probmodel[keyword] = prob1 * ((probmodel[keyword]) / (previousoutcome[listword[0]])) sumofprob = sumofprob + probmodel[keyword] print(f"The sum of all the probabiities for bigrams needs to be 1 and it is {sumofprob}") return probmodel #create the bigram model bigrammodel = createBigramModel(tokencorpus) pandas.set_option("display.max_rows", 10) bidataframe = pandas.DataFrame.from_dict(bigrammodel, orient = 'index', columns = ['prob.']) print('Number of rows in Bigram Prob. Model : ', len(bidataframe.index)) print(bidataframe) #Attempt to try and plot the bigram language model using first a Counter object COUNT2 = Counter(bigrammodel) greatestprob2 = 0 bigword2 = '' for w in COUNT2.keys(): if COUNT2[w] >= greatestprob2: bigword2 = w greatestprob2 = COUNT[w] print(f"the bigram of greatest freq is: {bigword2} \n") M2 = COUNT2[bigword2] yscale('log'); xscale('log'); title('Frequency of n-th most frequent 2-itemset and 1/n line.') ##RAN INTO SOME ISSUES GETTING THE GRAPH TO PRINT THE RANK ORDER OF THE WORDS... ##BUT WHAT I THINK THIS IS SHOWING IS THAT IF WE WANT TO SMOOTH THE PROB. MODEL FOR ##BIGRAMS, WE COULD USE PROB. M/i for the ith rankend term and M is the frequency of the ##MOST COMMON BIGRAM plot([c for (w,c) in COUNT2.most_common()]) plot([(M2)/i for i in range(1, len(COUNT2)+1)]); #create the trigram model def createTrigramModel(instring): n = 3 outset = word_tokenize(instring) totalpossible = len(outset) anoutcome = ngrams(outset,3) probmodel = anoutcome sumofprob = 0 previous1outcome = ngrams(outset,n-2) previous2outcome = ngrams(outset,n-1) for keyword in anoutcome: listword = keyword.split() wordofinterest = listword[0] prob1 = previous1outcome[wordofinterest]/ totalpossible wordofinterest = listword[0] + " " + listword[1] prob2 = previous2outcome[wordofinterest]/previous1outcome[listword[0]] wordofinterest = keyword probmodel[keyword] = prob1 * prob2 * anoutcome[wordofinterest]/ previous2outcome[listword[0]+ " " + listword[1]] sumofprob = sumofprob + probmodel[keyword] print(f"The sum of all the probabiities for trigrams needs to be 1 and it is {sumofprob}") return probmodel #create the trigram model trigrammodel = createTrigramModel(tokencorpus) pandas.set_option("display.max_rows", 10) tridataframe = pandas.DataFrame.from_dict(trigrammodel, orient = 'index', columns = ['prob.']) print('Number of rows in Trigram Prob. Model : ', len(tridataframe.index)) print(tridataframe) #Attempt to plot the trigram language model using first a Counter object COUNT3 = Counter(trigrammodel) greatestprob3 = 0 bigword3 = '' for w in COUNT3.keys(): if COUNT3[w] >= greatestprob3: bigword3 = w greatestprob3 = COUNT3[w] print(f"the trigram of greatest freq is: {bigword3} \n") M3 = COUNT3[bigword3] yscale('log'); xscale('log'); title('Frequency of n-th most frequent 3-itemset and 1/n line.') ##RAN INTO SOME ISSUES GETTING THE GRAPH TO PRINT THE RANK ORDER OF THE WORDS... ##BUT WHAT I THINK THIS IS SHOWING IS THAT IF WE WANT TO SMOOTH THE PROB. MODEL FOR ##TRIGRAMS, WE COULD USE PROB. 
M3/i for the ith rankend term and M3 is the frequency of the ##MOST COMMON TRIGRAM plot([c for (w,c) in COUNT3.most_common()]) plot([(M3)/i for i in range(1, len(COUNT3)+1)]); #create the quadgram model def createQuadgramModel(instring): n = 4 outset = word_tokenize(instring) totalpossible = len(outset) anoutcome = ngrams(outset,n) probmodel = anoutcome sumofprob = 0 previous1outcome = ngrams(outset,n-3) previous2outcome = ngrams(outset,n-2) previous3outcome = ngrams(outset,n-1) for keyword in anoutcome: listword = keyword.split() wordofinterest = listword[0] prob1 = previous1outcome[wordofinterest]/ totalpossible wordofinterest = listword[0] + " " + listword[1] prob2 = previous2outcome[wordofinterest]/previous1outcome[listword[0]] wordofinterest = listword[0]+ " " + listword[1] + " " + listword[2] prob3 = previous3outcome[wordofinterest]/previous2outcome[listword[0] + " " + listword[1]] wordofinterest = keyword probmodel[keyword] = prob1 * prob2 * prob3 * anoutcome[wordofinterest]/ previous3outcome[listword[0]+ " " + listword[1] + " "+ listword[2]] sumofprob = sumofprob + probmodel[keyword] print(f"The sum of all the probabiities of quadgrams needs to be 1 and it is {sumofprob}") return probmodel #create the quadgram model quadgrammodel = createQuadgramModel(tokencorpus) pandas.set_option("display.max_rows", 10) quaddataframe = pandas.DataFrame.from_dict(quadgrammodel, orient = 'index', columns = ['prob.']) print('Number of rows in Quadgram Prob. Model : ', len(quaddataframe.index)) print(quaddataframe) #Attempt to plot the trigram language model using first a Counter object COUNT4 = Counter(quadgrammodel) greatestprob4 = 0 bigword4 = '' for w in COUNT4.keys(): if COUNT4[w] >= greatestprob4: bigword4 = w greatestprob4 = COUNT4[w] print(f"the quadgram of greatest freq is: {bigword4} \n") M4 = COUNT4[bigword4] yscale('log'); xscale('log'); title('Frequency of n-th most frequent 4-itemset and 1/n line.') ##RAN INTO SOME ISSUES GETTING THE GRAPH TO PRINT THE RANK ORDER OF THE WORDS... ##BUT WHAT I THINK THIS IS SHOWING IS THAT IF WE WANT TO SMOOTH THE PROB. MODEL FOR ##QUADGRAMS, WE COULD USE PROB. M4/i for the ith rankend term and M3 is the frequency of the ##MOST COMMON TRIGRAM plot([c for (w,c) in COUNT4.most_common()]) plot([(M4)/i for i in range(1, len(COUNT4)+1)]); ####****KEPT IN PHASE 4 TO PROVIDE COMPARISON IN EVALUATION TEXT GENERATION IN PHASE 5.. ####****from phase 3 were we create new models of the language using the linear smoothing and weightings lambda ####****the linear smoothing quadgram model has minor error in indexing and should be updated. 
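# --- Added illustrative note (not part of the original project code) ---
# Linear-interpolation smoothing mixes the k-gram estimates for k = 1..n using
# non-negative weights lambda_1..lambda_n that sum to 1; for a trigram model:
#
#     P_smooth(w3 | w1 w2) = l1*P(w3) + l2*P(w3 | w2) + l3*P(w3 | w1 w2)
#
# With hypothetical weights l1, l2, l3 = 0.1, 0.3, 0.6 and component estimates
# 0.01, 0.05, 0.20 the smoothed probability is 0.1*0.01 + 0.3*0.05 + 0.6*0.20 = 0.136.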
#smoothing the ngramModel using a linear function of the kgrams for k = 1 to n def ngramModel_LinearSmooth(inlist, n): #generate ngrams total = len(inlist) anoutcome = [] for i in range(1,n+1): anoutcome.append(ngrams(inlist, i)) #print("outcome: ") #print(anoutcome[i-1]) #generate lamd coefficients for terms in model k = 1 lamd = [] last_lamd = 0 for i in range(1,n): lamd.append(random.uniform(0,k)) k = k-lamd[i -1] lamd.append(k) print("lamd: ", lamd) #generate smooth model smooth_model = {} for keyword in anoutcome[n-1]: grams = keyword.split(' ') #print("grams:") #print(grams) smooth_model.setdefault(keyword, lamd[0]*anoutcome[0][grams[0]]/total) for i in range(1,len(grams) - 2): sub_string = ' '.join(grams[0:i]) sub_sub_string = ' '.join(input[0:i -1]) # print(sub_string) smooth_model[keyword] = smooth_model[keyword] + lamd[i] * (anoutcome[i][sub_string]/anoutcome[i-1][keyword]) #print(keyword + ":") #print(smooth_model[keyword]) #print("smooth_model:") #print(smooth_model) return smooth_model linearsmoothunimodel = ngramModel_LinearSmooth(tokens, 1) pandas.set_option("display.max_rows", 10) linearsmoothunidataframe = pandas.DataFrame.from_dict(linearsmoothunimodel, orient = 'index', columns = ['prob.']) print('Number of rows in Linear Smoothed Unigram Prob. Model : ', len(linearsmoothunidataframe.index)) print(linearsmoothunidataframe) #Attempt to plot the unigram language model using first a Counter object COUNTLSMOOTH1 = Counter(linearsmoothunimodel) greatestlinearsmoothprob1 = 0 biglinearsmoothword1 = '' for w in COUNTLSMOOTH1.keys(): if COUNTLSMOOTH1[w] >= greatestlinearsmoothprob1: biglinearsmoothword1 = w greatestlinearsmoothprob1 = COUNTLSMOOTH1[w] print(f"the unigram of greatest freq in the smoothed unigram model is: {biglinearsmoothword1} \n") MLS1 = COUNTLSMOOTH1[biglinearsmoothword1] yscale('log'); xscale('log'); title('Frequency of n-th most frequent 1-itemset in linear smoothed model and 1/n line.') plot([c for (w,c) in COUNTLSMOOTH1.most_common()]) plot([(MLS1)/i for i in range(1, len(COUNTLSMOOTH1)+1)]); linearsmoothbimodel = ngramModel_LinearSmooth(tokens, 2) pandas.set_option("display.max_rows", 10) linearsmoothbidataframe = pandas.DataFrame.from_dict(linearsmoothbimodel, orient = 'index', columns = ['prob.']) print('Number of rows in Linear Smoothed Bigram Prob. Model : ', len(linearsmoothbidataframe.index)) print(linearsmoothbidataframe) #Attempt to plot the bigram language model using first a Counter object COUNTLSMOOTH2 = Counter(linearsmoothbimodel) greatestlinearsmoothprob2 = 0 biglinearsmoothword2 = '' for w in COUNTLSMOOTH2.keys(): if COUNTLSMOOTH2[w] >= greatestlinearsmoothprob2: biglinearsmoothword2 = w greatestlinearsmoothprob2 = COUNTLSMOOTH2[w] print(f"the bigram of greatest freq in the linear smoothed bigram model is: {biglinearsmoothword2} \n") MLS2 = COUNTLSMOOTH2[biglinearsmoothword2] yscale('log'); xscale('log'); title('Frequency of n-th most frequent 2-itemset in linear smoothed model and 1/n line.') plot([c for (w,c) in COUNTLSMOOTH2.most_common()]) plot([(MLS2)/i for i in range(1, len(COUNTLSMOOTH1)+1)]); linearsmoothtrimodel = ngramModel_LinearSmooth(tokens, 3) pandas.set_option("display.max_rows", 10) linearsmoothtridataframe = pandas.DataFrame.from_dict(linearsmoothtrimodel, orient = 'index', columns = ['prob.']) print('Number of rows in Linear Smoothed Trigram Prob. 
Model : ', len(linearsmoothtridataframe.index)) print(linearsmoothtridataframe) #Attempt to plot the trigram language model using first a Counter object COUNTLSMOOTH3 = Counter(linearsmoothtrimodel) greatestlinearsmoothprob3 = 0 biglinearsmoothword3 = '' for w in COUNTLSMOOTH3.keys(): if COUNTLSMOOTH3[w] >= greatestlinearsmoothprob3: biglinearsmoothword3 = w greatestlinearsmoothprob3 = COUNTLSMOOTH3[w] print(f"the trigram of greatest freq in the smoothed trigram model is: {biglinearsmoothword3} \n") MLS3 = COUNTLSMOOTH3[biglinearsmoothword3] yscale('log'); xscale('log'); title('Frequency of n-th most frequent 3-itemset in linear smoothed model and 1/n line.') plot([c for (w,c) in COUNTLSMOOTH3.most_common()]) plot([(MLS3)/i for i in range(1, len(COUNTLSMOOTH3)+1)]); #linearsmoothquadmodel = ngramModel_LinearSmooth(tokens, 4) #pandas.set_option("display.max_rows", 10) #linearsmoothquaddf = pandas.DataFrame.from_dict(linearsmoothquadmodel, orient = 'index', columns = ['prob.']) #print('Number of rows in Linear Smoothed Quadgram Prob. Model : ', len(linearsmoothquaddf.index)) #print(linearsmoothquaddf) ##Attempt to plot the quadgram language model using first a Counter object #COUNTLSMOOTH4 = Counter(linearsmoothquadmodel) #greatestlinearsmoothprob4 = 0 #biglinearsmoothword4 = '' #for w in COUNTLSMOOTH4.keys(): # if COUNTLSMOOTH4[w] >= greatestlinearsmoothprob4: # biglinearsmoothword4 = w # greatestlinearsmoothprob4 = COUNTLSMOOTH4[w] #print(f"the quadgram of greatest freq in the smoothed quadgram model is: {biglinearsmoothword4} \n") #MLS4 = COUNTLSMOOTH4[biglinearsmoothword4] #yscale('log'); xscale('log'); title('Frequency of n-th most frequent 4-itemset in linear smoothed model and 1/n line.') #plot([c for (w,c) in COUNTLSMOOTH4.most_common()]) #plot([(MLS4)/i for i in range(1, len(COUNTLSMOOTH4)+1)]); ####****from phase 3, the next cell below uses in the smoothing of the language models with Laplace... #In case we want to take into consideration of the file size when smoothing #the language models... we created a Counter object for each file to seperate #the unigrams, bigrams, trigrams and quadgrams in each file and their fruency in the file... 
#the createListDoc_Foo_Counters below take in a list of strings, one fore each incoming file, which we #created when we read in the files ....the smoothing in the Laplace smoothing below do not weight #the files by size but do use these counters to tally up the total freqeuencies of ngrams and token count def createListDocUniCounter(inlist): docfreqlist = [] for i in range(len(inlist)): counter = Counter(newngram(inlist[i],1)) docfreqlist.append(counter) return docfreqlist dfforuniperfile = createListDocUniCounter(tokenfilelist) firstunifile = dfforuniperfile[0] #print(dfforuniperfile) #print(firstunifile) def createListDocBiCounter(inlist): df = [] for i in range(len(inlist)): #words = re.findall("\w+",inlist[i]) counter = Counter(newngram(inlist[i],2)) df.append(counter) return df dfforbiperfile = createListDocBiCounter(tokenfilelist) firstbifile = dfforbiperfile[0] #print(firstbifile) #print(dfforbiperfile) def createListDocTriCounter(inlist): df = [] for i in range(len(inlist)): #words = re.findall("\w+",inlist[i]) counter = Counter(newngram(inlist[i],3)) df.append(counter) return df dffortriperfile = createListDocTriCounter(tokenfilelist) firsttrifile = dffortriperfile[0] #print(firsttrifile) #print(dffortriperfile) def createListDocQuadCounter(inlist): df = [] for i in range(len(inlist)): #words = re.findall("\w+",inlist[i]) counter = Counter(newngram(inlist[i],4)) df.append(counter) return df dfforquadperfile = createListDocQuadCounter(tokenfilelist) firstquadfile = dfforquadperfile[0] #print(firstquadfile) #print(dfforquadperfile) ###****From Phase 3 of the project the Laplace smoothed unigram, bigram, trigram and quadgram models ###****using the chosen training data test folder....relies on computation of the above module for the dataframes ###****per file #Laplace smoothed unigram prob. model using prob(x) = (1 + frequency of x in corpus)/(total in corpus) def createLeplaceSmoothedUnigramModel(outset, dfperfilelist): n = 1 anoutcome = ngrams(outset,n) sumoflaplaceprob = 0 laplaceprobmodel = anoutcome for w in laplaceprobmodel: laplaceprobmodel[w] = 0 filecount = 0 for temp in anoutcome: for i in range(len(dfperfilelist)): count = dfperfilelist[i] filecount = filecount + count[temp] + 1 for keyword in anoutcome: #print(keyword) for i in range(len(dfperfilelist)): count = dfperfilelist[i] laplaceprobmodel[keyword] = laplaceprobmodel[keyword] + (count[keyword] + 1)/(filecount) sumoflaplaceprob = sumoflaplaceprob + laplaceprobmodel[keyword] #print(f"The laplaceprobmodel is \n {laplaceprobmodel}") print(f"The sum of all the unigram probabiities in the laplace smoothed model needs to be 1 and it is {sumoflaplaceprob}") return laplaceprobmodel laplacesmoothunimodel = createLeplaceSmoothedUnigramModel(tokens, dfforuniperfile) pandas.set_option("display.max_rows", 10) laplacesmoothunidf = pandas.DataFrame.from_dict(laplacesmoothunimodel, orient = 'index', columns = ['prob.']) print('Number of rows in Laplace Smoothed Unigram Prob. 
Model : ', len(laplacesmoothunidf.index)) print(laplacesmoothunidf) #Attempt to plot the unigram language model using first a Counter object COUNTLapSMOOTH1 = Counter(laplacesmoothunimodel) greatestlaplacesmoothprob1 = 0 biglaplacesmoothword1 = '' for w in COUNTLapSMOOTH1.keys(): if COUNTLapSMOOTH1[w] >= greatestlaplacesmoothprob1: biglaplacesmoothword1 = w greatestlaplacesmoothprob1 = COUNTLapSMOOTH1[w] print(f"the unigram of greatest freq in the Laplace smoothed unigram model is: {biglaplacesmoothword1} \n") MLapS1 = COUNTLapSMOOTH1[biglaplacesmoothword1] yscale('log'); xscale('log'); title('Frequency of n-th most frequent 1-itemset in laplace smoothed model and 1/n line.') plot([c for (w,c) in COUNTLapSMOOTH1.most_common()]) plot([(MLapS1)/i for i in range(1, len(COUNTLapSMOOTH1)+1)]); #Laplace smoothed bigram prob. model using prob(x) = (1 + frequency of x in corpus)/(total in corpus) def createLeplaceSmoothedBigramModel(outset, dfperfilelist): n = 2 anoutcome = ngrams(outset,n) sumoflaplaceprob = 0 laplaceprobmodel = anoutcome for w in laplaceprobmodel: laplaceprobmodel[w] = 0 filecount = 0 for temp in anoutcome: for i in range(len(dfperfilelist)): count = dfperfilelist[i] filecount = filecount + count[temp] + 1 for keyword in anoutcome: #print(keyword) for i in range(len(dfperfilelist)): count = dfperfilelist[i] #print(keyword, count[keyword], filecount) laplaceprobmodel[keyword] = laplaceprobmodel[keyword] + (count[keyword] + 1)/(filecount) #print(laplaceprobmodel[keyword], keyword) sumoflaplaceprob = sumoflaplaceprob + laplaceprobmodel[keyword] #print(sumoflaplaceprob) #print(f"The laplaeprobmodel is \n {laplaceprobmodel}") #print(f"The sum of all the probabiities needs to be 1 and it is {sumoflaplaceprob}") return laplaceprobmodel laplacesmoothbimodel = createLeplaceSmoothedBigramModel(tokens, dfforbiperfile) pandas.set_option("display.max_rows", 10) laplacesmoothbidf = pandas.DataFrame.from_dict(laplacesmoothbimodel, orient = 'index', columns = ['prob.']) print('Number of rows in Laplace Smoothed Bigram Prob. Model : ', len(laplacesmoothbidf.index)) print(laplacesmoothbidf) #Attempt to plot the bigram language model using first a Counter object COUNTLapSMOOTH2 = Counter(laplacesmoothbimodel) greatestlaplacesmoothprob2 = 0 biglaplacesmoothword2 = '' for w in COUNTLapSMOOTH2.keys(): if COUNTLapSMOOTH2[w] >= greatestlaplacesmoothprob2: biglaplacesmoothword2 = w greatestlaplacesmoothprob2 = COUNTLapSMOOTH2[w] print(f"the bigram of greatest freq in the Laplace smoothed bigram model is: {biglaplacesmoothword2} \n") MLapS2 = COUNTLapSMOOTH2[biglaplacesmoothword2] yscale('log'); xscale('log'); title('Frequency of n-th most frequent 2-itemset in laplace smoothed model and 1/n line.') plot([c for (w,c) in COUNTLapSMOOTH2.most_common()]) plot([(MLapS2)/i for i in range(1, len(COUNTLapSMOOTH2)+1)]); #Laplace smoothed trigram prob. 
model using prob(x) = (1 + frequency of x in corpus)/(total in corpus) def createLeplaceSmoothedTrigramModel(outset, dfperfilelist): n = 3 anoutcome = ngrams(outset,3) sumoflaplaceprob = 0 laplaceprobmodel = anoutcome for w in laplaceprobmodel: laplaceprobmodel[w] = 0 filecount = 0 for temp in anoutcome: for i in range(len(dfperfilelist)): count = dfperfilelist[i] filecount = filecount + count[temp] + 1 for keyword in anoutcome: #print(keyword) for i in range(len(dfperfilelist)): count = dfperfilelist[i] #print(keyword, count[keyword], filecount) laplaceprobmodel[keyword] = laplaceprobmodel[keyword] + (count[keyword] + 1)/(filecount) #print(laplaceprobmodel[keyword], keyword) sumoflaplaceprob = sumoflaplaceprob + laplaceprobmodel[keyword] #print(sumoflaplaceprob) #print(f"The laplaeprobmodel is \n {laplaceprobmodel}") #print(f"The sum of all the trigram probabiities in the Laplace smoothed model needs to be 1 and it is {sumoflaplaceprob}") return laplaceprobmodel laplacesmoothtrimodel = createLeplaceSmoothedTrigramModel(tokens, dffortriperfile) pandas.set_option("display.max_rows", 10) laplacesmoothtridf = pandas.DataFrame.from_dict(laplacesmoothtrimodel, orient = 'index', columns = ['prob.']) print('Number of rows in Laplace Smoothed Trigram Prob. Model : ', len(laplacesmoothtridf.index)) print(laplacesmoothtridf) #Attempt to plot the trigram language model using first a Counter object COUNTLapSMOOTH3 = Counter(laplacesmoothtrimodel) greatestlaplacesmoothprob3 = 0 biglaplacesmoothword3 = '' for w in COUNTLapSMOOTH3.keys(): if COUNTLapSMOOTH3[w] >= greatestlaplacesmoothprob3: biglaplacesmoothword3 = w greatestlaplacesmoothprob3 = COUNTLapSMOOTH3[w] print(f"the trigram of greatest freq in the Laplace smoothed trigram model is: {biglaplacesmoothword3} \n") MLapS3 = COUNTLapSMOOTH3[biglaplacesmoothword3] yscale('log'); xscale('log'); title('Frequency of n-th most frequent 3-itemset in laplace smoothed model and 1/n line.') plot([c for (w,c) in COUNTLapSMOOTH3.most_common()]) plot([(MLapS3)/i for i in range(1, len(COUNTLapSMOOTH3)+1)]); #Laplace smoothed trigram prob. model using prob(x) = (1 + frequency of x in corpus)/(total in corpus) def createLeplaceSmoothedQuadgramModel(outset, dfperfilelist): n = 4 anoutcome = ngrams(outset,4) sumoflaplaceprob = 0 laplaceprobmodel = anoutcome for w in laplaceprobmodel: laplaceprobmodel[w] = 0 filecount = 0 for temp in anoutcome: for i in range(len(dfperfilelist)): count = dfperfilelist[i] filecount = filecount + count[temp] + 1 for keyword in anoutcome: #print(keyword) for i in range(len(dfperfilelist)): count = dfperfilelist[i] #print(keyword, count[keyword], filecount) laplaceprobmodel[keyword] = laplaceprobmodel[keyword] + (count[keyword] + 1)/(filecount) #print(laplaceprobmodel[keyword], keyword) sumoflaplaceprob = sumoflaplaceprob + laplaceprobmodel[keyword] #print(sumoflaplaceprob) #print(f"The laplaeprobmodel is \n {laplaceprobmodel}") #print(f"The sum of all the quadgram probabiities in the Laplace smoothed model needs to be 1 and it is {sumoflaplaceprob}") return laplaceprobmodel laplacesmoothquadmodel = createLeplaceSmoothedQuadgramModel(tokens, dfforquadperfile) pandas.set_option("display.max_rows", 10) laplacesmoothquaddf = pandas.DataFrame.from_dict(laplacesmoothquadmodel, orient = 'index', columns = ['prob.']) print('Number of rows in Laplace Smoothed Quadgram Prob. 
Model : ', len(laplacesmoothquaddf.index)) print(laplacesmoothquaddf) #Attempt to plot the quadgram language model using first a Counter object COUNTLapSMOOTH4 = Counter(laplacesmoothquadmodel) greatestlaplacesmoothprob4 = 0 biglaplacesmoothword4 = '' for w in COUNTLapSMOOTH4.keys(): if COUNTLapSMOOTH4[w] >= greatestlaplacesmoothprob4: biglaplacesmoothword4 = w greatestlaplacesmoothprob4 = COUNTLapSMOOTH4[w] print(f"the quadgram of greatest freq in the Laplace smoothed quadgram model is: {biglaplacesmoothword4} \n") MLapS4 = COUNTLapSMOOTH4[biglaplacesmoothword4] yscale('log'); xscale('log'); title('Frequency of n-th most frequent 4-itemset in laplace smoothed model and 1/n line.') plot([c for (w,c) in COUNTLapSMOOTH4.most_common()]) plot([(MLapS4)/i for i in range(1, len(COUNTLapSMOOTH4)+1)]); ranint = random.randint(0,len(laplacesmoothunimodel)-1) print(ranint) ####***** if you are feeling like generating a random seed for the text: i = 0; lapcounter = Counter(laplacesmoothunimodel) ranint = random.randint(0, len(laplacesmoothunimodel)-1) for w in laplacesmoothunimodel.keys(): if (i == ranint): seedword = w i = i + 1 print(seedword) ####**** to set the seed to one of the most common 10 unigrams: seedpossibilities = lapcounter.most_common(10) ranint = random.randint(0,9) seedtuple = seedpossibilities[ranint] seedword = seedtuple[0] print(seedword) #### from phase 2, The team kept both ngrams method and newngram method for computing the ###unigrams, bigrams, trigrams and quadgrams smoothed models.... ###output of newngram is a Counter obj and output of ngrams is a dictionary object... #newngram outputs to files: #the most common unigrams are set to unigramfile.dat #the most common bigrams are set to bigramfile.dat #the most common trigrams are set to trigramfile.dat #the most common quadgrams are set to quadgramfile.dat #!!!newngram again returns a Counter object def newngram(toks, n): output = {} for i in range(len(toks) - n + 1): g = ' '.join(toks[i:i+n]) output.setdefault(g, 0) output[g] += 1 COUNTS = Counter(output) outputstring = '' outputstring = outputstring + str(COUNTS.most_common(3000)) + " " if n == 1: #print(f"\n The most common unigrams are: {(COUNTS.most_common(10))}") f=open("unigramfile.dat","w+", encoding='utf-8', errors='replace') f.write(str(sum(COUNTS.values()))) f.write(str(COUNTS.most_common(3000))) #trying to keep file size at about 50 k for this sample outputstring = outputstring + str(COUNTS.most_common(3000)) + " " f.close() if n == 2: #print(f"\n The most common bigrams are: {(COUNTS.most_common(10))}") f=open("bigramfile.dat","w+", encoding='utf-8', errors='replace') f.write(str(sum(COUNTS.values()))) f.write(str(COUNTS.most_common(2700))) #trying to keep file size at about 50 k for this sample outputstring = outputstring + str(COUNTS.most_common(2700)) + " " f.close() if n == 3: #print(f"\n The most common trigrams are: {(COUNTS.most_common(10))}") f=open("trigramfile.dat","w+", encoding='utf-8', errors='replace') f.write(str(sum(COUNTS.values()))) f.write(str(COUNTS.most_common(2300))) #trying to keep file size at about 50 k for this sample outputstring = outputstring + str(COUNTS.most_common(2300)) + " " f.close() if n == 4: #print(f"\n The most common quadgrams are: {(COUNTS.most_common(10))}") f=open("quadgramfile.dat","w+", encoding='utf-8', errors='replace') f.write(str(sum(COUNTS.values()))) f.write(str(COUNTS.most_common(2100))) #trying to keep file size at about 50 k for this sample outputstring = outputstring + str(COUNTS.most_common(2100)) + " " 
f.close() return output ###!!!! THESE COUNTS WILL BE USED IN THE TEXT GENERATION METHOD BELOW...THE PHASE 3 LAPLACE SMOOTH ###!!! MODELS ARE CREATED IMPLICITLY WITHIN THE GENERATING TEXT MODULE newunigrams = newngram(tokens, 1) print(unigrams) newbigrams = newngram(tokens, 2) #print(bigrams) newtrigrams = newngram(tokens, 3) #print(trigrams) newquadgrams = newngram(tokens, 4) #print(quadgrams) ####piecing together the development from when we posted the most common unigrams, bigrams ####trigrams and quadgrams to the files..now we are able to use them in this application ####to generate text.... unigramfile="unigramfile.dat" bigramfile="bigramfile.dat" trigramfile="trigramfile.dat" quadgramfile="quadgramfile.dat" with open(unigramfile, 'rb', 0) as file, \ mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_READ) as text: unigramtotal= text.read(text.find(b'[')).decode('utf-8') unigrams= text.read(text.find(b']')).decode('utf-8') with open(bigramfile, 'rb', 0) as file, \ mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_READ) as text: bigramtotal= text.read(text.find(b'[')).decode('utf-8') bigrams= text.read(text.find(b']')).decode('utf-8') with open(trigramfile, 'rb', 0) as file, \ mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_READ) as text: trigramtotal= text.read(text.find(b'[')).decode('utf-8') trigrams= text.read(text.find(b']')).decode('utf-8') with open(quadgramfile, 'rb', 0) as file, \ mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_READ) as text: quadgramtotal= text.read(text.find(b'[')).decode('utf-8') quadgrams= text.read(text.find(b']')).decode('utf-8') words=unigrams.replace('[','').replace(']','').replace('(','').replace(')','').split(',') unigramrange=0 for w in range(1,len(words)-1,2): unigramrange +=int(words[w]) ####****Finally Phase 4! Here is the geneartion method. In the cell above, ####****you created a seedword for generating the text...either from the most frequent 10 ####****unigramsstring or a random unigram... def generate(seedtext, length): if length==0: output='' print("Scroll down for the final result. 
Here is the process used:\n") for gword in range(1,length+1): if gword > 3: # use quadgram model print(f"Searching quadigrams for '{currenttrigram}'.") quadgramoccurance = [] quadgramoccurance.append(quadgrams.find("'" + currenttrigram + ' ')) if quadgramoccurance[0] > -1: possiblequadgram = [] possiblequadfrequency = [] possiblequadtotalfreq = 0 n = 0 possiblequadgram.append(quadgrams[quadgramoccurance[n]+len(currenttrigram)+2:quadgrams.find("'",quadgramoccurance[n]+len(currenttrigram)+2)]) try: possiblequadfrequency.append(int(quadgrams[quadgrams.find("', ",quadgramoccurance[n])+3:quadgrams.find(")",quadgramoccurance[n])])) possiblequadtotalfreq += possiblequadfrequency[n] except: print("Error") n += 1 while True: quadgramoccurance.append(quadgrams.find("'" + currenttrigram + ' ', quadgramoccurance[n-1]+1)) if quadgramoccurance[n] == -1: break possiblequadgram.append(quadgrams[quadgramoccurance[n]+len(currenttrigram)+2:quadgrams.find("'",quadgramoccurance[n]+len(currenttrigram)+2)]) try: possiblequadfrequency.append(int(quadgrams[quadgrams.find("', ",quadgramoccurance[n])+3:quadgrams.find(")",quadgramoccurance[n])])) possiblequadtotalfreq += possiblequadfrequency[n] except: print("Error") break n += 1 rand=random.randint(0,possiblequadtotalfreq) look = rand for w in range(0,n): look = look - possiblequadfrequency[w] if look < 0: nextword = possiblequadgram[w] break print(f" Out of {possiblequadtotalfreq} occurances in the quadgram model the following word:") for w in range(0,n): print(f" '{possiblequadgram[w]}' appeared {possiblequadfrequency[w]} times,") print(f" From the {n} possibilities, we randomly chose '{nextword}'.") else: print(f" Not found. Searching trigrams for '{currentbigram}'.") trigramoccurance = [] trigramoccurance.append(trigrams.find("'" + currentbigram + ' ')) if trigramoccurance[0] > -1: possibletrigram = [] possibletrifrequency = [] possibletritotalfreq = 0 n = 0 possibletrigram.append(trigrams[trigramoccurance[n]+len(currentbigram)+2:trigrams.find("'",trigramoccurance[n]+len(currentbigram)+2)]) try: possibletrifrequency.append(int(trigrams[trigrams.find("', ",trigramoccurance[n])+3:trigrams.find(")",trigramoccurance[n])])) possibletritotalfreq += possibletrifrequency[n] except: print("Error") n += 1 while True: trigramoccurance.append(trigrams.find("'" + currentbigram + ' ', trigramoccurance[n-1]+1)) if trigramoccurance[n] == -1: break possibletrigram.append(trigrams[trigramoccurance[n]+len(currentbigram)+2:trigrams.find("'",trigramoccurance[n]+len(currentbigram)+2)]) try: possibletrifrequency.append(int(trigrams[trigrams.find("', ",trigramoccurance[n])+3:trigrams.find(")",trigramoccurance[n])])) possibletritotalfreq += possibletrifrequency[n] except: print("Error") break n += 1 rand=random.randint(0,possibletritotalfreq) look = rand for w in range(0,n): look = look - possibletrifrequency[w] if look < 0: nextword = possibletrigram[w] break print(f" Out of {possibletritotalfreq} occurances in the trigram model the following word:") for w in range(0,n): print(f" '{possibletrigram[w]}' appeared {possibletrifrequency[w]} times,") print(f" From the {n} possibilities, we randomly chose '{nextword}'.") else: print(f" Not found. 
Searching bigrams for '{currentword}'.") bigramoccurance = [] bigramoccurance.append(bigrams.find("'" + currentword + ' ')) if bigramoccurance[0] > -1: possiblebigram = [] possiblebifrequency = [] possiblebitotalfreq = 0 n = 0 possiblebigram.append(bigrams[bigramoccurance[n]+len(currentword)+2:bigrams.find("'",bigramoccurance[n]+len(currentword)+2)]) try: possiblebifrequency.append(int(bigrams[bigrams.find("', ",bigramoccurance[n])+3:bigrams.find(")",bigramoccurance[n])])) possiblebitotalfreq += possiblebifrequency[n] except: print("Error") n += 1 while True: bigramoccurance.append(bigrams.find("'" + currentword + ' ', bigramoccurance[n-1]+1)) nextword = bigrams[bigrams.find("'" + currentword + ' ')+len(currentword)+2:bigrams.find("'",bigrams.find(currentword + ' '))] if bigramoccurance[n] == -1: break possiblebigram.append(bigrams[bigramoccurance[n]+len(currentword)+2:bigrams.find("'",bigramoccurance[n]+len(currentword)+2)]) try: possiblebifrequency.append(int(bigrams[bigrams.find("', ",bigramoccurance[n])+3:bigrams.find(")",bigramoccurance[n])])) possiblebitotalfreq += possiblebifrequency[n] except: print("Error") break n += 1 rand=random.randint(0,possiblebitotalfreq) look = rand for w in range(0,n): look = look - possiblebifrequency[w] if look < 0: nextword = possiblebigram[w] break print(f" Out of {possiblebitotalfreq} occurances in the bigram model the following word:") for w in range(0,n): print(f" '{possiblebigram[w]}' appeared {possiblebifrequency[w]} times,") print(f" From the {n} possibilities, we randomly chose '{nextword}'.") else: rand=random.randint(0,unigramrange) look = rand for w in range(1,len(words)-1,2): look = look - int(words[w]) if look < 0: nextword = words[w-1][2:-1] break print(f" Not found. We randomly choose '{nextword}'.") pastword =currentword currentword=nextword currentquadgram=currenttrigram + ' ' + currentword currenttrigram=currentbigram + ' ' + currentword currentbigram=pastword + ' ' + currentword output+=' ' + currentword else: if gword == 3: # use trigram model print(f" Searching trigrams for '{currentbigram}'.") trigramoccurance = [] trigramoccurance.append(trigrams.find("'" + currentbigram + ' ')) if trigramoccurance[0] > -1: possibletrigram = [] possibletrifrequency = [] possibletritotalfreq = 0 n = 0 possibletrigram.append(trigrams[trigramoccurance[n]+len(currentbigram)+2:trigrams.find("'",trigramoccurance[n]+len(currentbigram)+2)]) try: possibletrifrequency.append(int(trigrams[trigrams.find("', ",trigramoccurance[n])+3:trigrams.find(")",trigramoccurance[n])])) possibletritotalfreq += possibletrifrequency[n] except: print("Error") n += 1 while True: trigramoccurance.append(trigrams.find("'" + currentbigram + ' ', trigramoccurance[n-1]+1)) if trigramoccurance[n] == -1: break possibletrigram.append(trigrams[trigramoccurance[n]+len(currentbigram)+2:trigrams.find("'",trigramoccurance[n]+len(currentbigram)+2)]) try: possibletrifrequency.append(int(trigrams[trigrams.find("', ",trigramoccurance[n])+3:trigrams.find(")",trigramoccurance[n])])) possibletritotalfreq += possibletrifrequency[n] except: print("Error") break n += 1 rand=random.randint(0,possibletritotalfreq) look = rand for w in range(0,n): look = look - possibletrifrequency[w] if look < 0: nextword = possibletrigram[w] break print(f" Out of {possibletritotalfreq} occurances in the trigram model the following word:") for w in range(0,n): print(f" '{possibletrigram[w]}' appeared {possibletrifrequency[w]} times,") print(f" From the {n} possibilities, we randomly chose '{nextword}'.") else: 
print(f" Not found. Searching bigrams for '{currentword}'.") bigramoccurance = [] bigramoccurance.append(bigrams.find("'" + currentword + ' ')) if bigramoccurance[0] > -1: possiblebigram = [] possiblebifrequency = [] possiblebitotalfreq = 0 n = 0 possiblebigram.append(bigrams[bigramoccurance[n]+len(currentword)+2:bigrams.find("'",bigramoccurance[n]+len(currentword)+2)]) try: possiblebifrequency.append(int(bigrams[bigrams.find("', ",bigramoccurance[n])+3:bigrams.find(")",bigramoccurance[n])])) possiblebitotalfreq += possiblebifrequency[n] except: print("Error") n += 1 while True: bigramoccurance.append(bigrams.find("'" + currentword + ' ', bigramoccurance[n-1]+1)) if bigramoccurance[n] == -1: break possiblebigram.append(bigrams[bigramoccurance[n]+len(currentword)+2:bigrams.find("'",bigramoccurance[n]+len(currentword)+2)]) try: possiblebifrequency.append(int(bigrams[bigrams.find("', ",bigramoccurance[n])+3:bigrams.find(")",bigramoccurance[n])])) possiblebitotalfreq += possiblebifrequency[n] except: print("Error") break n += 1 rand=random.randint(0,possiblebitotalfreq) look = rand for w in range(0,n): look = look - possiblebifrequency[w] if look < 0: nextword = possiblebigram[w] break print(f" Out of {possiblebitotalfreq} occurances in the bigram model the following word:") for w in range(0,n): print(f" '{possiblebigram[w]}' appeared {possiblebifrequency[w]} times,") print(f" From the {n} possibilities, we randomly chose '{nextword}'.") else: rand=random.randint(0,unigramrange) look = rand for w in range(1,len(words)-1,2): look = look - int(words[w]) if look < 0: nextword = words[w-1][2:-1] break print(f" Not found. We randomly choose '{nextword}'.") pastword =currentword currentword=nextword currenttrigram=currentbigram + ' ' + currentword currentbigram=pastword + ' ' + currentword output+=' ' + currentword elif gword == 2: # use bigram model print(f" Searching bigrams for '{currentword}'.") bigramoccurance = [] bigramoccurance.append(bigrams.find("'" + currentword + ' ')) if bigramoccurance[0] > -1: possiblebigram = [] possiblebifrequency = [] possiblebitotalfreq = 0 n = 0 possiblebigram.append(bigrams[bigramoccurance[n]+len(currentword)+2:bigrams.find("'",bigramoccurance[n]+len(currentword)+2)]) try: possiblebifrequency.append(int(bigrams[bigrams.find("', ",bigramoccurance[n])+3:bigrams.find(")",bigramoccurance[n])])) possiblebitotalfreq += possiblebifrequency[n] except: print("Error") n += 1 while True: bigramoccurance.append(bigrams.find("'" + currentword + ' ', bigramoccurance[n-1]+1)) if bigramoccurance[n] == -1: break possiblebigram.append(bigrams[bigramoccurance[n]+len(currentword)+2:bigrams.find("'",bigramoccurance[n]+len(currentword)+2)]) try: possiblebifrequency.append(int(bigrams[bigrams.find("', ",bigramoccurance[n])+3:bigrams.find(")",bigramoccurance[n])])) possiblebitotalfreq += possiblebifrequency[n] except: print("Error") break n += 1 rand=random.randint(0,possiblebitotalfreq) look = rand for w in range(0,n): look = look - possiblebifrequency[w] if look < 0: nextword = possiblebigram[w] break print(f" Out of {possiblebitotalfreq} occurances in the bigram model the following word:") for w in range(0,n): print(f" '{possiblebigram[w]}' appeared {possiblebifrequency[w]} times,") print(f" From the {n} possibilities, we randomly chose '{nextword}'.") else: rand=random.randint(0,unigramrange) look = rand for w in range(1,len(words)-1,2): look = look - int(words[w]) if look < 0: nextword = words[w-1][2:-1] break print(f" Not found. 
We randomly choose '{nextword}'.") pastword =currentword currentword=nextword currentbigram=pastword + ' ' + currentword output+=' ' + currentword elif gword == 1: # check seedtext for char in range(0,len(seedtext)): maybe = seedtext[:len(seedtext)-char] print(f" Searching unigrams for '{maybe}'.") if unigrams.find(maybe) > -1: if unigrams[unigrams.find(maybe)-1] == "'": print(" Found the word (or a word that starts with it).") currentword = unigrams[unigrams.find(maybe):unigrams.find("'",unigrams.find(maybe))] break print(" Not found. Dropping a letter") currentword = "" if currentword == "": rand=random.randint(0,unigramrange) look = rand for w in range(1,len(words)-1,2): look = look - int(words[w]) if look < 0: currentword = words[w-1][2:-1] break print(f" We randomly choose '{currentword}'.") output=currentword print(f"\n\n Given '{seedtext}', our initial model generates the following {length} words:\n\n{output.replace(' .','.')}") generate(seedword, 100) ```
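The generator above recovers bigram and trigram continuations by scanning a stringified frequency list with `str.find`, which is fragile and hard to follow. Purely as a point of comparison, here is a minimal sketch of the same frequency-weighted choice built on a plain dictionary; the table `bigram_counts`, the example counts, and the fallback word are illustrative assumptions, not the notebook's actual data structures.

```
import random
from collections import defaultdict

# Hypothetical bigram table: {first_word: {next_word: count}}.
# The notebook instead stores these counts in one large string and scans it with str.find.
bigram_counts = defaultdict(dict)
bigram_counts["the"] = {"cat": 12, "dog": 7, "end": 3}

def sample_next(context, counts, fallback="the"):
    """Pick the next word with probability proportional to its observed frequency after `context`."""
    options = counts.get(context)
    if not options:                      # back off if the context was never seen
        return fallback
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next("the", bigram_counts))
```

The same pattern extends to the trigram and quadgram cases by keying the dictionary on a tuple of the last two or three words instead of a single word.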
[this doc on github](https://github.com/dotnet/interactive/tree/master/samples/notebooks/fsharp/Docs) # Object formatters ## Default formatting behaviors When you return a value or a display a value in a .NET notebook, the default formatting behavior is to try to provide some useful information about the object. If it's an array or other type implementing `IEnumerable`, that might look like this: ``` display ["hello"; "world"] Enumerable.Range(1, 5) ``` As you can see, the same basic structure is used whether you pass the object to the `display` method or return it as the cell's value. Similarly to the behavior for `IEnumerable` objects, you'll also see table output for dictionaries, but for each value in the dictionary, the key is provided rather than the index within the collection. ``` // Cannot simply use 'dict' here, see https://github.com/dotnet/interactive/issues/12 let d = dict [("zero", 0); ("one", 1); ("two", 2)] System.Collections.Generic.Dictionary<string, int>(d) ``` The default formatting behavior for other types of objects is to produce a table showing their properties and the values of those properties. ``` type Person = { FirstName: string; LastName: string; Age: int } // Evaluate a new person { FirstName = "Mitch"; LastName = "Buchannon"; Age = 42 } ``` When you have a collection of such objects, you can see the values listed for each item in the collection: ``` let people = [ { FirstName = "Mitch"; LastName = "Buchannon"; Age = 42 } { FirstName = "Hobie "; LastName = "Buchannon"; Age = 23 } { FirstName = "Summer"; LastName = "Quinn"; Age = 25 } { FirstName = "C.J."; LastName = "Parker"; Age = 23 } ] people ``` Now let's try something a bit more complex. Let's look at a graph of objects. We'll redefine the `Person` class to allow a reference to a collection of other `Person` instances. ``` type Person = { FirstName: string LastName: string Age: int Friends: ResizeArray<Person> } let mitch = { FirstName = "Mitch"; LastName = "Buchannon"; Age = 42; Friends = ResizeArray() } let hobie = { FirstName = "Hobie "; LastName = "Buchannon"; Age = 23; Friends = ResizeArray() } let summer = { FirstName = "Summer"; LastName = "Quinn"; Age = 25; Friends = ResizeArray() } mitch.Friends.AddRange([ hobie; summer ]) hobie.Friends.AddRange([ mitch; summer ]) summer.Friends.AddRange([ mitch; hobie ]) let people = [ mitch; hobie; summer ] display people ``` That's a bit hard to read, right? The defaut formatting behaviors are thorough, but that doesn't always mean they're as useful as they might be. In order to give you more control in these kinds of cases, the object formatters can be customized from within the .NET notebook. ## Custom formatters Let's clean up the output above by customizing the formatter for the `Person.Friends` property, which is creating a lot of noise. The way to do this is to use the `Formatter` API. This API lets you customize the formatting for a specific type. Since `Person.Friends` is of type `ResizeArray<Person>`, we can register a custom formatter for that type to change the output. Let's just list their first names: ``` Formatter<ResizeArray<Person>>.Register( fun people writer -> for person in people do writer.Write("person") , mimeType = "text/plain") people ``` You might have noticed that `people` is of type `ResizeArray<Person>`, but the table output still includes columns for `LastName`, `Age`, and `Friends`. What's going on here? Notice that the custom formatter we just registered was registered for the mime type `"text/plain"`. 
The top-level formatter that's used when we call `display` requests output of mime type `"text/html"`, and the nested objects are formatted using `"text/plain"`. It's the nested objects, not the top-level HTML table, that are using the custom formatter here.

With that in mind, we can make the output even more concise by registering a formatter for `Person`:

```
Formatter<Person>.Register(
    fun person writer ->
        writer.Write(person.FirstName)
    , mimeType = "text/plain");

people
```

Of course, you might not want table output. To replace the default HTML table view, you can register a formatter for the `"text/html"` mime type. Let's do that, and write some HTML using PocketView.
# MAT281 - Lab N°06

## Problem 01

<img src="./images/logo_iris.jpg" width="360" height="360" align="center"/>

The **Iris dataset** is a data set containing samples of three Iris species (Iris setosa, Iris virginica and Iris versicolor). Four traits were measured on each sample: the length and width of the sepal and the petal, in centimeters.

The first step is to load the data set and look at its first rows:

```
# libraries
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

pd.set_option('display.max_columns', 500)  # show more dataframe columns

# show matplotlib plots inside jupyter notebook/lab
%matplotlib inline

# load data
df = pd.read_csv(os.path.join("data","iris_contaminados.csv"))
df.columns = ['sepalLength', 'sepalWidth', 'petalLength', 'petalWidth', 'species']
df.head()
```

### Basis of the experiment

The first step is to identify the variables that influence the study and their nature.

* **species**:
    * Description: name of the Iris species.
    * Data type: *string*
    * Constraints: only three types exist (setosa, virginica and versicolor).
* **sepalLength**:
    * Description: sepal length.
    * Data type: *integer*.
    * Constraints: values lie between 4.0 and 7.0 cm.
* **sepalWidth**:
    * Description: sepal width.
    * Data type: *integer*.
    * Constraints: values lie between 2.0 and 4.5 cm.
* **petalLength**:
    * Description: petal length.
    * Data type: *integer*.
    * Constraints: values lie between 1.0 and 7.0 cm.
* **petalWidth**:
    * Description: petal width.
    * Data type: *integer*.
    * Constraints: values lie between 0.1 and 2.5 cm.

Your objective is to carry out a proper **E.D.A.**; to do so, follow these instructions:

1. Count the elements of the **species** column and correct them according to your own judgment. Replace nan values with "default".

```
df['species'].fillna('default',inplace=True)
df
df['species'].value_counts()

# let's look at the values species can take
df['species'].value_counts().index

# strip spaces and lowercase everything
df['species'].astype('str')
df['species'] = df['species'].str.lower().str.strip()
df['species'].value_counts().index
```

2. Draw a box plot of the petal and sepal lengths and widths. Replace nan values with **0**.

```
df['sepalLength'].fillna(0,inplace=True)
df['sepalWidth'].fillna(0,inplace=True)
df['petalLength'].fillna(0,inplace=True)
df['petalWidth'].fillna(0,inplace=True)

df_1=df.drop(['species'], axis=1)
sns.boxplot(data=df_1)
```

3. A range of valid values for the petal and sepal lengths and widths was defined above. Add a column named **label** that identifies which of these values falls outside the valid range.

```
lista_label = []
for i in range(len(df)):
    if df['sepalLength'][i]<4.0 or df['sepalLength'][i]>7.0:
        lista_label.append("sepalLength")
    elif df['sepalWidth'][i]<2.0 or df['sepalWidth'][i]>4.5:
        lista_label.append("sepalWidth")
    elif df['petalLength'][i]<1.0 or df['petalLength'][i]>7.0:
        lista_label.append("petalLength")
    elif df['petalWidth'][i]<0.1 or df['petalWidth'][i]>2.5:
        lista_label.append("petalWidth")
    else:
        lista_label.append('Dentro de los rangos')  # "within the ranges"
df['label']=lista_label
df
```

4. Plot *sepalLength* vs *petalLength* and *sepalWidth* vs *petalWidth*, categorized by the **label** column. Discuss your results.

```
# figure size
plt.figure(figsize=(10, 5))

# plot
sns.scatterplot(
    x='petalLength',
    y='sepalLength',
    data=df,
    hue='label',
)

# figure size
plt.figure(figsize=(10, 5))

# plot
sns.scatterplot(
    x='petalWidth',
    y='sepalWidth',
    data=df,
    hue='label',
)
```

In both plots we see that the sepal and petal lengths and widths mostly lie within the valid ranges; we can also see that sepalLength is the column with the most out-of-range values.

5. Filter the valid data and plot *sepalLength* vs *petalLength* categorized by the **species** column.

```
mask_sl = df['sepalLength']<=7.0
mask_sl1 = df['sepalLength']>=4.0
mask_sw = df['sepalWidth']<=4.5
mask_sw1 = df['sepalWidth']>=2.0
mask_pl = df['petalLength']<=7.0
mask_pl1 = df['petalLength']>=1.0
mask_pw = df['petalWidth']<=2.5
mask_pw1 = df['petalWidth']>=0.1

df_filtrado = df[mask_sl & mask_sw & mask_pl & mask_pw & mask_sl1 & mask_sw1 & mask_pl1 & mask_pw1]
df_filtrado

# figure size
plt.figure(figsize=(10, 5))

# plot
sns.scatterplot(
    x='petalLength',
    y='sepalLength',
    data=df_filtrado,
    hue='species',
)
```
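As a side note, the eight separate masks above can be written more compactly with `Series.between`, which is inclusive on both ends and therefore selects exactly the same rows. This is only a sketch and assumes the column names and valid ranges defined earlier in the lab:

```
# Compact version of the validity filter above; Series.between is inclusive on both ends.
valid = (
    df['sepalLength'].between(4.0, 7.0)
    & df['sepalWidth'].between(2.0, 4.5)
    & df['petalLength'].between(1.0, 7.0)
    & df['petalWidth'].between(0.1, 2.5)
)
df_filtrado = df[valid]
```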
# Polarization **Prerequisite: Basic Introduction** In this notebook parameterization of the polarization primitives and few methods derived from the primitives are presented. In particular, setting up of * General parameters * Current/voltage range * Current/voltage limits * Online display * Tolerance limits is explained in detail. **Test object:** Polarization measurements are carried out for a 200 F, 2.7 V supercapacitor. ``` from zahner_potentiostat.scpi_control.searcher import SCPIDeviceSearcher from zahner_potentiostat.scpi_control.serial_interface import SerialCommandInterface, SerialDataInterface from zahner_potentiostat.scpi_control.control import * from zahner_potentiostat.scpi_control.datahandler import DataManager from zahner_potentiostat.scpi_control.datareceiver import TrackTypes from zahner_potentiostat.display.onlinedisplay import OnlineDisplay from jupyter_utils import executionInNotebook, notebookCodeToPython if __name__ == '__main__': deviceSearcher = SCPIDeviceSearcher() deviceSearcher.searchZahnerDevices() commandSerial, dataSerial = deviceSearcher.selectDevice() ZahnerPP2x2 = SCPIDevice(SerialCommandInterface(commandSerial), SerialDataInterface(dataSerial)) ``` # Setting up general parameters At first, general parameters are set which will be used in all primitives to be executed. [setRaiseonErrorEnabled(True)](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html?highlight=setraiseonerrorenabled#zahner_potentiostat.scpi_control.control.SCPIDevice.setRaiseOnErrorEnabled) enables that every error that comes back from the device triggers an exception. By default, it is turned off and errors are only printed on the console. The next command sets the sampling frequency to 50 Hz. A maximum sampling frequency of 100 Hz is possible. ``` ZahnerPP2x2.setRaiseOnErrorEnabled(True) ZahnerPP2x2.setSamplingFrequency(50) ``` The **mains power-line frequency** of the device is pre-set to the customer's mains frequency before delivery. To ensure the correct frequency value, the user must also provide the **mains power-line frequency** with which the device will be operated. **The mains frequency is stored in the device's internal memory and remains stored even after a software update or a reboot hence providing mains frequency with every script execution is not necessay**. ``` ZahnerPP2x2.setLineFrequency(50) ``` Each potentiostat is factory-calibrated before delivery. The calibration is carried out after potentiostat's warm-up time of 30 minutes. With the following primitive, the users may calibrate the potentiostat again. However it is strongly recommended to start calibration after a warm up time of 30 minutes. The calibration only takes a few seconds. <div class="alert alert-block alert-warning"> <b>Warning:</b> The offsets must be calibrated manually by calling this method after the instrument has been warmed up for at least 30 minutes. If a cold instrument is calibrated, the offsets will be worse when the instrument will be warm during operation. </div> If the device repeatedly displays an error code after calibration, there may be a defect in the device. In this case, please contact your Zahner support representive or Zahner distributor. ``` ZahnerPP2x2.calibrateOffsets() ``` ## Set current ranging parameters In the following code, at first, autoranging of the current range is carried out. If possible, **autoranging should be avoided**, as autoranges provide noisy measurement results, since the shunt change causes disturbances. 
It also takes time for the measuring device to find the correct current range, during which time-sensitive measurement data may be lost. In order to see less disturbances during autoranging, the interpolation is switched on [setInterpolationEnabled(True)](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html?highlight=interpolation#zahner_potentiostat.scpi_control.control.SCPIDevice.setInterpolationEnabled). With this, the measurement data is linearly interpolated for the duration of the shunt change and voltage disturbances are reduced. Depending on the measurement object, this may works for better or for worse. Zahner's potentiostats (PP2X2) have four shunts (0, 1, 2, and 3). In this section, shunt 1 is selected because with supercapacitor, a voltage jump is initially measured which leads to a substantial current flow in the supercapacitor. Hence Shunt 1 is used as it covers a big current range. To get further information about the suitable shunts for different current ranges, please check the respective [manual of the potentiostat](http://zahner.de/files/power_potentiostats.pdf). Shunt 0 is only be used when PP2X2 potentiostats are used as EPC devices with the Zennium series potentiostats. Alternatively, the [setCurrentRange()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setCurrentRange) method can also be used to select an appropriate current range to match the expected currents. Finally, the shunts limits in which autoranging is possible are set. In stand-alone mode, only DC measurements are possible. ``` ZahnerPP2x2.setAutorangingEnabled(True) ZahnerPP2x2.setInterpolationEnabled(True) ZahnerPP2x2.setShuntIndex(1) #or ZahnerPP2x2.setCurrentRange(20) ZahnerPP2x2.setMinimumShuntIndex(1) ZahnerPP2x2.setMaximumShuntIndex(3) ``` ## Set voltage range The voltage range index must be selected manually. This is not switched automatically. The range can be set like the current by shunt index or by desired maximum working range. The maximum voltages for each range can be found in the manual of the potentiostat. ``` ZahnerPP2x2.setVoltageRangeIndex(0) #or ZahnerPP2x2.setVoltageRange(2.5) ``` ## Set current and voltage limits **The voltage limits are always absolute values and not related to the OCV.** If the limits are exceeded, the potentiostat switches off and the device assumes an error state. <div class="alert alert-block alert-danger"> <b>Danger:</b> Limits are monitored only in primitives. If only the potentiostat is switched on, neither measurement nor limits are monitored. </div> With ZahnerPP2x2.[clearState()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.clearState), the error state could be cleared so that primitives can be executed. If an attempt is made to execute primitives in the error state, an error message is displayed. In the following code, current range of $\pm$ 30 A and voltage are $\pm$ 5 V are set and enabled. ``` ZahnerPP2x2.setMinimumCurrentGlobal(-30) ZahnerPP2x2.setMaximumCurrentGlobal(30) ZahnerPP2x2.setGlobalCurrentCheckEnabled(True) ZahnerPP2x2.setMinimumVoltageGlobal(0) ZahnerPP2x2.setMaximumVoltageGlobal(2.5) ZahnerPP2x2.setGlobalVoltageCheckEnabled(True) ``` # Starting the live data display With the following command, a plotting window can be opened, in which the measured voltage and current values from the measuring device are displayed live. 
The function executionInNotebook() is used to check if the execution is taking place in Jupyter notebook or not. As the Jupyter cannot display the live measured data so if the execution take place in Jupyter notebook then the online display will not be executed. ``` onlineDisplay = None if executionInNotebook() == False: onlineDisplay = OnlineDisplay(ZahnerPP2x2.getDataReceiver()) ``` # Polarisation at OCV/OCP with change tolerance abort As a first example, a potential jump is output at the open circuit voltage/potential of a supercap, then polarization is performed until the current change is below a defined value. In the following, OCV is always used, which is the same as OCP. For this measurement, it would be better to measure with the autoranging switched off, since the ranging is slower than the current change during the potential jump. The largest current range is set because a large current will flow during the jump. ``` ZahnerPP2x2.setAutorangingEnabled(False) ZahnerPP2x2.setShuntIndex(1) ``` ## Setting the measurement on the OCV In order to measure with a defined potential, potentiostatic mode is chosen. ``` ZahnerPP2x2.setCoupling(COUPLING.POTENTIOSTATIC) ``` The first command [setVoltageRelation()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setVoltageRelation) sets the voltage value, assigned to the [setVoltageValue()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setVoltageValue). The [setVoltageValue()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setVoltageValue) or [setCurrentValue()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setCurrentValue) set the voltage or current values which are applied when the potentiostat is switched on, without starting a primitive. In this state, the voltage or current are not recorded. Only when a primitive is run, recording of voltage or current (as well as autoranging) is carried out. The second command [setVoltageParameterRelation()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setVoltageParameterRelation) set the value defined in the [setVoltageParameter()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setVoltageParameter). This command tells the device the voltage or current parameter needed in potentiostatic or galvanostatic methods. [RELATION.ZERO](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.RELATION) defines that absolute voltages are concerned. With the command [measureOCV()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.measureOCV), the open circuit voltage is defined, to which the voltage values are refered. ``` ZahnerPP2x2.setVoltageRelation(RELATION.OCV) ZahnerPP2x2.setVoltageParameterRelation("OCV") ``` ## Setting up tolerance limits In the subsection, the tolerance limits are defined. The tolerance limit refers to the complementary quantity (current in the potentiostatic and the voltage in the galvanostatic mode). In the OCV scan, this value also refers to the voltage. The absolute tolerance is defined as a change in amperes or volts per second. 
To proceed to the next primitive, the value of current or voltage change per second should fall below the defined limit. The relative tolerance is related to the current or voltage value at the start of the primitive. Absolute and relative tolerances are defined as following $Absolute Tolerance = \frac{X_{n}-X_{n-1}}{t_{n}-t_{n-1}}$ $Relative Tolerance = \frac{Absolute Tolerance}{X_{0}}$ In the following example, the absolute tolerance is set to 1 $\frac{mA}{s}$. Here as the relative tolerance is not needed so it is set to 0. The tolerance check must be activated so that the primitive can be aborted when the tolerance limits are met. ``` ZahnerPP2x2.setAbsoluteTolerance(0.001) ZahnerPP2x2.setRelativeTolerance(0.000) ZahnerPP2x2.setToleranceBreakEnabled(True) ``` A minumum and maximum time can also be defined in regards to the tolerance limits. The minimum time provides the times for which the test object should be polarized. If the voltage or current tolerance is met before the minimum defined time is passed then the polarization is carried out till the minimum time is passed. In the following example, 10 seconds are selected as the minimum time. ``` ZahnerPP2x2.setMinimumTimeParameter(10) ``` If the tolerance is not reached, the polarization should be terminated after the maximum time at the latest. Here the second input possibility of the time is selected. Either the time is entered in seconds as a floating point number or as a string. As string, the user has the possibility to enter a time unit (s, min, m or h) as parameter into the method [setMaximumTimeParameter()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.setMaximumTimeParameter). ``` ZahnerPP2x2.setMaximumTimeParameter("1 m") ``` ## Execute the primitives Next, the voltage is set, which is outputted when the pot is switched on. The potentiostat is turned on before the primitives, then it stays on after the primitives as long as no global limits are exceeded in the primitive. ``` ZahnerPP2x2.setVoltageValue(0) ``` Now the open circuit voltage is measured, which is used as a reference for the commands related to the ``` print("open circuit reference voltage: " + str(ZahnerPP2x2.measureOCV()) + " V") ZahnerPP2x2.setPotentiostatEnabled(True) ZahnerPP2x2.setVoltageParameter(0.1) #OCV + 0.1 ZahnerPP2x2.measurePolarization() ZahnerPP2x2.setVoltageParameter(0) #OCV ZahnerPP2x2.measurePolarization() ZahnerPP2x2.setPotentiostatEnabled(False) ``` ## Plot the data ``` dataReceiver = ZahnerPP2x2.getDataReceiver() dataManager = DataManager(dataReceiver) dataManager.plotTIUData() ``` ## Reset configurations Now the specific example configurations are reset to the default configuration, which are not needed for the next example. ``` ZahnerPP2x2.setToleranceBreakEnabled(False) ZahnerPP2x2.setVoltageRelation(RELATION.ZERO) ZahnerPP2x2.setVoltageParameterRelation(RELATION.ZERO) dataReceiver.deletePoints() ZahnerPP2x2.setAutorangingEnabled(True) ``` # Polarization - aborted with charge limit The following example shows how polarizing primitives can be aborted on reaching a chargelimit. The settings of the global limits of current and voltage remain to protect the supercapacitor. ## Setting up the measurement Here galvanostatic measurement is performed, which makes it easier to read the charge from the diagram. However, potentiostatic charge verification would also be possible. Galvanostatic mode automatically selects the correct current range. 
The previously allowed current ranges are kept here as well. ``` ZahnerPP2x2.setCoupling(COUPLING.GALVANOSTATIC) ``` ## Setting up the charge conditions The supercapacitor is charged by 100 As and then discharged by 50 As. ``` ZahnerPP2x2.setMaximumCharge(100) ZahnerPP2x2.setMinimumCharge(-50) ZahnerPP2x2.setChargeBreakEnabled(True) ``` ## Execute the primitives The maximum time is set to 2 minutes. Charging to 100 As with 2 A charging current should take 50 seconds. Similarly discharging to -50 As with -2 A discharging currents should take 25 seconds. ``` ZahnerPP2x2.setMaximumTimeParameter("2 m") ``` Set the current to 2 A and measure the polarization to charge the supercapacitor. ``` ZahnerPP2x2.setCurrentParameter(2) ZahnerPP2x2.measurePolarization() ``` Set a discharge current of -2 A. The charging and discharging parameters can be customized individually. ``` ZahnerPP2x2.setCurrentParameter(-2) ZahnerPP2x2.measurePolarization() ``` ## Plot the data The dataManager measured data can be easily plotted. ``` dataManager.plotTIUData() ``` The voltage jump at 50 s (observed at the current polarity change) is due to the internal resistance of the supercapacitor, which is about 4 mΩ. ## Reset configurations Again the configurations are reset to the default configuration, which are not needed for the next example. ``` ZahnerPP2x2.setChargeBreakEnabled(False) dataReceiver.deletePoints() ``` # Charging and discharging routines The following example shows how to charge and discharge a supercapacitor with the polarization primitive. The settings of the global limits of current and voltage remain to protect the supercapacitor. ## Setting the measurement The supercapacitor is galvanostatically charged. ``` ZahnerPP2x2.setCoupling(COUPLING.GALVANOSTATIC) ``` ## Setting the reversal potentials The supercapactor is cycled between 1 V and 2 V. To realize this, the functional voltage drop is set to 1 V and 2 V for the polarization primitive. Here, at the beginning of the primitive the minimum and maximum values must also be observed. Depending on test object, the minimum voltage may have to be changed before the first polarization. In the case of supercapacitor, it is assumed that the supercapacitor is completely uncharged therefore a minumum voltage of 0 V is set. Functional current aborts for potentiostatic primitives also exist. ``` ZahnerPP2x2.setMinimumVoltageParameter(1) ZahnerPP2x2.setMaximumVoltageParameter(2) ZahnerPP2x2.setMinMaxVoltageParameterCheckEnabled(True) ``` Charging the super capacitor with 10 A current will charge the capacitor to 2 V in 20 seconds hence the maximum charging time is set to 30 seconds. For safety, a maximum time must always be specified with this primitive can be switched off. ``` ZahnerPP2x2.setMaximumTimeParameter("30 s") ``` ## Execute the primitives ``` ZahnerPP2x2.setCurrentParameter(10) ZahnerPP2x2.measurePolarization() ``` After charging the possibly empty electrolytic capacitor, now the lower voltage limit can be adjusted. ``` ZahnerPP2x2.setMinimumVoltageParameter(1) ``` Afterwards, two cycles of charging nad discharging are carried out. ``` cycles = 2 for i in range(cycles): ZahnerPP2x2.setCurrentParameter(-10) ZahnerPP2x2.measurePolarization() ZahnerPP2x2.setCurrentParameter(10) ZahnerPP2x2.measurePolarization() ``` At the end, the supercapcitor will have a voltage of 1 V. 
``` ZahnerPP2x2.setCurrentParameter(-10) ZahnerPP2x2.measurePolarization() ``` Instead of manually composing the charge or discharge with primitives, user may also use the charge [measureCharge()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.measureCharge) and discharge [measureDischarge()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.measureDischarge) methods, which have been programmed as an example of how primitives can be composed into more complex methods. ## Plot the data ``` dataManager.plotTIUData() ``` ## Reset configurations Now the specific example configurations are reset to the default configuration, which are not needed for the next example. ``` ZahnerPP2x2.setMinMaxVoltageParameterCheckEnabled(False) dataReceiver.deletePoints() ``` # Open circuit voltage scan The following example shows how to record the open circuit voltage scan. In principle, the open circuit voltage scan [measureOCVScan()](https://doc.zahner.de/zahner_potentiostat/scpi_control/control.html#zahner_potentiostat.scpi_control.control.SCPIDevice.measureOCVScan) is same as a galvanostatic polarization with the potentiostat turned off. The user may set a minimum and maximum voltage at which the primitive can be stopped. Also a voltage change tolerance as shown in a previous example is possible. For example, to measure OCV until the voltage change has diminshed and the OCV is stable. The settings of the global limits of current and voltage remain to protect the supercapacitor. ## Setting the measurement Only the maximum time is configured. A change in tolerance or a range limit has already been shown in the previous examples. 5 minutes measurement time is set. And the sampling rate is reduced to 1 Hz because there is less dynamic change in the measurement. ``` ZahnerPP2x2.setMaximumTimeParameter("2 min") ZahnerPP2x2.setSamplingFrequency(1) ``` ## Execute the primitive The primitve can simply be started, nothing else needs to be configured. ``` ZahnerPP2x2.measureOCVScan() ``` ## Plot the data ``` dataManager.plotTIUData() ``` # Close the connection Closing the online display when it has been opened and close the connection to the device. ``` if onlineDisplay != None: onlineDisplay.close() ZahnerPP2x2.close() print("finish") ``` # Deployment of the source code **The following instruction is not needed by the user.** It automatically extracts the pure python code from the jupyter notebook to provide it for the user. Thus the user does not need jupyter itself and does not have to copy the code manually. The code is stored in a notebook-like file with the extension .py. ``` if executionInNotebook() == True: notebookCodeToPython("Polarizations.ipynb") ```
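Independently of the Zahner API, the charge and tolerance criteria used above can be checked offline on recorded data. The sketch below uses plain NumPy on hypothetical time and current arrays (`t`, `i`); it only illustrates the definitions from the tolerance and charge sections and is not part of the instrument interface.

```
import numpy as np

# Hypothetical recorded track: time in seconds, current in amperes.
t = np.linspace(0, 50, 501)      # 50 s sampled at 10 Hz
i = np.full_like(t, 2.0)         # constant 2 A charging current

# Charge passed so far (As), the quantity monitored by the charge break:
charge = np.trapz(i, t)          # ~100 As for 2 A over 50 s

# Absolute tolerance: change of the monitored quantity per second;
# relative tolerance: the same, divided by the value at the start of the primitive.
abs_tol = np.abs(np.diff(i) / np.diff(t))
rel_tol = abs_tol / i[0]

print(f"charge = {charge:.1f} As, max |dI/dt| = {abs_tol.max():.3g} A/s, "
      f"max relative change = {rel_tol.max():.3g} 1/s")
```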
# Arrays There are several kinds of sequences in Python. A [list](lists) is one. However, the sequence type that we will use most in the class, is the array. The `numpy` package, abbreviated `np` in programs, provides Python programmers with convenient and powerful functions for creating and manipulating arrays. ``` # Load the numpy package, and call it "np". import numpy as np ``` ## Creating arrays The `array` function from the Numpy package creates an array from single values, or sequences of values. For example, remember `my_list`? ``` my_list = [1, 2, 3] ``` This is a `list`: ``` type(my_list) ``` The `array` function from Numpy can make an array from this list: ``` my_array = np.array(my_list) my_array ``` As you can see from the display above, this is an array. We confirm it with `type`: ``` type(my_array) ``` We can also create the list and then the array in one call, like this: ``` my_array = np.array([1, 2, 3]) my_array ``` Here `[1, 2, 3]` is an *expression* that returns a list. `np.array` then operates on the returned list, to create an array. Arrays often contain numbers, but, like lists, they can also contain strings or other types of values. However, a single array can only contain a single kind of data. (It usually doesn't make sense to group together unlike data anyway.) For example, ``` english_parts_of_speech = np.array(["noun", "pronoun", "verb", "adverb", "adjective", "conjunction", "preposition", "interjection"]) english_parts_of_speech ``` We have not seen this yet, but Python allows us to spread expressions between round and square brackets across many lines. It knows that the expression has not finished yet because it is waiting for the closing bracket. For example, this cell works in the exactly the same way as the cell above, and may be easier to read: ``` # An expression between brackets spread across many lines. english_parts_of_speech = np.array( ["noun", "pronoun", "verb", "adverb", "adjective", "conjunction", "preposition", "interjection"] ) english_parts_of_speech ``` Below, we collect four different temperatures into a list called `temps`. These are the [estimated average daily high temperatures](http://berkeleyearth.lbl.gov/regions/global-land) over all land on Earth (in degrees Celsius) for the decades surrounding 1850, 1900, 1950, and 2000, respectively, expressed as deviations from the average absolute high temperature between 1951 and 1980, which was 14.48 degrees. If you are interested, you can get more data from [this file of daily high temperatures](http://berkeleyearth.lbl.gov/auto/Regional/TMAX/Text/global-land-TMAX-Trend.txt). ``` baseline_high = 14.48 highs = np.array([baseline_high - 0.880, baseline_high - 0.093, baseline_high + 0.105, baseline_high + 0.684]) highs ``` ## Calculations with arrays Arrays can be used in arithmetic expressions to compute over their contents. When an array is combined with a single number, that number is combined with each element of the array. Therefore, we can convert all of these temperatures to Fahrenheit by writing the familiar conversion formula. ``` (9/5) * highs + 32 ``` <img src="https://matthew-brett.github.io/cfd2019/images/array_arithmetic.png"> As we saw for strings, arrays have *methods*, which are functions that operate on the array values. The `mean` of a collection of numbers is its average value: the sum divided by the length. Each pair of parentheses in the examples below is part of a call expression; it's calling a function with no arguments to perform a computation on the array called `highs`. 
``` # The number of elements in the array highs.size highs.sum() highs.mean() ``` ## Functions on Arrays Numpy provides various useful functions for operating on arrays. For example, the `diff` function computes the difference between each adjacent pair of elements in an array. The first element of the `diff` is the second element minus the first. ``` np.diff(highs) ``` The [full Numpy reference](http://docs.scipy.org/doc/numpy/reference/) lists these functions exhaustively, but only a small subset are used commonly for data processing applications. These are grouped into different packages within `np`. Learning this vocabulary is an important part of learning the Python language, so refer back to this list often as you work through examples and problems. However, you **don't need to memorize these**. Use this as a reference. Each of these functions takes an array as an argument and returns a single value. | **Function** | Description | |--------------------|----------------------------------------------------------------------| | `np.prod` | Multiply all elements together | | `np.sum` | Add all elements together | | `np.all` | Test whether all elements are true values (non-zero numbers are true)| | `np.any` | Test whether any elements are true values (non-zero numbers are true)| | `np.count_nonzero` | Count the number of non-zero elements | Each of these functions takes an array as an argument and returns an array of values. | **Function** | Description | |--------------------|----------------------------------------------------------------------| | `np.diff` | Difference between adjacent elements | | `np.round` | Round each number to the nearest integer (whole number) | | `np.cumprod` | A cumulative product: for each element, multiply all elements so far | | `np.cumsum` | A cumulative sum: for each element, add all elements so far | | `np.exp` | Exponentiate each element | | `np.log` | Take the natural logarithm of each element | | `np.sqrt` | Take the square root of each element | | `np.sort` | Sort the elements | Each of these functions takes an array of strings and returns an array. | **Function** | **Description** | |---------------------|--------------------------------------------------------------| | `np.char.lower` | Lowercase each element | | `np.char.upper` | Uppercase each element | | `np.char.strip` | Remove spaces at the beginning or end of each element | | `np.char.isalpha` | Whether each element is only letters (no numbers or symbols) | | `np.char.isnumeric` | Whether each element is only numeric (no letters) Each of these functions takes both an array of strings and a *search string*; each returns an array. | **Function** | **Description** | |----------------------|----------------------------------------------------------------------------------| | `np.char.count` | Count the number of times a search string appears among the elements of an array | | `np.char.find` | The position within each element that a search string is found first | | `np.char.rfind` | The position within each element that a search string is found last | | `np.char.startswith` | Whether each element starts with the search string
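To make the reference tables above concrete, here is a short example using a few of the listed functions; the input arrays are arbitrary.

```
import numpy as np

values = np.array([1.0, 4.0, 9.0, 16.0])
print(np.diff(values))      # [3. 5. 7.]        differences between adjacent elements
print(np.cumsum(values))    # [ 1.  5. 14. 30.] running total
print(np.sqrt(values))      # [1. 2. 3. 4.]

words = np.array(["noun", "pronoun", "verb"])
print(np.char.upper(words))              # ['NOUN' 'PRONOUN' 'VERB']
print(np.char.count(words, "noun"))      # [1 1 0]
print(np.char.startswith(words, "pro"))  # [False  True False]
```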
``` import pandas as pd, numpy as np import matplotlib.pyplot as plt %matplotlib inline from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor from sklearn.cross_validation import train_test_split from sklearn.grid_search import GridSearchCV import pandas as pd, numpy as np from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier from sklearn.linear_model import LogisticRegression, LinearRegression from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor from sklearn.naive_bayes import GaussianNB from sklearn.svm import SVC, SVR from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis from sklearn.ensemble import GradientBoostingClassifier from sklearn.cross_validation import train_test_split from gensim.models import word2vec import nltk from scipy import stats from itertools import combinations import pickle import warnings warnings.filterwarnings("ignore") train = pd.read_csv('data_files/train.csv') train.head() train.shape train.dtypes ``` ### cat1 - cat116 are categorical ``` categorical_vars = ['cat{}'.format(i+1) for i in range(116)] for var in categorical_vars: train = pd.get_dummies(train, columns=[var]) train.head() def multi_model_prediction(test_df, models): preds = list() for model in models: preds.append(model.predict(test_df)) return [np.mean(p) for p in np.array(preds).T] # rf = RandomForestRegressor(n_estimators=30, max_depth=10, max_features='sqrt') # lr = LinearRegression() # X, y = train_test_split(train) # rf.fit(X.drop(['loss'], axis=1), X.loss) # lr.fit(X.drop(['loss'], axis=1), X.loss) #preds = multi_model_prediction(y.drop(['loss'], axis=1), [rf, lr]) #np.mean([abs(prediction - loss) for prediction, loss in zip(preds, y.loss)]) # n_sample = 1000 # errors = list() # for _ in range(3): # sample_data = train.sample(n_sample) # X, y = train_test_split(sample_data) # rf = RandomForestRegressor(n_estimators=50, max_depth=10, max_features='sqrt') # rf.fit(X.drop(['loss'], axis=1), X.loss) # lr = LinearRegression() # lr.fit(X.drop(['loss'], axis=1), X.loss) # gbt = GradientBoostingRegressor(n_estimators=50, max_depth=10, max_features='sqrt') # gbt.fit(X.drop(['loss'], axis=1), X.loss) # knn = KNeighborsRegressor(n_neighbors=7) # knn.fit(X.drop(['loss'], axis=1), X.loss) # svr = SVR(kernel='poly', degree=4) # svr.fit(X.drop(['loss'], axis=1), X.loss) # model_list = [rf, lr, gbt, knn, svr] # preds = multi_model_prediction(y.drop(['loss'], axis=1), model_list) # errors.append(np.mean([abs(p - loss) for p, loss in zip(preds, y.loss)])) # np.mean(errors) test = pd.read_csv('data_files/test.csv') for var in categorical_vars: test = pd.get_dummies(test, columns=[var]) test.head() rf = RandomForestRegressor(n_estimators=10, max_depth=10, max_features='sqrt') rf.fit(train.drop(['loss'], axis=1), train.loss) lr = LinearRegression() lr.fit(train.drop(['loss'], axis=1), train.loss) gbt = GradientBoostingRegressor(n_estimators=10, max_depth=10, max_features='sqrt') gbt.fit(train.drop(['loss'], axis=1), train.loss) knn = KNeighborsRegressor(n_neighbors=7) knn.fit(train.drop(['loss'], axis=1), train.loss) svr = SVR(kernel='poly', degree=4) svr.fit(train.drop(['loss'], axis=1), train.loss) model_list = [rf, lr, gbt, knn, svr] test['loss'] = multi_model_prediction(test, model_list) test[['id', 'loss']].head() import csv with open('tate_submission1.csv', 'a') as file: writer = csv.writer(file) writer.writerow(['id', 'loss']) writer.writerows(test[['id', 'loss']].values.tolist()) predictions = rf.predict(y.drop(['loss'], 
axis=1))
np.mean([abs(prediction - loss) for prediction, loss in zip(predictions, y.loss)])

# def mae(estimator, X, y):
#     return np.mean([abs(prediction - value)
#                     for prediction, value in zip(estimator.predict(X), y)])

# param_grid = {'n_estimators': np.arange(50, 251, 50),
#               'max_depth': np.arange(5, 21, 5),
#               'max_features': ['auto', 'sqrt']}
# random_forest = RandomForestRegressor()
# cv = GridSearchCV(random_forest, param_grid, scoring=mae)

#cv.fit(train.drop(['loss'], axis=1), train.loss)
#cv

predictions = cv.predict(y.drop(['loss'], axis=1))
np.mean([abs(prediction - loss) for prediction, loss in zip(predictions, y.loss)])
```
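The commented-out grid search above hand-writes an `mae(estimator, X, y)` scorer. One possible alternative, sketched below, uses scikit-learn's built-in mean-absolute-error scoring via `make_scorer`. Note that the import paths assume a modern scikit-learn (`sklearn.model_selection`), whereas this notebook imports from the older `sklearn.grid_search` and `sklearn.cross_validation` modules; the parameter grid here is also just an example.

```
# Sketch of the grid search using scikit-learn's built-in MAE metric.
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import make_scorer, mean_absolute_error
from sklearn.model_selection import GridSearchCV

param_grid = {'n_estimators': [50, 100, 150],
              'max_depth': [5, 10, 15],
              'max_features': ['sqrt']}

mae_scorer = make_scorer(mean_absolute_error, greater_is_better=False)
cv = GridSearchCV(RandomForestRegressor(), param_grid, scoring=mae_scorer, cv=3)
# cv.fit(train.drop(['loss'], axis=1), train.loss)  # `train` as prepared in the cells above
```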
# Checkpoint 2 ``` # imports. from datetime import datetime import matplotlib.pyplot as plt %matplotlib inline import numpy as np from scipy import integrate from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm import random plt.rcParams['figure.figsize'] = (10, 6) plt.rcParams['font.size'] = 16 # Constants G = 6.67408e-11 # m^3 s^-1 kg^-2 AU = 149.597e9 # m Mearth = 5.9721986e24 # kg Mmars = 6.41693e23 # kg Msun = 1.988435e30 # kg day2sec = 3600 * 24 # seconds in one day ``` ## Initial Conditions Below are the initial positions and velocities for Earth and Mars. ``` # positions and velocities at t=0 (2019/6/2) rs = [[-4.8957151e10, -1.4359284e11, 501896.65], # Earth [-1.1742901e11, 2.1375285e11, 7.3558899e9]] # Mars (units of m) vs = [[27712., -9730., -0.64148], # Earth [-20333., -9601., 300.34]] # Mars (units of m/s) ``` ## Historical Positions Below are historical positions for Earth and Mars at t=-1000 days prior to 2019/6/2. These will be used in tasks 5 and 6. ``` # positions of the planets at (2019/6/2)-1000 days rspast = [[1.44109e11, -4.45267e10, -509142.], # Earth [1.11393e11, -1.77611e11, -6.45385e9]] # Mars ``` ## Earth/Mars functions Below are functions for the equations of motion (the vector of 1st derivtives) for Earth and Mars and for calculating the angle between Earth and Mars. ``` def earth_mars_motion(t, y): """ # order of variables # 0,1,2 rx,ry,rz for Earth # 3,4,5 rx,ry,rz for Mars # 6,7,8 vx,vy,vz for Earth # 9,10,11 vx,vy,vz for Mars # order of derivatives: # 0,1,2 Drx,Dry,Drz for Earth # 3,4,5 Drx,Dry,Drz for Mars # 6,7,8 Dvx,Dvy,Dvz for Earth # 9,10,11 Dvx,Dvy,Dvy for Mars """ rx1,ry1,rz1, rx2,ry2,rz2, vx1,vy1,vz1, vx2,vy2,vz2 = y drx1 = vx1 dry1 = vy1 drz1 = vz1 drx2 = vx2 dry2 = vy2 drz2 = vz2 GMmars = G*Mmars GMearth = G*Mearth GMsun = G*Msun rx12 = rx1 - rx2 ry12 = ry1 - ry2 rz12 = rz1 - rz2 xy12 = np.power(np.power(rx12,2) + 2*np.power(ry12,2),1.5) xyz1 = np.power(np.power(rx1,2) + np.power(ry1,2) + np.power(rz1,2),1.5) xyz2 = np.power(np.power(rx2,2) + np.power(ry2,2) + np.power(rz2,2),1.5) dvx1 = GMmars * rx12 / xy12 - GMsun * rx1 / xyz1 dvy1 = GMmars * ry12 / xy12 - GMsun * ry1 / xyz1 dvz1 = GMmars * rz12 / xy12 - GMsun * rz1 / xyz1 dvx2 = -GMearth * rx12 / xy12 - GMsun * rx2 / xyz2 dvy2 = -GMearth * ry12 / xy12 - GMsun * ry2 / xyz2 dvz2 = -GMearth * rz12 / xy12 - GMsun * rz2 / xyz2 return np.array([drx1,dry1,drz1, drx2,dry2,drz2, dvx1,dvy1,dvz1, dvx2,dvy2,dvz2]) def angle_between_planets(y): """ Input should be same form as the y variable in the earth_mars_motion function. """ r1 = y[0:3] r2 = y[3:6] return np.arccos((r1*r2).sum(axis=0) / np.sqrt((r1*r1).sum(axis=0) * (r2*r2).sum(axis=0))) ``` ## Task 1 Write a code that solves the equations and plots trajectories of Mars and Earth up to some $t_{max}$. The 3D plot should include at least one full orbit for each body. ``` # setting time domain. tmax = 8000*day2sec # 8000 days. dt = 3600 # 1 hour. 
ts = np.arange(0, tmax, dt) trange = (ts[0], ts[-1]) def get_traj(initial_rs, initial_vs): ini = np.append(initial_rs, initial_vs) # initial coordinates of Earth and Mars sol = integrate.solve_ivp(earth_mars_motion, trange, ini, method = 'RK45', t_eval=ts, max_step = 1e6) rx1 = sol.y[0] # x pos of Earth ry1 = sol.y[1] # y pos of Earth rz1 = sol.y[2] # z pos of Earth rx2 = sol.y[3] # x pos of Mars ry2 = sol.y[4] # y pos of Mars rz2 = sol.y[5] # z pos of Mars vx1 = sol.y[6] # x velocity of Earth vy1 = sol.y[7] # y velocity of Earth vz1 = sol.y[8] # z velocity of Earth vx2 = sol.y[9] # x velocity of Mars vy2 = sol.y[10] # y velocity of Mars vz2 = sol.y[11] # z velocity of Mars y = sol.y t = sol.t return y, t def plot_traj(y): # creating 3D figure object fig = plt.figure(figsize = (15,10)) ax = fig.gca(projection='3d') rx1 = y[0] # x pos of Earth ry1 = y[1] # y pos of Earth rz1 = y[2] # z pos of Earth rx2 = y[3] # x pos of Mars ry2 = y[4] # y pos of Mars rz2 = y[5] # z pos of Mars # Plotting Earth and Mars trajectories earth, = ax.plot(rx1, ry1, rz1, c = 'b') earth.set_label('Earth') mars, = ax.plot(rx2, ry2, rz2, c = 'r') mars.set_label('Mars') plt.legend() # Labelling axes ax.xaxis.set_label_text('x position from Sun (m)', fontsize = 12) ax.xaxis.labelpad = 15 ax.yaxis.set_label_text('y position from Sun (m)', fontsize = 12) ax.yaxis.labelpad = 15 ax.zaxis.set_label_text('z position from Sun (m)', fontsize = 12) ax.zaxis.labelpad = 15 plt.title('Trajectory of the Earth and Mars relative to the Sun', fontsize = 22) plt.show() y, t = get_traj(rs, vs) plot_traj(y) ``` ## Task 2 Find the time of the next opposition to $\pm10$ days. Return the time in days from $t_0$ = 2 June 2019. ``` def get_opp_times(solutions, times): thetas = angle_between_planets(solutions) # finding relationship between neighbouring points. thetas_diff = np.diff(thetas) # determining locations of minima. indices = np.where(np.sign(thetas_diff[:-1]) < np.sign(thetas_diff[1:]))[0] + 1 # recording times these minima occur. opp_times = times[indices] return (opp_times[:10]) / 86400 def time_to_next_opposition(): # get trajectories. solutions, times = get_traj(rs, vs) # get opposition times. opp_times = get_opp_times(solutions, times) return opp_times t_opp = time_to_next_opposition() print (f"Next opposition in {t_opp} days.") ``` ## Task 3 Find the times for 10 oppositions in days since 2 June 2019. The results must be accurate to 1 day. Convert this to dates (year/month/day) and print out on the screen. Do not worry if the dates come out different than the actual dates you can find online, it’s supposed to be like that. The `calculate_oppositions` function should return a list of the ten next opposition times after 2 June, 2019. The times should be returned in units of days. You may create additional functions outside this cell that are called by `calculate_oppositions`. 
``` def get_opp_times(solutions, times): thetas = angle_between_planets(solutions) # finding relationship between neighbouring points thetas_diff = np.diff(thetas) # determining locations of minima indices = np.where(np.sign(thetas_diff[:-1]) < np.sign(thetas_diff[1:]))[0] + 1 # recording times these minima occur opp_times = times[indices] return (opp_times[:10]) / 86400 def calculate_oppositions(): # get trajectories solutions, times = get_traj(rs, vs) # get opposition times opp_times = get_opp_times(solutions, times) return opp_times opp_times = calculate_oppositions() opp_times *= day2sec date0 = datetime.fromisoformat('2019-06-02') timestamp0 = datetime.timestamp(date0) for t in opp_times: print(f"t = {t/day2sec:.2f} day: {datetime.fromtimestamp(t+timestamp0)}") ``` ## Task 4 Estimate standard errors of these times assuming that all initial positions and velocities (12 numbers) are normally distributed random numbers with means as specified in the list of parameters, and coefficients of variation (standard deviation divided by the mean) equal to 3x10$^{-5}$. The `estimate_errors` function should return two lists: 1. a list (or array) of the mean opposition times for 10 oppositions 2. a list (or array) of the standard deviation for each time Units should be in days. RUN TIME FOR N = 50 ~ 1min WHEN RAN LOCALLY ``` # gaussian sampling function. def get_sample(arr): arr = np.array(arr) sample = np.random.normal(loc = arr, scale = abs(3e-5*arr)) return sample def estimate_errors(): # number of Monte Carlo simulations. N = 50 sample_space = np.arange(0, tmax, tmax / N) # initialise array to store the opposition times from each simulation. opp_times_arr = np.zeros((N, 10)) # finding opposition times. for i in range(sample_space.size): # varying initial conditions through random sampling of normal dist. ini_r = get_sample(rs) ini_v = get_sample(vs) trajs, times = get_traj(ini_r, ini_v) opp_times_arr[i] = get_opp_times(trajs, times) # analysis. mean_opp_times = np.mean(opp_times_arr, axis = 0) error = np.std(opp_times_arr, axis = 0) return mean_opp_times, error tmean, tstd = estimate_errors() for i in range(10): print(f"{i}: {tmean[i]:.2f} +- {tstd[i]:.2f} days.") ``` ## Task 5 Use historical positions of Earth and Mars (boundary value problem) to improve the accuracy of your prediction. What are the standard errors now? The `estimate_errors_improved` function should return two lists: 1. a list (or array) of the mean opposition times for 10 oppositions 2. a list (or array) of the standard deviation for each time Units should be in days. PLEASE READ BELOW PARAGRAPH. In order to solve task 5, I wanted to solve the boundary value problem from t = -1000 days to t = 0 days. I did this by propagating my solution to task 1 backwards in time to t = -1000 days and then using this solution as my initial guess for the boundary value problem. Once I had the solution from BVP, I was able to obtain the velocities from BVP at t = 0 days. I used these velocities to run my Monte Carlo simulation which consisted of sample from gaussian --> solve ivp (past) --> solve bvp --> solveivp (future) --> get opp_times --> repeat RUNTIME FOR N = 50 ~ 2mins when ran locally ``` def bc(ya, yb): """ :param ya: array of positions and velocities of Earth and Mars at t = -1000 days :param yb: array of positions and velocities of Earth and Mars at t = 0 days """ # converting lists to arrays to have correct dimensions for solve_bvp. 
ra = np.array(rspast) ra = ra.reshape(6) rb = np.array(rs) rb = rb.reshape(6) bc_a = ya[:6] - ra bc_b = yb[:6] - rb return np.append(bc_a, bc_b) def reverse_ivp(initial_rs, initial_vs): # function used to the initial value problem in reverse e.g. propagate backwards in time. ini = np.append(initial_rs, initial_vs) # initial coordinates of Earth and Mars. sol = integrate.solve_ivp(earth_mars_motion, trange_r, ini, method = 'RK45', t_eval=ts_r, max_step = 1e6) rx1 = sol.y[0] # x pos of Earth. ry1 = sol.y[1] # y pos of Earth. rz1 = sol.y[2] # z pos of Earth. rx2 = sol.y[3] # x pos of Mars. ry2 = sol.y[4] # y pos of Mars. rz2 = sol.y[5] # z pos of Mars. vx1 = sol.y[6] # x velocity of Earth. vy1 = sol.y[7] # y velocity of Earth. vz1 = sol.y[8] # z velocity of Earth. vx2 = sol.y[9] # x velocity of Mars. vy2 = sol.y[10] # y velocity of Mars. vz2 = sol.y[11] # z velocity of Mars. y = sol.y t = sol.t return y, t # bvp solver. def get_traj_bvp(t, y, bc): sol = integrate.solve_bvp(earth_mars_motion, bc, t, y, max_nodes = 1e5) y_sol = sol.sol(t) return y_sol def estimate_errors_improved(): ts_r = np.linspace(0, 1000*day2sec, 5000) # reverse time domain. trange_r = (ts_r[0], ts_r[-1]) t_bvp = np.linspace(-1000*day2sec, 0, 5000) # bvp time domain. # reverse velocities to propagate backwards. vs_arr = np.array(vs) y_reverse, t_reverse = reverse_ivp(rs, -1 * vs_arr) y_reverse = np.flip(y_reverse, axis = 1) # flipping along the columns. y = y_reverse # setting initial guess for BVP to solution from reverse IVP. # using velocities obtained from BVP to run simulations again. y_bvp = get_traj_bvp(t_bvp, y, bc) new_vs = y_bvp[6:,-1] # number of Monte Carlo simulations. N = 50 sample_space = np.arange(0, tmax, tmax / N) # initialise array to store the opposition times from each simulation. opp_times_arr = np.zeros((N, 10)) for i in range(sample_space.size): # varying initial conditions. new_rs = get_sample(rs) new_vs = get_sample(new_vs) y_reverse, t_reverse = reverse_ivp(new_rs, -1 * new_vs) y_reverse = np.flip(y_reverse, axis = 1) # flipping along the columns. y = y_reverse y_bvp = get_traj_bvp(t_bvp, y, bc) # updating velocity for propagation into the future. new_vs = y_bvp[6:,-1] new_trajs, new_times = get_traj(new_rs, new_vs) opp_times_arr[i] = get_opp_times(new_trajs, new_times) # analysis. mean_opp_times = np.mean(opp_times_arr, axis = 0) error = np.std(opp_times_arr, axis = 0) return mean_opp_times, error tmean, tstd = estimate_errors_improved() for i in range(10): print(f"{i}: {tmean[i]:.2f} +- {tstd[i]:.2f} days.") ``` ## Task 6 Using the methods from Task 5, is there a better time point in the last 1000 days to get historical data for increasing the accuracy? Find such time t in the past 1000 days (-1000<$t$<0 days, where $t$=0 corresponds to 2 June 2019) which would yield a maximum error (std. deviation) of less than 0.2 days for each of the 10 oppositions. $t$ should be a negative number, accurate to +/- 50 days. The code for task 6 can take any form you like. ``` # Remove the line that says "raise NotImplementedError" # YOUR CODE HERE raise NotImplementedError() ```
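The Task 6 cell above is intentionally left as a stub. Purely as an outline of one possible search strategy, and not as the notebook's solution: sweep candidate historical times in 50-day steps and keep the first one whose worst opposition-time standard deviation falls below 0.2 days. The helper `estimate_errors_for_time` is hypothetical; it stands for a variant of `estimate_errors_improved()` that accepts the time of the historical data as a parameter.

```
import numpy as np

def estimate_errors_for_time(t_past_days):
    """Hypothetical variant of estimate_errors_improved() that would use historical
    positions taken at t_past_days (instead of the fixed -1000 days).
    Should return (mean opposition times, standard deviations) in days."""
    raise NotImplementedError

candidate_times = np.arange(-1000, 0, 50)      # days, with t = 0 at 2 June 2019
best_t = None
for t_past in candidate_times:
    tmean, tstd = estimate_errors_for_time(t_past)
    if np.max(tstd) < 0.2:                     # all ten oppositions within 0.2 days
        best_t = t_past
        break
print(f"first time point meeting the 0.2-day criterion: t = {best_t} days")
```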
# Week 05, Part 1 ### Topic 1. For reference: about plotting polygons in R (we won't go over this, but it serves as reference) 1. Plotting normal distributions 1. Example: Manufacturing rulers 1. BACK TO SLIDES FOR PERCENTILES ``` # resize require(repr) options(repr.plot.width=8, repr.plot.height=8) ``` ## 1. Intro to polygons in R Now we'll go over some useful functions associated with drawing normal distributions. First, a little intro to sequences and polygons: ``` x = seq(-3,3, length=10) print(x) ``` Now, let's try to understand the `polygon` function. We'll use this to help us draw areas using the `plot_polygons.R` script, but let's look at a few `polygon` examples. Let's make a triangle -- say the triangle goes from -3 to +3 in x & 0-1 in y: ``` plot(NULL,xlim=c(-3,3),ylim=c(0,1)) # sets up axes xvertices = c(-3, 0, 3) yvertices = c(0, 1, 0) polygon(xvertices, yvertices,col="red") # plots on top of previous plot ``` Let's try overplotting a little rectangle at x = (-1,1), y = (0.4,0.6): ``` # set up empty axis plot(NULL,xlim=c(-3,3),ylim=c(0,1)) # sets up axes # red triangle xvertices = c(-3, 0, 3) yvertices = c(0, 1, 0) polygon(xvertices, yvertices,col="red") # plots on top of previous plot # blue rectangle xvertices = c(-1, -1, 1, 1) yvertices = c(0.4, 0.6, 0.6, 0.4) polygon(xvertices, yvertices, col="blue") ``` Essentially, polygon just fills in between a list of vertices we give it. We can use this to plot underneath our normal distributions. This will help us get a "feel" for how much of the graph is related to our measurement of interest. ## 2. Plotting normal distributions Now, let's build some tools we will need to examine normal distributions. (1) Let's plot them using `dnorm`. First, let's start by plotting a normal distribution: ``` help(dnorm) x=seq(-3,3,length=200) y=dnorm(x, mean=0, sd=1) plot(x,y) ``` Let's make a slightly fancier plot: ``` x = seq(-3,3,length=200) # plotting normal dist. -3,3 SD y1 = dnorm(x, mean=0, sd=1) plot(x,y1, type='l', ylim=c(0,2), ylab='Normal Distributions') ``` Overplot a few other normal distributions: ``` # orig plot x = seq(-3,3,length=200) # plotting normal dist. -3,3 SD y1 = dnorm(x, mean=0, sd=1) plot(x,y1, type='l', ylim=c(0,2), ylab='Normal Distributions') # other distribution y2 = dnorm(x, mean=0, sd=0.5) par(new=TRUE) # for overplotting plot(x, y2, type='l', col='red', ylim=c(0,2), ylab="") ``` Let's add to this by visualizing a Z-score and actually calculating it as well. We'll go back to just one normal distribution. Z-scores: remember, this is a measure of how "far off" a score is from the mean. So first, as always, let's plot! ``` x = seq(-6,6,length=200) mean_dist = 1.0 sd_dist = 0.5 ``` Note here: I'm calling the dnorm function directly in the "y" data position of the plot call, instead of doing "y = dnorm...". It's just us being fancy :) ``` plot(x,dnorm(x,mean=mean_dist,sd=sd_dist),ylim=c(0,1.0), type='l') ``` Let's say I want the Z-score for x=2.5 - i.e. given this normal distribution, if I pick out an observation that is at the value of 2.5, how far off from the mean is it? First of course, let's plot! ``` plot(x,dnorm(x,mean=mean_dist,sd=sd_dist),ylim=c(0,1.0), type='l') abline(v=2.5,col="red") ``` We can see already that it's pretty far off from the mean here $\rightarrow$ if we compare, by eye, the area to the right of this line (the little tail) to the area to the left, the tail is very small - so we expect our Z-score to be pretty big!
Now let's actually calculate. Recall: $Z_{score} = \frac{observation - mean}{SD}$ ``` Zscore = (2.5 - mean_dist)/sd_dist print(Zscore) ``` This is saying our measurement of 2.5 is 3 times bigger than the standard deviation of our normal distribution. So pretty gosh-darn big! Now, let's say I've got a 2nd distribution with mean = 0.5 and sd = 2; is the $Z_{score}$ at x=2.5 higher or lower than the first one? As always, let's start by plotting: ``` # old plot plot(x,dnorm(x,mean=mean_dist,sd=sd_dist),ylim=c(0,1.0), type='l') abline(v=2.5,col="red") # 2nd distribution mean_dist2 = 0.5 sd_dist2 = 2.0 par(new=TRUE) # overplot on our original axis plot(x,dnorm(x,mean=mean_dist2,sd=sd_dist2),col="blue",ylim=c(0,1.0), type='l') ``` By eye we can see that the red line falls at a higher y-value on the blue, 2nd distribution. This tells us that at x=2.5 we are closer to the mean of the 2nd distribution, so we expect a lower $Z_{score}$ - but let's find out! ``` Zscore2 = (2.5-mean_dist2)/sd_dist2 print(Zscore2) ``` Indeed 1 < 3 - in our 2nd distribution, an observation of x=2.5 is only 1 SD from the mean. $Z_{scores}$ allow us to, in a sense, "normalize" each normal distribution and so allow for comparisons between normal distributions with different means & SDs. For example, if these distributions were measuring a test, then a student that scored a 2.5 on both would have done better relative to the overall class distribution on the first test. ### 3.A Example: Manufacturing rulers I am a manufacturer of rulers. My rulers should be 10cm long, but I am having issues: 1. On Run #1 I get rulers with a mean of 11cm and an SD of 2.0cm. 1. On Run #2 I get rulers with a mean of 10cm and an SD of 4.0cm. Q1: Which is the better run of my manufacturing equipment? *Note: there could be differing answers!* Think on this for a bit! Q2: in each run, pull out a ruler to see how far off it is. In both runs, I pull out a 9cm ruler - how unusual is it for me to pull out a ruler of this size? 1. Make a plot showing this & guess using the plot, 1. Then, calculate with a Z-score and say for sure. #### ANS 1: ``` options(repr.plot.width=8, repr.plot.height=5) # nicer plotting window #Plot: Run # 1: "mean of 11cm and an SD of 2.0cm" x = seq(5,15,length=200) plot(x,dnorm(x,mean=11,sd=2), type='l', ylim=c(0,0.2)) # further out for run #1 par(new=TRUE) # to overplot #Plot: Run # 2: "mean of 10 cm and an SD of 4.0 cm" plot(x,dnorm(x,mean=10,sd=4),col="blue", type='l', ylim=c(0,0.2)) # Our observation, a 9cm ruler: abline(v=9.0,col="red") # To remind us what is what: legend("topright", c("Run 1", "Run 2"), col=c("black","blue"), lw=1) ``` By eye, it looks like in run 1 (black) we are further from the mean (11cm) than for run 2 (blue). So this means that it is more unusual to get this 9cm ruler in run 1 than in run 2. But let's do the calculation to be sure: ``` Z1 = (9.0-11)/2.0 # -1.0 Z2 = (9.0-10)/4.0 # -0.25 print(c("Run 1", "Run 2")) print(c(Z1,Z2)) ``` Here -1.0 < -0.25, so run 1 is MORE SDs from the mean even though it's negative! ## BACK TO SLIDES FOR PERCENTILES
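As a numeric cross-check of the ruler Z-scores, here is a minimal sketch in Python rather than R (assuming `scipy` is installed); it reproduces the same arithmetic and also reports the left-tail area under each run's normal curve, which is the "how unusual is a 9cm ruler" question in probability form.

```python
# Minimal cross-check of the ruler example, assuming scipy is available.
from scipy.stats import norm

def zscore(x, mean, sd):
    # Z = (observation - mean) / SD
    return (x - mean) / sd

# Run 1: mean 11 cm, SD 2 cm; Run 2: mean 10 cm, SD 4 cm; observation: a 9 cm ruler
for label, mean, sd in [("Run 1", 11.0, 2.0), ("Run 2", 10.0, 4.0)]:
    z = zscore(9.0, mean, sd)
    left_tail = norm.cdf(9.0, loc=mean, scale=sd)  # area to the left of 9 cm
    print(f"{label}: Z = {z:+.2f}, P(X <= 9 cm) = {left_tail:.3f}")
```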
github_jupyter
``` import os,requests,json; from ipywidgets import IntProgress,HTML,VBox; from IPython.display import display; r = requests.get( 'http://dz_gs:8080/geoserver/rest/settings/contact' ,auth=('admin','nhdplus') ); if r.status_code != 200 or r.json()["contact"]["contactOrganization"] != 'NHDPlusInABox': raise Exception('geoserver does not appear ready for configuration'); r = requests.get( 'http://dz_gs:8080/geoserver/rest/workspaces' ,auth=('admin','nhdplus') ); boo_check = False; for item in r.json()["workspaces"]["workspace"]: if item["name"] == "nhdplus": boo_check = True; if not boo_check: r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces' ,headers={'Content-Type':'application/json'} ,params={'default':True} ,data=json.dumps({'workspace':{'name':'nhdplus'}}) ,auth=('admin','nhdplus') ); r = requests.get( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores' ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('datastores get failed'); boo_check = False; if r.json()["dataStores"] != "": for item in r.json()["dataStores"]["dataStore"]: if item["name"] == "dzpg_nhdplus": boo_check = True; if not boo_check: payload = { "dataStore": { "name": "dzpg_nhdplus" ,"connectionParameters": { "entry": [ {"@key":"host" ,"$":"dz_pg"} ,{"@key":"port" ,"$":"5432"} ,{"@key":"database","$":"nhdplus"} ,{"@key":"user" ,"$":"nhdplus"} ,{"@key":"passwd" ,"$":os.environ['POSTGRES_PASSWORD']} ,{"@key":"dbtype" ,"$":"postgis"} ,{"@key":"schema" ,"$":"nhdplus"} ] } } } r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores' ,headers={'Content-Type':'application/json'} ,data=json.dumps(payload) ,auth=('admin','nhdplus') ); r = requests.get( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles' ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('styles get failed'); sty = []; if r.json()["styles"] != "": for item in r.json()["styles"]["style"]: sty.append(item["name"]); if 'catchment_polygon' not in sty: payload = """<?xml version="1.0" encoding="UTF-8"?> <StyledLayerDescriptor version="1.0.0" xsi:schemaLocation="http://www.opengis.net/sld StyledLayerDescriptor.xsd" xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <NamedLayer> <Name>catchment_polygon</Name> <UserStyle> <FeatureTypeStyle> <Rule> <Name>Viewable</Name> <MaxScaleDenominator>288896</MaxScaleDenominator> <PolygonSymbolizer> <Fill> <CssParameter name="fill">#AAAAAA</CssParameter> <CssParameter name="fill-opacity">0</CssParameter> </Fill> <Stroke> <CssParameter name="stroke">#E67000</CssParameter> <CssParameter name="stroke-width">1.5</CssParameter> </Stroke> </PolygonSymbolizer> </Rule> </FeatureTypeStyle> </UserStyle> </NamedLayer> </StyledLayerDescriptor>"""; r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles' ,headers={'Content-Type':'application/vnd.ogc.sld+xml'} ,params={'name':'catchment_polygon'} ,data=payload ,auth=('admin','nhdplus') ); if 'wbd_polygon' not in sty: payload = """<?xml version="1.0" encoding="UTF-8"?> <StyledLayerDescriptor version="1.0.0" xsi:schemaLocation="http://www.opengis.net/sld StyledLayerDescriptor.xsd" xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <NamedLayer> <Name>wbd_polygon</Name> <UserStyle> <FeatureTypeStyle> <Rule> <Name>Viewable</Name> 
<MaxScaleDenominator>288896</MaxScaleDenominator> <PolygonSymbolizer> <Fill> <CssParameter name="fill">#AAAAAA</CssParameter> <CssParameter name="fill-opacity">0</CssParameter> </Fill> <Stroke> <CssParameter name="stroke">#C500FF</CssParameter> <CssParameter name="stroke-width">1.5</CssParameter> </Stroke> </PolygonSymbolizer> </Rule> </FeatureTypeStyle> </UserStyle> </NamedLayer> </StyledLayerDescriptor>"""; r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles' ,headers={'Content-Type':'application/vnd.ogc.sld+xml'} ,params={'name':'wbd_polygon'} ,data=payload ,auth=('admin','nhdplus') ); if 'wbd2_polygon' not in sty: payload = """<?xml version="1.0" encoding="UTF-8"?> <StyledLayerDescriptor version="1.0.0" xsi:schemaLocation="http://www.opengis.net/sld StyledLayerDescriptor.xsd" xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <NamedLayer> <Name>wbd2_polygon</Name> <UserStyle> <FeatureTypeStyle> <Rule> <Name>Viewable</Name> <PolygonSymbolizer> <Fill> <CssParameter name="fill">#AAAAAA</CssParameter> <CssParameter name="fill-opacity">0</CssParameter> </Fill> <Stroke> <CssParameter name="stroke">#C500FF</CssParameter> <CssParameter name="stroke-width">1.5</CssParameter> </Stroke> </PolygonSymbolizer> </Rule> </FeatureTypeStyle> </UserStyle> </NamedLayer> </StyledLayerDescriptor>"""; r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles' ,headers={'Content-Type':'application/vnd.ogc.sld+xml'} ,params={'name':'wbd2_polygon'} ,data=payload ,auth=('admin','nhdplus') ); if 'nhdwaterbody_polygon' not in sty: payload = """<?xml version="1.0" encoding="UTF-8"?> <StyledLayerDescriptor version="1.0.0" xsi:schemaLocation="http://www.opengis.net/sld StyledLayerDescriptor.xsd" xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <NamedLayer> <Name>nhdwaterbody_polygon</Name> <UserStyle> <FeatureTypeStyle> <Rule> <Name>Viewable</Name> <MaxScaleDenominator>288896</MaxScaleDenominator> <PolygonSymbolizer> <Fill> <CssParameter name="fill">#97DBF2</CssParameter> <CssParameter name="fill-opacity">1</CssParameter> </Fill> <Stroke> <CssParameter name="stroke">#AAAAAA</CssParameter> <CssParameter name="stroke-width">0</CssParameter> </Stroke> </PolygonSymbolizer> </Rule> </FeatureTypeStyle> </UserStyle> </NamedLayer> </StyledLayerDescriptor>"""; r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles' ,headers={'Content-Type':'application/vnd.ogc.sld+xml'} ,params={'name':'nhdwaterbody_polygon'} ,data=payload ,auth=('admin','nhdplus') ); if 'nhdarea_polygon' not in sty: payload = """<?xml version="1.0" encoding="UTF-8"?> <StyledLayerDescriptor version="1.0.0" xsi:schemaLocation="http://www.opengis.net/sld StyledLayerDescriptor.xsd" xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <NamedLayer> <Name>nhdarea_polygon</Name> <UserStyle> <FeatureTypeStyle> <Rule> <Name>Viewable</Name> <MaxScaleDenominator>288896</MaxScaleDenominator> <PolygonSymbolizer> <Fill> <CssParameter name="fill">#70D0F8</CssParameter> <CssParameter name="fill-opacity">1</CssParameter> </Fill> <Stroke> <CssParameter name="stroke">#AAAAAA</CssParameter> <CssParameter name="stroke-width">0</CssParameter> </Stroke> 
</PolygonSymbolizer> </Rule> </FeatureTypeStyle> </UserStyle> </NamedLayer> </StyledLayerDescriptor>"""; r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles' ,headers={'Content-Type':'application/vnd.ogc.sld+xml'} ,params={'name':'nhdarea_polygon'} ,data=payload ,auth=('admin','nhdplus') ); if 'nhdflowline_line' not in sty: payload = """<?xml version="1.0" encoding="UTF-8"?> <StyledLayerDescriptor version="1.0.0" xsi:schemaLocation="http://www.opengis.net/sld StyledLayerDescriptor.xsd" xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <NamedLayer> <Name>nhdflowline_line</Name> <UserStyle> <FeatureTypeStyle> <Rule> <Name>Viewable</Name> <MaxScaleDenominator>288896</MaxScaleDenominator> <LineSymbolizer> <Stroke> <CssParameter name="stroke">#0000FF</CssParameter> <CssParameter name="stroke-width">1</CssParameter> </Stroke> </LineSymbolizer> </Rule> </FeatureTypeStyle> </UserStyle> </NamedLayer> </StyledLayerDescriptor>"""; r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles' ,headers={'Content-Type':'application/vnd.ogc.sld+xml'} ,params={'name':'nhdflowline_line'} ,data=payload ,auth=('admin','nhdplus') ); if 'nhdline_line' not in sty: payload = """<?xml version="1.0" encoding="UTF-8"?> <StyledLayerDescriptor version="1.0.0" xsi:schemaLocation="http://www.opengis.net/sld StyledLayerDescriptor.xsd" xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <NamedLayer> <Name>nhdline_line</Name> <UserStyle> <FeatureTypeStyle> <Rule> <Name>Viewable</Name> <MaxScaleDenominator>288896</MaxScaleDenominator> <LineSymbolizer> <Stroke> <CssParameter name="stroke">#0000FF</CssParameter> <CssParameter name="stroke-width">1</CssParameter> </Stroke> </LineSymbolizer> </Rule> </FeatureTypeStyle> </UserStyle> </NamedLayer> </StyledLayerDescriptor>"""; r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles' ,headers={'Content-Type':'application/vnd.ogc.sld+xml'} ,params={'name':'nhdline_line'} ,data=payload ,auth=('admin','nhdplus') ); if 'nhdpoint_point' not in sty: payload = """<?xml version="1.0" encoding="UTF-8"?> <StyledLayerDescriptor version="1.0.0" xsi:schemaLocation="http://www.opengis.net/sld StyledLayerDescriptor.xsd" xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <NamedLayer> <Name>nhdpoint_point</Name> <UserStyle> <FeatureTypeStyle> <Rule> <Name>Viewable</Name> <MaxScaleDenominator>288896</MaxScaleDenominator> <PointSymbolizer> <Graphic> <Mark> <WellKnownName>square</WellKnownName> <Fill> <CssParameter name="fill">#0000FF</CssParameter> </Fill> </Mark> <Size>6</Size> </Graphic> </PointSymbolizer> </Rule> </FeatureTypeStyle> </UserStyle> </NamedLayer> </StyledLayerDescriptor>"""; r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/styles' ,headers={'Content-Type':'application/vnd.ogc.sld+xml'} ,params={'name':'nhdpoint_point'} ,data=payload ,auth=('admin','nhdplus') ); r = requests.get( 'http://dz_gs:8080/geoserver/rest/layers' ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('layers get failed'); lyr = []; if r.json()["layers"] != "": for item in r.json()["layers"]["layer"]: lyr.append(item["name"]); if 
'nhdplus:catchment_np21' not in lyr: payload = { "featureType":{ "name":"catchment_np21" ,"nativeName":"catchment_np21" ,"namespace":{ "name":"nhdplus" } ,"title":"catchment_np21" ,"nativeCRS":"EPSG:4269" ,"srs":"EPSG:4269" ,"projectionPolicy":"FORCE_DECLARED" ,"enabled": True ,"store":{ "@class":"dataStore" ,"name":"nhdplus:dzpg_nhdplus" } ,"maxFeatures":0 ,"numDecimals":0 ,"overridingServiceSRS": False ,"skipNumberMatched": False ,"circularArcPresent": False } } r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes' ,headers={'Content-Type':'application/json'} ,data=json.dumps(payload) ,auth=('admin','nhdplus') ); payload = """ <layer> <defaultStyle> <name>catchment_polygon</name> </defaultStyle> </layer>""" r = requests.put( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/catchment_np21' ,headers={'Content-Type':'text/xml'} ,data=payload ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('layer alteration failed <' + r.status_code + '>'); if 'nhdplus:catchmentsp_np21' not in lyr: payload = { "featureType":{ "name":"catchmentsp_np21" ,"nativeName":"catchmentsp_np21" ,"namespace":{ "name":"nhdplus" } ,"title":"catchmentsp_np21" ,"nativeCRS":"EPSG:4269" ,"srs":"EPSG:4269" ,"projectionPolicy":"FORCE_DECLARED" ,"enabled": True ,"store":{ "@class":"dataStore" ,"name":"nhdplus:dzpg_nhdplus" } ,"maxFeatures":0 ,"numDecimals":0 ,"overridingServiceSRS": False ,"skipNumberMatched": False ,"circularArcPresent": False } } r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes' ,headers={'Content-Type':'application/json'} ,data=json.dumps(payload) ,auth=('admin','nhdplus') ); if r.status_code != 201: raise Exception('layer creation failed'); payload = """ <layer> <defaultStyle> <name>catchment_polygon</name> </defaultStyle> </layer>""" r = requests.put( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/catchmentsp_np21' ,headers={'Content-Type':'text/xml'} ,data=payload ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('layer alteration failed <' + r.status_code + '>'); if 'nhdplus:wbd_hu2_np21' not in lyr: payload = { "featureType":{ "name":"wbd_hu2_np21" ,"nativeName":"wbd_hu2_np21" ,"namespace":{ "name":"nhdplus" } ,"title":"wbd_hu2_np21" ,"nativeCRS":"EPSG:4269" ,"srs":"EPSG:4269" ,"projectionPolicy":"FORCE_DECLARED" ,"enabled": True ,"store":{ "@class":"dataStore" ,"name":"nhdplus:dzpg_nhdplus" } ,"maxFeatures":0 ,"numDecimals":0 ,"overridingServiceSRS": False ,"skipNumberMatched": False ,"circularArcPresent": False } } r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes' ,headers={'Content-Type':'application/json'} ,data=json.dumps(payload) ,auth=('admin','nhdplus') ); if r.status_code != 201: raise Exception('layer creation failed'); payload = """ <layer> <defaultStyle> <name>wbd2_polygon</name> </defaultStyle> </layer>""" r = requests.put( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/wbd_hu2_np21' ,headers={'Content-Type':'text/xml'} ,data=payload ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('layer alteration failed <' + r.status_code + '>'); if 'nhdplus:wbd_hu4_np21' not in lyr: payload = { "featureType":{ "name":"wbd_hu4_np21" ,"nativeName":"wbd_hu4_np21" ,"namespace":{ "name":"nhdplus" } ,"title":"wbd_hu4_np21" ,"nativeCRS":"EPSG:4269" ,"srs":"EPSG:4269" ,"projectionPolicy":"FORCE_DECLARED" ,"enabled": True ,"store":{ 
"@class":"dataStore" ,"name":"nhdplus:dzpg_nhdplus" } ,"maxFeatures":0 ,"numDecimals":0 ,"overridingServiceSRS": False ,"skipNumberMatched": False ,"circularArcPresent": False } } r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes' ,headers={'Content-Type':'application/json'} ,data=json.dumps(payload) ,auth=('admin','nhdplus') ); if r.status_code != 201: raise Exception('layer creation failed'); payload = """ <layer> <defaultStyle> <name>wbd_polygon</name> </defaultStyle> </layer>""" r = requests.put( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/wbd_hu4_np21' ,headers={'Content-Type':'text/xml'} ,data=payload ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('layer alteration failed <' + r.status_code + '>'); if 'nhdplus:wbd_hu6_np21' not in lyr: payload = { "featureType":{ "name":"wbd_hu6_np21" ,"nativeName":"wbd_hu6_np21" ,"namespace":{ "name":"nhdplus" } ,"title":"wbd_hu6_np21" ,"nativeCRS":"EPSG:4269" ,"srs":"EPSG:4269" ,"projectionPolicy":"FORCE_DECLARED" ,"enabled": True ,"store":{ "@class":"dataStore" ,"name":"nhdplus:dzpg_nhdplus" } ,"maxFeatures":0 ,"numDecimals":0 ,"overridingServiceSRS": False ,"skipNumberMatched": False ,"circularArcPresent": False } } r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes' ,headers={'Content-Type':'application/json'} ,data=json.dumps(payload) ,auth=('admin','nhdplus') ); if r.status_code != 201: raise Exception('layer creation failed'); payload = """ <layer> <defaultStyle> <name>wbd_polygon</name> </defaultStyle> </layer>""" r = requests.put( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/wbd_hu6_np21' ,headers={'Content-Type':'text/xml'} ,data=payload ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('layer alteration failed <' + r.status_code + '>'); if 'nhdplus:wbd_hu8_np21' not in lyr: payload = { "featureType":{ "name":"wbd_hu8_np21" ,"nativeName":"wbd_hu8_np21" ,"namespace":{ "name":"nhdplus" } ,"title":"wbd_hu8_np21" ,"nativeCRS":"EPSG:4269" ,"srs":"EPSG:4269" ,"projectionPolicy":"FORCE_DECLARED" ,"enabled": True ,"store":{ "@class":"dataStore" ,"name":"nhdplus:dzpg_nhdplus" } ,"maxFeatures":0 ,"numDecimals":0 ,"overridingServiceSRS": False ,"skipNumberMatched": False ,"circularArcPresent": False } } r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes' ,headers={'Content-Type':'application/json'} ,data=json.dumps(payload) ,auth=('admin','nhdplus') ); if r.status_code != 201: raise Exception('layer creation failed'); payload = """ <layer> <defaultStyle> <name>wbd_polygon</name> </defaultStyle> </layer>""" r = requests.put( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/wbd_hu8_np21' ,headers={'Content-Type':'text/xml'} ,data=payload ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('layer alteration failed <' + r.status_code + '>'); if 'nhdplus:wbd_hu10_np21' not in lyr: payload = { "featureType":{ "name":"wbd_hu10_np21" ,"nativeName":"wbd_hu10_np21" ,"namespace":{ "name":"nhdplus" } ,"title":"wbd_hu10_np21" ,"nativeCRS":"EPSG:4269" ,"srs":"EPSG:4269" ,"projectionPolicy":"FORCE_DECLARED" ,"enabled": True ,"store":{ "@class":"dataStore" ,"name":"nhdplus:dzpg_nhdplus" } ,"maxFeatures":0 ,"numDecimals":0 ,"overridingServiceSRS": False ,"skipNumberMatched": False ,"circularArcPresent": False } } r = requests.post( 
'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes' ,headers={'Content-Type':'application/json'} ,data=json.dumps(payload) ,auth=('admin','nhdplus') ); if r.status_code != 201: raise Exception('layer creation failed'); payload = """ <layer> <defaultStyle> <name>wbd_polygon</name> </defaultStyle> </layer>""" r = requests.put( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/wbd_hu10_np21' ,headers={'Content-Type':'text/xml'} ,data=payload ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('layer alteration failed <' + r.status_code + '>'); if 'nhdplus:wbd_hu12_np21' not in lyr: payload = { "featureType":{ "name":"wbd_hu12_np21" ,"nativeName":"wbd_hu12_np21" ,"namespace":{ "name":"nhdplus" } ,"title":"wbd_hu12_np21" ,"nativeCRS":"EPSG:4269" ,"srs":"EPSG:4269" ,"projectionPolicy":"FORCE_DECLARED" ,"enabled": True ,"store":{ "@class":"dataStore" ,"name":"nhdplus:dzpg_nhdplus" } ,"maxFeatures":0 ,"numDecimals":0 ,"overridingServiceSRS": False ,"skipNumberMatched": False ,"circularArcPresent": False } } r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes' ,headers={'Content-Type':'application/json'} ,data=json.dumps(payload) ,auth=('admin','nhdplus') ); if r.status_code != 201: raise Exception('layer creation failed'); payload = """ <layer> <defaultStyle> <name>wbd_polygon</name> </defaultStyle> </layer>""" r = requests.put( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/wbd_hu12_np21' ,headers={'Content-Type':'text/xml'} ,data=payload ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('layer alteration failed <' + r.status_code + '>'); if 'nhdplus:nhdwaterbody_np21' not in lyr: payload = { "featureType":{ "name":"nhdwaterbody_np21" ,"nativeName":"nhdwaterbody_np21" ,"namespace":{ "name":"nhdplus" } ,"title":"nhdwaterbody_np21" ,"nativeCRS":"EPSG:4269" ,"srs":"EPSG:4269" ,"projectionPolicy":"FORCE_DECLARED" ,"enabled": True ,"store":{ "@class":"dataStore" ,"name":"nhdplus:dzpg_nhdplus" } ,"maxFeatures":0 ,"numDecimals":0 ,"overridingServiceSRS": False ,"skipNumberMatched": False ,"circularArcPresent": False } } r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes' ,headers={'Content-Type':'application/json'} ,data=json.dumps(payload) ,auth=('admin','nhdplus') ); if r.status_code != 201: raise Exception('layer creation failed'); payload = """ <layer> <defaultStyle> <name>nhdwaterbody_polygon</name> </defaultStyle> </layer>""" r = requests.put( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/nhdwaterbody_np21' ,headers={'Content-Type':'text/xml'} ,data=payload ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('layer alteration failed <' + r.status_code + '>'); if 'nhdplus:nhdarea_np21' not in lyr: payload = { "featureType":{ "name":"nhdarea_np21" ,"nativeName":"nhdarea_np21" ,"namespace":{ "name":"nhdplus" } ,"title":"nhdarea_np21" ,"nativeCRS":"EPSG:4269" ,"srs":"EPSG:4269" ,"projectionPolicy":"FORCE_DECLARED" ,"enabled": True ,"store":{ "@class":"dataStore" ,"name":"nhdplus:dzpg_nhdplus" } ,"maxFeatures":0 ,"numDecimals":0 ,"overridingServiceSRS": False ,"skipNumberMatched": False ,"circularArcPresent": False } } r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes' ,headers={'Content-Type':'application/json'} ,data=json.dumps(payload) ,auth=('admin','nhdplus') ); if r.status_code != 201: raise 
Exception('layer creation failed'); payload = """ <layer> <defaultStyle> <name>nhdarea_polygon</name> </defaultStyle> </layer>""" r = requests.put( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/nhdarea_np21' ,headers={'Content-Type':'text/xml'} ,data=payload ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('layer alteration failed <' + r.status_code + '>'); if 'nhdplus:nhdflowline_np21' not in lyr: payload = { "featureType":{ "name":"nhdflowline_np21" ,"nativeName":"nhdflowline_np21" ,"namespace":{ "name":"nhdplus" } ,"title":"nhdflowline_np21" ,"nativeCRS":"EPSG:4269" ,"srs":"EPSG:4269" ,"projectionPolicy":"FORCE_DECLARED" ,"enabled": True ,"store":{ "@class":"dataStore" ,"name":"nhdplus:dzpg_nhdplus" } ,"maxFeatures":0 ,"numDecimals":0 ,"overridingServiceSRS": False ,"skipNumberMatched": False ,"circularArcPresent": False } } r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes' ,headers={'Content-Type':'application/json'} ,data=json.dumps(payload) ,auth=('admin','nhdplus') ); if r.status_code != 201: raise Exception('layer creation failed'); payload = """ <layer> <defaultStyle> <name>nhdflowline_line</name> </defaultStyle> </layer>""" r = requests.put( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/nhdflowline_np21' ,headers={'Content-Type':'text/xml'} ,data=payload ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('layer alteration failed <' + r.status_code + '>'); if 'nhdplus:nhdline_np21' not in lyr: payload = { "featureType":{ "name":"nhdline_np21" ,"nativeName":"nhdline_np21" ,"namespace":{ "name":"nhdplus" } ,"title":"nhdline_np21" ,"nativeCRS":"EPSG:4269" ,"srs":"EPSG:4269" ,"projectionPolicy":"FORCE_DECLARED" ,"enabled": True ,"store":{ "@class":"dataStore" ,"name":"nhdplus:dzpg_nhdplus" } ,"maxFeatures":0 ,"numDecimals":0 ,"overridingServiceSRS": False ,"skipNumberMatched": False ,"circularArcPresent": False } } r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes' ,headers={'Content-Type':'application/json'} ,data=json.dumps(payload) ,auth=('admin','nhdplus') ); if r.status_code != 201: raise Exception('layer creation failed'); payload = """ <layer> <defaultStyle> <name>nhdline_line</name> </defaultStyle> </layer>""" r = requests.put( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/nhdline_np21' ,headers={'Content-Type':'text/xml'} ,data=payload ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('layer alteration failed <' + r.status_code + '>'); if 'nhdplus:nhdpoint_np21' not in lyr: payload = { "featureType":{ "name":"nhdpoint_np21" ,"nativeName":"nhdpoint_np21" ,"namespace":{ "name":"nhdplus" } ,"title":"nhdpoint_np21" ,"nativeCRS":"EPSG:4269" ,"srs":"EPSG:4269" ,"projectionPolicy":"FORCE_DECLARED" ,"enabled": True ,"store":{ "@class":"dataStore" ,"name":"nhdplus:dzpg_nhdplus" } ,"maxFeatures":0 ,"numDecimals":0 ,"overridingServiceSRS": False ,"skipNumberMatched": False ,"circularArcPresent": False } } r = requests.post( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/datastores/dzpg_nhdplus/featuretypes' ,headers={'Content-Type':'application/json'} ,data=json.dumps(payload) ,auth=('admin','nhdplus') ); if r.status_code != 201: raise Exception('layer creation failed'); payload = """ <layer> <defaultStyle> <name>nhdpoint_point</name> </defaultStyle> </layer>""" r = requests.put( 'http://dz_gs:8080/geoserver/rest/workspaces/nhdplus/layers/nhdpoint_np21' 
,headers={'Content-Type':'text/xml'} ,data=payload ,auth=('admin','nhdplus') ); if r.status_code != 200: raise Exception('layer alteration failed <' + r.status_code + '>'); ```
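Every layer registration above repeats the same two REST calls (POST a featuretype, then PUT its default style), so the block could be shortened with a small helper. The sketch below is only one possible refactoring; it assumes the same host, workspace, datastore and credentials used above, and it formats the status code into the error message instead of concatenating the integer `r.status_code` to a string, which would itself raise a `TypeError`.

```python
import json
import requests

GEOSERVER = "http://dz_gs:8080/geoserver/rest"
AUTH = ("admin", "nhdplus")  # same credentials assumed as in the cells above

def register_layer(name, style, workspace="nhdplus", datastore="dzpg_nhdplus", srs="EPSG:4269"):
    """POST a featuretype for `name`, then PUT `style` as its default style."""
    feature = {"featureType": {
        "name": name, "nativeName": name, "title": name,
        "namespace": {"name": workspace},
        "nativeCRS": srs, "srs": srs, "projectionPolicy": "FORCE_DECLARED",
        "enabled": True,
        "store": {"@class": "dataStore", "name": workspace + ":" + datastore},
        "maxFeatures": 0, "numDecimals": 0,
        "overridingServiceSRS": False, "skipNumberMatched": False, "circularArcPresent": False
    }}
    r = requests.post(
        GEOSERVER + "/workspaces/" + workspace + "/datastores/" + datastore + "/featuretypes",
        headers={"Content-Type": "application/json"}, data=json.dumps(feature), auth=AUTH)
    if r.status_code != 201:
        raise Exception("layer creation failed for %s <%d>" % (name, r.status_code))
    layer_xml = "<layer><defaultStyle><name>" + style + "</name></defaultStyle></layer>"
    r = requests.put(
        GEOSERVER + "/workspaces/" + workspace + "/layers/" + name,
        headers={"Content-Type": "text/xml"}, data=layer_xml, auth=AUTH)
    if r.status_code != 200:
        raise Exception("layer alteration failed for %s <%d>" % (name, r.status_code))

# for example:
# register_layer("wbd_hu4_np21", "wbd_polygon")
```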
github_jupyter
# <center>Introduction on Using Python to access GeoNet's GNSS data In this notebook we will learn how to get data from one GNSS(Global Navigation Satellite System) station. By the end of this tutorial you will have make a graph like the one below. <img src="plot.png"> ## &nbsp;Table of contents ### 1. Introduction ### 2. Building the base FITS query ### 3. Get GNSS data ### 4. Plot data ### 5. Save data ## &nbsp;1. Introduction In this tutorial we will be learning how to use Python to access GNSS (commonly referred to at GPS) data from the continuous GNSS sites in the GeoNet and PositioNZ networks. GeoNet has a API (Application Programming Interface) to access its GNSS data. You do not need to know anything about APIs to use this tutorial. If you would like more info see https://fits.geonet.org.nz/api-docs/. To use this tutorial you will need to install the package pandas (https://pandas.pydata.org/). This tutorial assumes that you have a basic knowledge of Python. ###### About GeoNet GNSS data GeoNet uses GNSS technology to work out the precise positions of over 190 stations in and around NZ everyday. These positions are used to generate a displacement timeseries for each station, so we can observe how much and how quickly each station moves. <br> This data comes split into 3 components: <ul> <li> The displacement in the east-west direction where east is positive displacement. This data has a typeID of "e" <li> The displacement in the north-south direction where north is a positive displacement. This data has a typeID of "n" <li> The displacement in the up-down direction where up is a positive displacement. This data has a typeID of "u"</ul> For more on data types go to http://fits.geonet.org.nz/type (for best formatting use firefox) ## &nbsp;2. Building the base FITS query ###### Import packages ``` import requests import pandas as pd import datetime import matplotlib.pyplot as plt pd.plotting.register_matplotlib_converters() ``` ###### Set URL and endpoint ``` base_url = "http://fits.geonet.org.nz/" endpoint = "observation" ``` The base URL should be set as above to access the FITS database webservice containing the GeoNet GNSS data. The endpoint is set to observation to get the data itself in csv format. There are other endpoints which will return different information such as plot and site. To learn more go to https://fits.geonet.org.nz/api-docs/ ###### Combine URL and endpoint ``` url = base_url + endpoint ``` Combine the base URL and the endpoint to give the information to request the data. ## &nbsp;3. Get GNSS data In this section we will learn how to get all the GNSS observation data from a site and put it into a pandas dataframe, so we can plot and save the data ###### Set query parameters ``` parameters ={"typeID": "e", "siteID": "HANM"} ``` Set the parameters to get the east component(`'typeID':'e'`) of the GNSS station in the Hanmer Basin (`'siteID': 'HANM'`). 
To find the 4 letter site ID of a station you can use https://www.geonet.org.nz/data/network/sensor/search to find stations in an area of interest ##### Get GNSS data ``` response_e = requests.get(url, params=parameters) ``` We use `requests.get` to get the data using the URL we made earlier and the parameters we set in the last stage ``` parameters["typeID"] = "n" response_n = requests.get(url, params=parameters) parameters["typeID"] = "u" response_u = requests.get(url, params=parameters) ``` Here we've changed the typeID in the parameters dictionary to get the other components for the GNSS station ###### Check that your requests worked ``` print ("The Response status code of the east channel is", response_e.status_code) print ("The Response status code of the north channel is",response_n.status_code) print ("The Response status code of the up channel is",response_u.status_code) ``` The response status code says whether we were successful in getting the data requested and why not if we were unsuccessful: <ul> <li>200 -- everything went okay, and the result has been returned (if any) <li>301 -- the server is redirecting you to a different endpoint. This can happen when a company switches domain names, or an endpoint name is changed. <li>400 -- the server thinks you made a bad request. This can happen when you don't send along the right data, among other things. <li>404 -- the resource you tried to access wasn't found on the server. </ul> Now that we know our request for data was successful we want to transform it into a format that we can deal with in Python. Right now the data is one long string ###### Split the string of data ``` data_e = response_e.content.decode("utf-8").split("\n") ``` The above code decodes the response and then splits the east displacement data on the new line symbol as each line is one point of data. If you are using Python2 remove the code `.decode("utf-8")` ###### Split the points of data ``` for i in range(0, len(data_e)): data_e[i]= data_e[i].split(",") ``` The above code uses a for loop to split each point of data on the "," symbol as each value is separated by a ",", producing a list of lists ###### Reformat data values ``` for i in range(1, (len(data_e)-1)): data_e[i][0] = datetime.datetime.strptime(data_e[i][0], '%Y-%m-%dT%H:%M:%S.%fZ') #make 1st value into a datetime object data_e[i][1] = float(data_e[i][1]) #makes 2nd value into a decimal number data_e[i][2] = float(data_e[i][2]) #makes 3rd value into a decimal number ``` The above code uses a `for` loop to go over each point of data and reformat it, so that the first value in each point is seen as a time, and the second and third values are seen as numbers.<br> Note that we choose to miss the first and last data points in our loop as the first data point has the names of the data values and the last point is empty due to how we split the data. ###### Convert nested list into dataframe object ``` df_e = pd.DataFrame(data_e[1:-1],index = range(1, (len(data_e)-1)), columns=data_e[0]) ``` `data_e[1:-1]` makes the list of data be the data in the data frame, `index = range(1, (len(data_e)-1))` makes rows named 1, 2, ... 
n where n is the number of data points, and `columns=data_e[0]` gives the columns the names that where in the first line of the response string ###### Print the first few lines of the data frame ``` df_e.head() ``` Here we can see on the 4th of June 2014 how much the site HANM had moved east (with formal error) in mm from its reference position, this being the midpoint of the position timeseries. ###### Make everything we have just done into a function ``` def GNSS_dataframe(data): """ This function turns the string of GNSS data received by requests.get into a data frame with GNSS data correctly formatted. """ data = data.split("\n") # splits data on the new line symbol for i in range(0, len(data)): data[i]= data[i].split(",")# splits data ponits on the , symbol for i in range(1, (len(data)-1)): data[i][0] = datetime.datetime.strptime(data[i][0], '%Y-%m-%dT%H:%M:%S.%fZ') #make 1st value into a datetime object data[i][1] = float(data[i][1]) #makes 2nd value into a decimal number data[i][2] = float(data[i][2]) #makes 3rd value into a decimal number df = pd.DataFrame(data[1:-1],index = range(1, (len(data)-1)), columns=data[0]) #make the list into a data frame return df df_e.head() ``` This makes code cells 8 to 11 into a function to be called later in the notebook. ###### Run the above function on the North and Up data ``` df_n = GNSS_dataframe(response_n.content.decode("utf-8")) df_u = GNSS_dataframe(response_u.content.decode("utf-8")) ``` Make sure to run this function on the content string of the requested data. If in Python2 use remove the code `.decode("utf-8")` ##### Why make the data into a data frame? A data frame is a way of formatting data into a table with column and row name much like a csv file and makes long list of data a lot easier to use. Data frame data can be called by column or row name making it easy to get the point(s) of data you want. Data, much like in a table, can be “linked” so that you can do something like plot a data point on a 2D plot. Sadly, data frames are not a built-in data format in Python, so we must use the pandas (https://pandas.pydata.org/pandas-docs/stable/dsintro.html#dataframe) package to be able to make a data frame. ## &nbsp;4. Plot data ###### Plot the east data ``` e_plot = df_e.plot(x='date-time', y= ' e (mm)', marker='o', title = 'Relative east displacement for HANM') #plt.savefig("e_plot") ``` The above code plots time on the x axis and the displacement in millimetres on the y axis. `marker = ‘o’` makes each point of data a small circle. If you want to save the plot as a png file in the folder you are running this code from you can uncomment ` plt.savefig("e_plot")` ###### Plot the north data ``` n_plot = df_n.plot(x='date-time', y= ' n (mm)', marker='o', title = 'Relative north displacement for HANM') #plt.savefig("n_plot") ``` ###### Plot the up data ``` u_plot = df_u.plot(x='date-time', y= ' u (mm)', marker='o', title='Relative up displacement for HANM') #plt.savefig("u_plot") ``` ## &nbsp;5. Save data ##### Make a copy of the east data frame ``` df = df_e ``` This makes what is call a deep copy of the data frame with the east displacement data in it. This means that if `df` is edited `df_e` is not effected. ###### Remove the error column from this copy of the data ``` df = df.drop(" error (mm)",axis=1) ``` The above code removes the column called error (mm) and all its data from `df`. ` axis=1` says that we are looking for a column. If we put ` axis=0` we would be looking for a row. 
###### Add the up and north data to this data frame (but not the respective errors) ``` df["u (mm)"] = df_u[' u (mm)'] df["n (mm)"] = df_n[' n (mm)'] ``` ###### Print the first few lines of the data frame ``` df.head() ``` Here we can see the layout of the data frame with the columns date, east displacement, up displacement and north displacement ###### Save as CSV file ``` df.to_csv("HANM.csv") ``` This saves the data frame csv file with the same formatting as the data frame. It will have saved in the same place as this notebook is run from and be named HANM ## Useful links <ul> <li>This notebook uses Python https://www.python.org/ <li>This notebook also uses pandas https://pandas.pydata.org/ <li>There is a notebook on this data set in R at https://github.com/GeoNet/data-tutorials/tree/master/GNSS_Data/R/Introduction_to_GNSS_data_using_FITS_in_R.ipynb <li>More tutorials on GNSS data can be found at https://github.com/GeoNet/data-tutorials/tree/master/GNSS_Data/R <li>To learn more about station codes go to https://www.geonet.org.nz/data/supplementary/channels <li>For more on data types in FITS go to http://fits.geonet.org.nz/type (for best formatting use firefox) <li>For more on FITS go to https://fits.geonet.org.nz/api-docs/ </ul>
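As an aside, the manual split-and-convert steps above can be condensed: if the FITS response really is plain CSV with a `date-time` column, as the observations used here suggest, `pandas.read_csv` can parse it directly. A minimal sketch, assuming the same URL and parameters:

```python
# Compact alternative for building the east-displacement data frame.
import io
import requests
import pandas as pd

url = "http://fits.geonet.org.nz/observation"
params = {"typeID": "e", "siteID": "HANM"}
resp = requests.get(url, params=params)
resp.raise_for_status()  # stop here if the request was not successful

# read_csv handles the splitting, numeric conversion and date parsing in one call
df_e = pd.read_csv(io.StringIO(resp.text), parse_dates=["date-time"])
print(df_e.head())
```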
github_jupyter
``` import keras import tensorflow as tf print(keras.__version__) print(tf.__version__) import numpy as np import pandas as pd from sklearn.feature_extraction.text import CountVectorizer from sklearn.model_selection import train_test_split from sklearn.metrics import classification_report,confusion_matrix NGRAMS = 2 SAMPLE = 1000000 EPOCHS = 15 # Florida voter df = pd.read_csv('/opt/data/fl_voterreg/fl_reg_name_race.csv.gz') df.dropna(subset=['name_first', 'name_last'], inplace=True) sdf = df[df.race.isin(['multi_racial', 'native_indian', 'other', 'unknown']) == False].sample(SAMPLE, random_state=21) del df # Additional features sdf['name_first'] = sdf.name_first.str.title() sdf['name_last'] = sdf.name_last.str.title() sdf rdf = sdf.groupby('race').agg({'name_first': 'count'}) rdf.to_csv('./fl_voter_reg/lstm/fl_name_race.csv', columns=[]) rdf sdf.groupby('race').agg({'name_last': 'nunique'}) ``` ## Preprocessing the input data ``` # concat last name and first name sdf['name_last_name_first'] = sdf['name_last'] + ' ' + sdf['name_first'] # build n-gram list vect = CountVectorizer(analyzer='char', max_df=0.3, min_df=3, ngram_range=(NGRAMS, NGRAMS), lowercase=False) a = vect.fit_transform(sdf.name_last_name_first) vocab = vect.vocabulary_ # sort n-gram by freq (highest -> lowest) words = [] for b in vocab: c = vocab[b] #print(b, c, a[:, c].sum()) words.append((a[:, c].sum(), b)) #break words = sorted(words, reverse=True) words_list = ['UNK'] words_list.extend([w[1] for w in words]) num_words = len(words_list) print("num_words = %d" % num_words) def find_ngrams(text, n): a = zip(*[text[i:] for i in range(n)]) wi = [] for i in a: w = ''.join(i) try: idx = words_list.index(w) except: idx = 0 wi.append(idx) return wi # build X from index of n-gram sequence X = np.array(sdf.name_last_name_first.apply(lambda c: find_ngrams(c, NGRAMS))) # check max/avg feature X_len = [] for x in X: X_len.append(len(x)) max_feature_len = max(X_len) avg_feature_len = int(np.mean(X_len)) print("Max feature len = %d, Avg. feature len = %d" % (max_feature_len, avg_feature_len)) y = np.array(sdf.race.astype('category').cat.codes) # Split train and test dataset X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=21, stratify=y) ``` ## Train a LSTM model ref: http://machinelearningmastery.com/sequence-classification-lstm-recurrent-neural-networks-python-keras/ ``` '''The dataset is actually too small for LSTM to be of any advantage compared to simpler, much faster methods such as TF-IDF + LogReg. Notes: - RNNs are tricky. Choice of batch size is important, choice of loss and optimizer is critical, etc. Some configurations won't converge. - LSTM loss decrease patterns during training can be quite different from what you see with CNNs/MLPs/etc. 
''' from keras.preprocessing import sequence from keras.models import Sequential from keras.layers import Dense, Embedding, Dropout, Activation from keras.layers import LSTM from keras.layers.convolutional import Conv1D from keras.layers.convolutional import MaxPooling1D from keras.models import load_model max_features = num_words # 20000 feature_len = 25 # avg_feature_len # cut texts after this number of words (among top max_features most common words) batch_size = 32 print(len(X_train), 'train sequences') print(len(X_test), 'test sequences') print('Pad sequences (samples x time)') X_train = sequence.pad_sequences(X_train, maxlen=feature_len) X_test = sequence.pad_sequences(X_test, maxlen=feature_len) print('X_train shape:', X_train.shape) print('X_test shape:', X_test.shape) num_classes = np.max(y_train) + 1 print(num_classes, 'classes') print('Convert class vector to binary class matrix ' '(for use with categorical_crossentropy)') y_train = tf.keras.utils.to_categorical(y_train, num_classes) y_test = tf.keras.utils.to_categorical(y_test, num_classes) print('y_train shape:', y_train.shape) print('y_test shape:', y_test.shape) print('Build model...') model = Sequential() model.add(Embedding(num_words, 32, input_length=feature_len)) model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2)) model.add(Dense(num_classes, activation='softmax')) # try using different optimizers and different optimizer configs model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) print(model.summary()) print('Train...') model.fit(X_train, y_train, batch_size=batch_size, epochs=EPOCHS, validation_split=0.1, verbose=1) score, acc = model.evaluate(X_test, y_test, batch_size=batch_size, verbose=1) print('Test score:', score) print('Test accuracy:', acc) print('Test score:', score) print('Test accuracy:', acc) ``` ## Confusion Matrix ``` p = model.predict(X_test, verbose=2) # to predict probability y_pred = np.argmax(p, axis=-1) target_names = list(sdf.race.astype('category').cat.categories) print(classification_report(np.argmax(y_test, axis=1), y_pred, target_names=target_names)) print(confusion_matrix(np.argmax(y_test, axis=1), y_pred)) ``` ## Save model ``` model.save('./fl_voter_reg/lstm/fl_all_name_lstm.h5') words_df = pd.DataFrame(words_list, columns=['vocab']) words_df.to_csv('./fl_voter_reg/lstm/fl_all_name_vocab.csv', index=False, encoding='utf-8') ```
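To illustrate how the saved artifacts might be reused later, here is a hedged sketch that reloads the model and vocabulary written above and scores a single new name. It assumes the same settings as the notebook (character bigrams, a padded length of 25, names formatted as title-cased "last first"); the example name is purely illustrative.

```python
# Sketch: reuse the saved LSTM model and n-gram vocabulary on a new name.
import numpy as np
import pandas as pd
from keras.models import load_model
from keras.preprocessing import sequence

model = load_model('./fl_voter_reg/lstm/fl_all_name_lstm.h5')
vocab = pd.read_csv('./fl_voter_reg/lstm/fl_all_name_vocab.csv')['vocab'].tolist()
vocab_index = {w: i for i, w in enumerate(vocab)}  # faster than list.index in a loop

def name_to_ids(text, n=2):
    # map each character n-gram to its vocabulary index, 0 ('UNK') if unseen
    grams = [''.join(g) for g in zip(*[text[i:] for i in range(n)])]
    return [vocab_index.get(g, 0) for g in grams]

name = 'Smith John'  # hypothetical "last first" input, title-cased as in training
x = sequence.pad_sequences([name_to_ids(name)], maxlen=25)
proba = model.predict(x)[0]
print('class probabilities:', np.round(proba, 3))
print('predicted class index:', int(np.argmax(proba)))  # map back using target_names
```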
github_jupyter
# Principal Componenet Analysis (PCA) The PCA algorithm is a dimensionality reduction algorithm which works really well for datasets which have correlated columns. It combines the features of X in linear combination such that the new components capture the most information of the data. The PCA model is implemented in the cuML library and can accept the following parameters: 1. svd_solver: selects the type of algorithm used: Jacobi or full (default = full) 2. n_components: the number of top K vectors to be present in the output (default = 1) 3. random_state: select a random state if the results should be reproducible across multiple runs (default = None) 4. copy: if 'True' then it copies the data and removes mean from it else the data will be overwritten with its mean centered version (default = True) 5. whiten: if True, de-correlates the components (default = False) 6. tol: if the svd_solver = 'Jacobi' then this variable is used to set the tolerance (default = 1e-7) 7. iterated_power: if the svd_solver = 'Jacobi' then this variable decides the number of iterations (default = 15) The cuml implementation of the PCA model has the following functions that one can run: 1. Fit: it fits the model with the dataset 2. Fit_transform: fits the PCA model with the dataset and performs dimensionality reduction on it 3. Inverse_transform: returns the original dataset when the transformed dataset is passed as the input 4. Transform: performs dimensionality reduction on the dataset 5. Get_params: returns the value of the parameters of the PCA model 6. Set_params: allows the user to set the value of the parameters of the PCA model The model accepts only numpy arrays or cudf dataframes as the input. In order to convert your dataset to cudf format please read the cudf documentation on https://rapidsai.github.io/projects/cudf/en/latest/. For additional information on the PCA model please refer to the documentation on https://rapidsai.github.io/projects/cuml/en/latest/index.html ``` import numpy as np import pandas as pd from sklearn.decomposition import PCA as skPCA from cuml import PCA as cumlPCA import cudf import os ``` # Helper Functions ``` # calculate the time required by a cell to run from timeit import default_timer class Timer(object): def __init__(self): self._timer = default_timer def __enter__(self): self.start() return self def __exit__(self, *args): self.stop() def start(self): """Start the timer.""" self.start = self._timer() def stop(self): """Stop the timer. 
Calculate the interval in seconds.""" self.end = self._timer() self.interval = self.end - self.start # check if the mortgage dataset is present and then extract the data from it, else do not run import gzip def load_data(nrows, ncols, cached = 'data/mortgage.npy.gz'): if os.path.exists(cached): print('use mortgage data') with gzip.open(cached) as f: X = np.load(f) X = X[np.random.randint(0,X.shape[0]-1,nrows),:ncols] else: # throws FileNotFoundError error if mortgage dataset is not present raise FileNotFoundError('Please download the required dataset or check the path') df = pd.DataFrame({'fea%d'%i:X[:,i] for i in range(X.shape[1])}) return df # this function checks if the results obtained from two different methods (sklearn and cuml) are the equal from sklearn.metrics import mean_squared_error def array_equal(a,b,threshold=2e-3,with_sign=True): a = to_nparray(a) b = to_nparray(b) if with_sign == False: a,b = np.abs(a),np.abs(b) error = mean_squared_error(a,b) res = error<threshold return res # the function converts a variable from ndarray or dataframe format to numpy array def to_nparray(x): if isinstance(x,np.ndarray) or isinstance(x,pd.DataFrame): return np.array(x) elif isinstance(x,np.float64): return np.array([x]) elif isinstance(x,cudf.DataFrame) or isinstance(x,cudf.Series): return x.to_pandas().values return x ``` # Run tests ``` %%time # nrows = number of samples # ncols = number of features of each sample nrows = 2**15 nrows = int(nrows * 1.5) ncols = 400 X = load_data(nrows,ncols) print('data',X.shape) # set parameters for the PCA model n_components = 10 whiten = False random_state = 42 svd_solver="full" %%time # use the sklearn PCA on the dataset pca_sk = skPCA(n_components=n_components,svd_solver=svd_solver, whiten=whiten, random_state=random_state) # creates an embedding result_sk = pca_sk.fit_transform(X) %%time # convert the pandas dataframe to cudf dataframe X = cudf.DataFrame.from_pandas(X) %%time # use the cuml PCA model on the dataset pca_cuml = cumlPCA(n_components=n_components,svd_solver=svd_solver, whiten=whiten, random_state=random_state) # obtain the embedding of the model result_cuml = pca_cuml.fit_transform(X) # calculate the attributes of the two models and compare them for attr in ['singular_values_','components_','explained_variance_', 'explained_variance_ratio_']: passed = array_equal(getattr(pca_sk,attr),getattr(pca_cuml,attr)) message = 'compare pca: cuml vs sklearn {:>25} {}'.format(attr,'equal' if passed else 'NOT equal') print(message) # compare the results of the two models passed = array_equal(result_sk,result_cuml) message = 'compare pca: cuml vs sklearn transformed results %s'%('equal'if passed else 'NOT equal') print(message) ```
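The comparison above only exercises `fit_transform`; a quick way to also sanity-check `inverse_transform` (listed among the model's functions) is to reconstruct the data from the top components and compare reconstruction errors. A sketch, assuming the objects fitted above are still in memory and that cuML's `inverse_transform` accepts the cuDF output of `fit_transform`:

```python
# Reconstruct the data from the top 10 components and compare the two libraries.
X_np = to_nparray(X)  # X was converted to a cudf DataFrame earlier

recon_sk = pca_sk.inverse_transform(result_sk)
recon_cuml = to_nparray(pca_cuml.inverse_transform(result_cuml))

err_sk = mean_squared_error(X_np, recon_sk)
err_cuml = mean_squared_error(X_np, recon_cuml)
print("reconstruction MSE - sklearn: %.6f, cuml: %.6f" % (err_sk, err_cuml))

# get_params returns the current parameter values of the cuML model
print(pca_cuml.get_params())
```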
github_jupyter
# Deploy and perform inference on Model Package from AWS Marketplace This notebook provides you instructions on how to deploy and perform inference on model packages from AWS Marketplace object detection model. This notebook is compatible only with those object detection model packages which this notebook is linked to. #### Pre-requisites: 1. **Note**: This notebook contains elements which render correctly in Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio. 1. Ensure that IAM role used has **AmazonSageMakerFullAccess** 1. To deploy this ML model successfully, ensure that: 1. Either your IAM role has these three permissions and you have authority to make AWS Marketplace subscriptions in the AWS account used: 1. **aws-marketplace:ViewSubscriptions** 1. **aws-marketplace:Unsubscribe** 1. **aws-marketplace:Subscribe** 2. or your AWS account has a subscription to this object detection model. If so, skip step: [Subscribe to the model package](#1.-Subscribe-to-the-model-package) #### Contents: 1. [Subscribe to the model package](#1.-Subscribe-to-the-model-package) 2. [Create an endpoint and perform real-time inference](#2.-Create-an-endpoint-and-perform-real-time-inference) 1. [Create an endpoint](#A.-Create-an-endpoint) 2. [Create input payload](#B.-Create-input-payload) 3. [Perform real-time inference](#C.-Perform-real-time-inference) 4. [Visualize output](#D.-Visualize-output) 5. [Delete the endpoint](#E.-Delete-the-endpoint) 3. [Perform batch inference](#3.-Perform-batch-inference) 4. [Clean-up](#4.-Clean-up) 1. [Delete the model](#A.-Delete-the-model) 2. [Unsubscribe to the listing (optional)](#B.-Unsubscribe-to-the-listing-(optional)) #### Usage instructions You can run this notebook one cell at a time (By using Shift+Enter for running a cell). **Note** - This notebook requires you to follow instructions and specify values for parameters, as instructed. ### 1. Subscribe to the model package To subscribe to the model package: 1. Open the model package listing page you opened this notebook for. 1. On the AWS Marketplace listing, click on the **Continue to subscribe** button. 1. On the **Subscribe to this software** page, review and click on **"Accept Offer"** if you and your organization agrees with EULA, pricing, and support terms. 1. Once you click on **Continue to configuration button** and then choose a **region**, you will see a **Product Arn** displayed. This is the model package ARN that you need to specify while creating a deployable model using Boto3. Copy the ARN corresponding to your region and specify the same in the following cell. ``` model_package_arn='<Customer to specify Model package ARN corresponding to their AWS region>' import json from sagemaker import ModelPackage import sagemaker as sage from sagemaker import get_execution_role import matplotlib.patches as patches import numpy as np from matplotlib import pyplot as plt from PIL import Image from PIL import ImageColor role = get_execution_role() sagemaker_session = sage.Session() boto3 = sagemaker_session.boto_session bucket = sagemaker_session.default_bucket() region = sagemaker_session.boto_region_name s3 = boto3.client("s3") runtime= boto3.client('runtime.sagemaker') ``` In next step, you would be deploying the model for real-time inference. For information on how real-time inference with Amazon SageMaker works, see [Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html). ### 2. 
Create an endpoint and perform real-time inference ``` model_name='object-detection-model' #The object detection model packages this notebook notebook is compatible with, support application/x-image as the #content-type. content_type='application/x-image' ``` Review and update the compatible instance type for the model package in the following cell. ``` real_time_inference_instance_type='ml.g4dn.xlarge' batch_transform_inference_instance_type='ml.p2.xlarge' ``` #### A. Create an endpoint ``` #create a deployable model from the model package. model = ModelPackage(role=role, model_package_arn=model_package_arn, sagemaker_session=sagemaker_session) #Deploy the model predictor = model.deploy(1, real_time_inference_instance_type, endpoint_name=model_name) ``` Once endpoint has been created, you would be able to perform real-time inference. #### B. Prepare input file for performing real-time inference In this step, we will download class_id_to_label_mapping from S3 bucket. The mapping files has been downloaded from [TensorFlow](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt). [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). ``` s3_bucket = f"jumpstart-cache-prod-{region}" key_prefix = "inference-notebook-assets" def download_from_s3(key_filenames): for key_filename in key_filenames: s3.download_file(s3_bucket, f"{key_prefix}/{key_filename}", key_filename) img_jpg = "Naxos_Taverna.jpg" #Download image download_from_s3(key_filenames=[img_jpg]) #Mapping from model predictions to class labels class_id_to_label = {"1": "person", "2": "bicycle", "3": "car", "4": "motorcycle", "5": "airplane", "6": "bus", "7": "train", "8": "truck", "9": "boat", "10": "traffic light", "11": "fire hydrant", "13": "stop sign", "14": "parking meter", "15": "bench", "16": "bird", "17": "cat", "18": "dog", "19": "horse", "20": "sheep", "21": "cow", "22": "elephant", "23": "bear", "24": "zebra", "25": "giraffe", "27": "backpack", "28": "umbrella", "31": "handbag", "32": "tie", "33": "suitcase", "34": "frisbee", "35": "skis", "36": "snowboard", "37": "sports ball", "38": "kite", "39": "baseball bat", "40": "baseball glove", "41": "skateboard", "42": "surfboard", "43": "tennis racket", "44": "bottle", "46": "wine glass", "47": "cup", "48": "fork", "49": "knife", "50": "spoon", "51": "bowl", "52": "banana", "53": "apple", "54": "sandwich", "55": "orange", "56": "broccoli", "57": "carrot", "58": "hot dog", "59": "pizza", "60": "donut", "61": "cake", "62": "chair", "63": "couch", "64": "potted plant", "65": "bed", "67": "dining table", "70": "toilet", "72": "tv", "73": "laptop", "74": "mouse", "75": "remote", "76": "keyboard", "77": "cell phone", "78": "microwave", "79": "oven", "80": "toaster", "81": "sink", "82": "refrigerator", "84": "book", "85": "clock", "86": "vase", "87": "scissors", "88": "teddy bear", "89": "hair drier", "90": "toothbrush"} ``` #### C. Query endpoint that you have created with the opened images ``` #perform_inference method performs inference on the endpoint and prints predictions. def perform_inference(filename): response = runtime.invoke_endpoint(EndpointName='test-tensorflow-test', ContentType=content_type, Body=input_img) model_predictions = json.loads(response['Body'].read()) return model_predictions with open(img_jpg, 'rb') as file: input_img = file.read() model_predictions = perform_inference(input_img) result = {key: np.array(value)[np.newaxis, ...] 
if isinstance(value, list) else np.array([value]) for key, value in model_predictions['predictions'][0].items()} ``` #### D. Display model predictions as bounding boxes on the input image ``` colors = list(ImageColor.colormap.values()) image_pil = Image.open(img_jpg) image_np = np.array(image_pil) plt.figure(figsize=(20,20)) ax = plt.axes() ax.imshow(image_np) classes = [class_id_to_label[str(int(index))] for index in result["detection_classes"][0]] bboxes, confidences = result["detection_boxes"][0], result["detection_scores"][0] for idx in range(20): if confidences[idx] < 0.3: break ymin, xmin, ymax, xmax = bboxes[idx] im_width, im_height = image_pil.size left, right, top, bottom = xmin * im_width, xmax * im_width, ymin * im_height, ymax * im_height x, y = left, bottom color = colors[hash(classes[idx]) % len(colors)] rect = patches.Rectangle((left, bottom), right-left, top-bottom, linewidth=3, edgecolor=color, facecolor='none') ax.add_patch(rect) ax.text(left, top, "{} {:.0f}%".format(classes[idx], confidences[idx]*100), bbox=dict(facecolor='white', alpha=0.5)) ``` #### D. Delete the endpoint Now that you have successfully performed a real-time inference, you do not need the endpoint any more. You can terminate the endpoint to avoid being charged. ``` model.sagemaker_session.delete_endpoint(model_name) model.sagemaker_session.delete_endpoint_config(model_name) ``` ### 3. Perform batch inference In this section, you will perform batch inference using multiple input payloads together. If you are not familiar with batch transform, and want to learn more, see [How to run a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html) ``` #upload the batch-transform job input files to S3 transform_input_key_prefix = 'object-detection-model-transform-input' transform_input = sagemaker_session.upload_data(img_jpg, key_prefix=transform_input_key_prefix) print("Transform input uploaded to " + transform_input) #Run the batch-transform job transformer = model.transformer(1, batch_transform_inference_instance_type) transformer.transform(transform_input, content_type=content_type) transformer.wait() # output is available on following path transformer.output_path ``` ### 4. Clean-up #### A. Delete the model ``` model.delete_model() ``` #### B. Unsubscribe to the listing (optional) If you would like to unsubscribe to the model package, follow these steps. Before you cancel the subscription, ensure that you do not have any [deployable model](https://console.aws.amazon.com/sagemaker/home#/models) created from the model package or using the algorithm. Note - You can find this information by looking at the container name associated with the model. **Steps to unsubscribe to product from AWS Marketplace**: 1. Navigate to __Machine Learning__ tab on [__Your Software subscriptions page__](https://aws.amazon.com/marketplace/ai/library?productType=ml&ref_=mlmp_gitdemo_indust) 2. Locate the listing that you want to cancel the subscription for, and then choose __Cancel Subscription__ to cancel the subscription.
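If you want to inspect the batch transform results from section 3 rather than just print the output path, one option is to download them from S3. This is only a sketch: it assumes the `transformer`, `s3` client and `img_jpg` variables from the cells above are still in scope, and that the job wrote a single object named after the input file with a `.out` suffix, which is the usual batch transform convention.

```python
# Sketch: fetch and parse the batch transform output written under transformer.output_path.
import json
from urllib.parse import urlparse

parsed = urlparse(transformer.output_path)  # e.g. s3://<bucket>/<prefix>
out_bucket = parsed.netloc
prefix = parsed.path.lstrip("/")

out_key = "{}/{}.out".format(prefix, img_jpg)  # assumed output object name
s3.download_file(out_bucket, out_key, "batch_output.json")

with open("batch_output.json") as f:
    batch_predictions = json.load(f)
print(type(batch_predictions))
```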
# Timeseries

This notebook presents a few simple steps for working with a time series. Most of them use the [statsmodels.tsa](https://www.statsmodels.org/stable/tsa.html#module-statsmodels.tsa) module.

```
from jyquickhelper import add_notebook_menu
add_notebook_menu()

%matplotlib inline
```

## Data

The data are artificial but simulate what the revenue of a neighbourhood shop might look like: very strong Saturdays, a dull week, a busy Christmas, a flat summer.

```
from ensae_teaching_cs.data import generate_sells
import pandas
df = pandas.DataFrame(generate_sells())
df.head()
```

## First plots

The series has two seasonalities: weekly and monthly.

```
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
df.iloc[-30:].set_index('date').plot(ax=ax[0])
df.set_index('date').plot(ax=ax[1])
ax[0].set_title("chiffre d'affaire sur le dernier mois")
ax[1].set_title("chiffre d'affaire sur deux ans");
```

It also has a vague trend; we can compute a trend of order 1, 2, ...

```
from statsmodels.tsa.tsatools import detrend
notrend = detrend(df.value, order=1)
df["notrend"] = notrend
df["trend"] = df['value'] - notrend
ax = df.plot(x="date", y=["value", "trend"], figsize=(14,4))
ax.set_title('tendance');
```

Autocorrelations...

```
from statsmodels.tsa.stattools import acf
cor = acf(df.value)
cor
fig, ax = plt.subplots(1, 1, figsize=(14,2))
ax.plot(cor)
ax.set_title("Autocorrélogramme");
```

The first seasonality shows up at 7, 14, 21... The partial autocorrelations confirm it, pointing to 7 days.

```
from statsmodels.tsa.stattools import pacf
from statsmodels.graphics.tsaplots import plot_pacf
plot_pacf(df.value, lags=50);
```

Since nothing happens on Sundays, it is better to remove them. Keeping zeros would deprive us of multiplicative models.

```
df["weekday"] = df.date.dt.weekday
df.head()
df_nosunday = df[df.weekday != 6]
df_nosunday.head(n=10)
fig, ax = plt.subplots(1, 1, figsize=(14,2))
cor = acf(df_nosunday.value)
ax.plot(cor)
ax.set_title("Autocorrélogramme");
plot_pacf(df_nosunday.value, lags=50);
```

We decompose the series into trend + seasonality. Summers and Christmas appear.

```
from statsmodels.tsa.seasonal import seasonal_decompose
res = seasonal_decompose(df_nosunday.value, freq=7)
res.plot();
plt.plot(res.seasonal[-30:])
plt.title("Saisonnalité");
cor = acf(res.trend[5:-5]);
plt.plot(cor);
```

We now look for the seasonality of the series once its weekly component has been removed. The monthly seasonality reappears.

```
res_year = seasonal_decompose(res.trend[5:-5], freq=25)
res_year.plot();
```

## Stationarity test

The [KPSS](https://en.wikipedia.org/wiki/KPSS_test) test checks whether a series is stationary.

```
from statsmodels.tsa.stattools import kpss
kpss(res.trend[5:-5])
```

Since it is not always easy to interpret, we simulate a Gaussian random variable, hence one without a trend.

```
from numpy.random import randn
bruit = randn(1000)
kpss(bruit)
```

And then a series with a strong trend.

```
from numpy.random import randn
from numpy import arange
bruit = randn(1000) * 100 + arange(1000) / 10
kpss(bruit)
```

A large value indicates a trend, and this series clearly has one.

## Prediction

*AR*, *ARMA* and *ARIMA* models focus on a one-dimensional series. In machine learning, there is the series plus plenty of other information. We build a matrix of lagged series.
```
from statsmodels.tsa.tsatools import lagmat
lag = 8
X = lagmat(df_nosunday["value"], lag)
lagged = df_nosunday.copy()
for c in range(1,lag+1):
    lagged["lag%d" % c] = X[:, c-1]
lagged.tail()
```

We add (or overwrite) the day of the week, which we use as an extra variable.

```
lagged["weekday"] = lagged.date.dt.weekday
X = lagged.drop(["date", "value", "notrend", "trend"], axis=1)
Y = lagged["value"]
X.shape, Y.shape
from numpy import corrcoef
corrcoef(X)
```

Strange to see so many large values; it means the trend is too strong to compute meaningful correlations. It would be better to start over with the differenced series $\Delta Y_t = Y_t - Y_{t-1}$. Anyway, let's move on...

```
X.columns
```

A linear regression, because linear models are always a good baseline, and since we know the simulated model, we will not do much better anyway.

```
from sklearn.linear_model import LinearRegression
clr = LinearRegression()
clr.fit(X, Y)
from sklearn.metrics import r2_score
r2_score(Y, clr.predict(X))
clr.coef_
```

We find the seasonality again: $Y_t$ and $Y_{t-6}$ go hand in hand.

```
for i in range(1, X.shape[1]):
    print("X(t-%d)" % (i), r2_score(Y, X.iloc[:, i]))
```

Previously (last year, in fact), I used to build the two sets, training and test, like this:

```
n = X.shape[0]
X_train = X.iloc[:n * 2//3]
X_test = X.iloc[n * 2//3:]
Y_train = Y[:n * 2//3]
Y_test = Y[n * 2//3:]
```

And then *scikit-learn* came along with [TimeSeriesSplit](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.TimeSeriesSplit.html).

```
from sklearn.model_selection import TimeSeriesSplit
tscv = TimeSeriesSplit(n_splits=5)
for train_index, test_index in tscv.split(lagged):
    data_train, data_test = lagged.iloc[train_index, :], lagged.iloc[test_index, :]
    print("TRAIN:", data_train.shape, "TEST:", data_test.shape)
```

And we fit a random forest...

```
import warnings
from sklearn.ensemble import RandomForestRegressor
clr = RandomForestRegressor()

def train_test(clr, train_index, test_index):
    data_train = lagged.iloc[train_index, :]
    data_test = lagged.iloc[test_index, :]
    clr.fit(data_train.drop(["value", "date", "notrend", "trend"], axis=1),
            data_train.value)
    r2 = r2_score(data_test.value,
                  clr.predict(data_test.drop(["value", "date", "notrend", "trend"], axis=1).as_matrix()))
    return r2

warnings.simplefilter("ignore")
last_test_index = None
for train_index, test_index in tscv.split(lagged):
    r2 = train_test(clr, train_index, test_index)
    if last_test_index is not None:
        r2_prime = train_test(clr, last_test_index, test_index)
        print(r2, r2_prime)
    else:
        print(r2)
    last_test_index = test_index
```

Two years cut into 5 pieces means a split roughly every 5 months, so a given fold sometimes includes Christmas, sometimes the summer, and the scores will be very sensitive to that.

```
from sklearn.metrics import r2_score
r2 = r2_score(data_test.value,
              clr.predict(data_test.drop(["value", "date", "notrend", "trend"], axis=1).as_matrix()))
r2
```

We compare this $r_2$ with the $r_2$ obtained by using $Y_{t-1}$, $Y_{t-2}$, ... $Y_{t-d}$ directly as the prediction.

```
for i in range(1, 9):
    print(i, ":", r2_score(data_test.value, data_test["lag%d" % i]))
lagged[:5]
```

The day of the week is in fact a categorical variable, so we create one column per day.
```
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

cols = ['lag1', 'lag2', 'lag3', 'lag4', 'lag5', 'lag6', 'lag7', 'lag8']
ct = ColumnTransformer(
         [('pass', "passthrough", cols),
          ("dummies", OneHotEncoder(), ["weekday"])])
pred = ct.fit(lagged).transform(lagged[:5])
pred
```

We put everything into a pipeline because it is nicer, and more practical too.

```
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA, TruncatedSVD

cols = ['lag1', 'lag2', 'lag3', 'lag4', 'lag5', 'lag6', 'lag7', 'lag8']

model = make_pipeline(
            make_pipeline(
                ColumnTransformer(
                    [('pass', "passthrough", cols),
                     ("dummies", make_pipeline(OneHotEncoder(),
                                               TruncatedSVD(n_components=2)), ["weekday"])]),
                LinearRegression()))
model.fit(lagged, lagged["value"])
```

It is easier to see visually.

```
from mlinsights.plotting import pipeline2dot
dot = pipeline2dot(model, lagged)
from jyquickhelper import RenderJsDot
RenderJsDot(dot)
r2_score(lagged['value'], model.predict(lagged))
```

## Templating

Completely off topic, but useful.

```
from jinja2 import Template
template = Template('Hello {{ name }}!')
template.render(name='John Doe')

template = Template("""
{{ name }}
{{ "-" * len(name) }}
Possède :
{% for i in range(len(meubles)) %} - {{meubles[i]}}{% endfor %}
""")
meubles = ['table', "tabouret"]
print(template.render(name='John Doe Doe', len=len, meubles=meubles))
```
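Back to the prediction pipeline built before this templating digression: the model was only evaluated in-sample. Below is a minimal sketch of a one-step-ahead forecast with it. The way `next_row` is assembled (shifting the lag columns by hand and reusing the last weekday) is an illustration added here, not part of the original notebook, and it assumes the fitted `ColumnTransformer` only needs the `lag*` and `weekday` columns.

```
import pandas

# A hypothetical "next day" row: lag1..lag8 are shifted by hand from the last
# observed row, and the weekday is simply reused (a real forecast would advance
# it while skipping Sunday, which was removed upstream).
last = lagged.iloc[-1]
next_row = pandas.DataFrame([{
    "lag1": last["value"],
    **{"lag%d" % k: last["lag%d" % (k - 1)] for k in range(2, 9)},
    "weekday": int(last["weekday"]),
}])

print(model.predict(next_row))
```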
# ETL Processes Use this notebook to develop the ETL process for each of your tables before completing the `etl.py` file to load the whole datasets. ``` import os import glob import psycopg2 import pandas as pd from sql_queries import * conn = psycopg2.connect("host=127.0.0.1 dbname=studentdb user=student password=student") cur = conn.cursor() conn = psycopg2.connect("host=127.0.0.1 dbname=sparkifydb user=student password=student") cur = conn.cursor() #YD: set auto commit. pring connection conn.set_session(autocommit=True) print(conn) #YD: get all files. Use os.walk() def get_files(filepath): all_files = [] for root, dirs, files in os.walk(filepath): files = glob.glob(os.path.join(root,'*.json')) for f in files : all_files.append(os.path.abspath(f)) return all_files #YD: read json file df1 = pd.read_json('data/log_data/2018/11/2018-11-01-events.json', lines=True) df1.head() df2 = pd.read_json('data/song_data/A/A/A/TRAAAAW128F429D538.json', lines=True) df2.head() ``` # Process `song_data` In this first part, you'll perform ETL on the first dataset, `song_data`, to create the `songs` and `artists` dimensional tables. Let's perform ETL on a single song file and load a single record into each table to start. - Use the `get_files` function provided above to get a list of all song JSON files in `data/song_data` - Select the first song in this list - Read the song file and view the data ``` song_files = get_files('data/song_data') print(song_files[0]) len(song_files) filepath = song_files[0] df = pd.read_json(filepath, lines=True) df.head() ``` ## #1: `songs` Table #### Extract Data for Songs Table - Select columns for song ID, title, artist ID, year, and duration - Use `df.values` to select just the values from the dataframe - Index to select the first (only) record in the dataframe - Convert the array to a list and set it to `song_data` ``` # song_id, title, artist_id, year, duration #YD: Pandas tolist() is used to convert a series to list. song_data = df[['song_id','title','artist_id','year','duration']].values.tolist()[0] song_data song_data = df[['song_id','title','artist_id','year','duration']].values[0] song_data ``` #### Insert Record into Song Table Implement the `song_table_insert` query in `sql_queries.py` and run the cell below to insert a record for this song into the `songs` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/resetted the `songs` table in the sparkify database. ``` cur.execute(song_table_insert, song_data) conn.commit() ``` Run `test.ipynb` to see if you've successfully added a record to this table. ``` # cur.close() # conn.close() ``` ## #2: `artists` Table #### Extract Data for Artists Table - Select columns for artist ID, name, location, latitude, and longitude - Use `df.values` to select just the values from the dataframe - Index to select the first (only) record in the dataframe - Convert the array to a list and set it to `artist_data` ``` artist_data = df[['artist_id','artist_name','artist_location','artist_latitude','artist_longitude']].values.tolist()[0] artist_data type(artist_data) artist_data = df[['artist_id','artist_name','artist_location','artist_latitude','artist_longitude']].values[0] artist_data type(artist_data) ``` #### Insert Record into Artist Table Implement the `artist_table_insert` query in `sql_queries.py` and run the cell below to insert a record for this song's artist into the `artists` table. 
Remember to run `create_tables.py` before running the cell below to ensure you've created/resetted the `artists` table in the sparkify database. ``` cur.execute(artist_table_insert, artist_data) conn.commit() ``` Run `test.ipynb` to see if you've successfully added a record to this table. # Process `log_data` In this part, you'll perform ETL on the second dataset, `log_data`, to create the `time` and `users` dimensional tables, as well as the `songplays` fact table. Let's perform ETL on a single log file and load a single record into each table. - Use the `get_files` function provided above to get a list of all log JSON files in `data/log_data` - Select the first log file in this list - Read the log file and view the data ``` log_files = get_files('data/log_data') filepath = log_files[0] df = pd.read_json(filepath, lines=True) df.head() ``` ## #3: `time` Table #### Extract Data for Time Table - Filter records by `NextSong` action - Convert the `ts` timestamp column to datetime - Hint: the current timestamp is in milliseconds - Extract the timestamp, hour, day, week of year, month, year, and weekday from the `ts` column and set `time_data` to a list containing these values in order - Hint: use pandas' [`dt` attribute](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.html) to access easily datetimelike properties. - Specify labels for these columns and set to `column_labels` - Create a dataframe, `time_df,` containing the time data for this file by combining `column_labels` and `time_data` into a dictionary and converting this into a dataframe ``` df = df[df['page'] == 'NextSong'] df.head() #YD: note that to_datetime default is 'ns' t = pd.to_datetime(df['ts'], unit='ms') t.head() t = df.copy() t['ts'] = pd.to_datetime(df['ts'], unit='ms') t.head() time_data = t['ts'].tolist() len(time_data) time_data[72].year time_data[72].day_name() time_data[72].weekday() #YD: use day_name() to return weekday name time_data_list = (t['ts'], t['ts'].dt.hour, t['ts'].dt.day, t['ts'].dt.week, t['ts'].dt.month, t['ts'].dt.year, t['ts'].dt.day_name()) column_labels = ['start_time','hour','day','week','month', 'year','weekday'] # time_df = pd.DataFrame(time_data_list,column_labels) # time_df.head(10) time_df = pd.DataFrame(dict(zip(column_labels, time_data_list))) time_df.head(10) df['ts'] = time_df['start_time'] # start_time = [] # time_hour = [] # time_day = [] # time_week = [] # time_month = [] # time_year = [] # time_weekday = [] # for i in time_data: # start_time = i # time_hour.append(i.hour) # time_day.append(i.day) # time_week.append(i.week) # time_month.append(i.month) # time_year.append(i.year) # time_weekday.append(i.weekday()) # column_labels = pd.DataFrame({ # 'start_time': start_time, # 'hour': time_hour # 'day': time_day # 'week': time_week # 'month': time_month # 'year': time_year # 'weekday': time_weekday # }) # column_ # time_df = pd.DataFrame({ # 'start_time': start_time, # 'hour': time_hour, # 'day': time_day, # 'week': time_week, # 'month': time_month, # 'year': time_year, # 'weekday': time_weekday # }) # time_df.head() ``` #### Insert Records into Time Table Implement the `time_table_insert` query in `sql_queries.py` and run the cell below to insert records for the timestamps in this log file into the `time` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/resetted the `time` table in the sparkify database. 
``` #YD: how to insert rows one by one for i, row in time_df.iterrows(): cur.execute(time_table_insert, list(row)) conn.commit() # cur.close() # conn.close() ``` Run `test.ipynb` to see if you've successfully added records to this table. ## #4: `users` Table #### Extract Data for Users Table - Select columns for user ID, first name, last name, gender and level and set to `user_df` ``` user_df = df[['userId','firstName','lastName','gender','level']] user_df.head() ``` #### Insert Records into Users Table Implement the `user_table_insert` query in `sql_queries.py` and run the cell below to insert records for the users in this log file into the `users` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/resetted the `users` table in the sparkify database. ``` for i, row in user_df.iterrows(): cur.execute(user_table_insert, row) conn.commit() ``` Run `test.ipynb` to see if you've successfully added records to this table. ## #5: `songplays` Table #### Extract Data and Songplays Table This one is a little more complicated since information from the songs table, artists table, and original log file are all needed for the `songplays` table. Since the log file does not specify an ID for either the song or the artist, you'll need to get the song ID and artist ID by querying the songs and artists tables to find matches based on song title, artist name, and song duration time. - Implement the `song_select` query in `sql_queries.py` to find the song ID and artist ID based on the title, artist name, and duration of a song. - Select the timestamp, user ID, level, song ID, artist ID, session ID, location, and user agent and set to `songplay_data` #### Insert Records into Songplays Table - Implement the `songplay_table_insert` query and run the cell below to insert records for the songplay actions in this log file into the `songplays` table. Remember to run `create_tables.py` before running the cell below to ensure you've created/resetted the `songplays` table in the sparkify database. ``` df.head(1) df['ts'] = df['ts'] songplay_table_insert song_select import pdb for index, row in df.iterrows(): cur.execute(song_select, (row.song,row.artist,row.length)) results = cur.fetchone() if results: songid, artistid = results else: songid, artistid = None, None # pdb.set_trace() songplay_data = (str(row['ts']), row['userId'], row['level'] ,songid, artistid, row['sessionId'], row['location'], row['userAgent']) cur.execute(songplay_table_insert, songplay_data) conn.commit() ``` Run `test.ipynb` to see if you've successfully added records to this table. # Close Connection to Sparkify Database ``` conn.close() ``` # Implement `etl.py` Use what you've completed in this notebook to implement `etl.py`.
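As a starting point for `etl.py`, the sketch below mirrors the steps worked through in this notebook. The `process_data` driver and the function layout are an assumption about how you might organise the script (the project template may differ); the song-processing logic simply reuses what was developed above, and the log-processing function is left as a stub.

```
import glob
import os

import pandas as pd
import psycopg2

from sql_queries import *


def process_song_file(cur, filepath):
    # Insert one song file into the songs and artists tables (same logic as above).
    df = pd.read_json(filepath, lines=True)
    song_data = df[['song_id', 'title', 'artist_id', 'year', 'duration']].values[0]
    cur.execute(song_table_insert, song_data)
    artist_data = df[['artist_id', 'artist_name', 'artist_location',
                      'artist_latitude', 'artist_longitude']].values[0]
    cur.execute(artist_table_insert, artist_data)


def process_data(cur, conn, filepath, func):
    # Walk `filepath`, apply `func` to every JSON file found, committing as we go.
    all_files = []
    for root, dirs, files in os.walk(filepath):
        all_files.extend(os.path.abspath(f) for f in glob.glob(os.path.join(root, '*.json')))
    for i, datafile in enumerate(all_files, 1):
        func(cur, datafile)
        conn.commit()
        print(f'{i}/{len(all_files)} files processed.')


def main():
    conn = psycopg2.connect("host=127.0.0.1 dbname=sparkifydb user=student password=student")
    cur = conn.cursor()
    process_data(cur, conn, filepath='data/song_data', func=process_song_file)
    # process_data(cur, conn, filepath='data/log_data', func=process_log_file)  # analogous to the log steps above
    conn.close()


if __name__ == "__main__":
    main()
```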
<a href="https://cognitiveclass.ai"><img src = "https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png" width = 400> </a> This notebook is designed to run in a IBM Watson Studio default runtime (NOT the Watson Studio Apache Spark Runtime as the default runtime with 1 vCPU is free of charge). Therefore, we install Apache Spark in local mode for test purposes only. Please don't use it in production. In case you are facing issues, please read the following two documents first: <https://github.com/IBM/skillsnetwork/wiki/Environment-Setup> <https://github.com/IBM/skillsnetwork/wiki/FAQ> Then, please feel free to ask: [https://coursera.org/learn/machine-learning-big-data-apache-spark/discussions/all](https://coursera.org/learn/machine-learning-big-data-apache-spark/discussions/all?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0201EN-SkillsNetwork-20647446&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ) Please make sure to follow the guidelines before asking a question: <https://github.com/IBM/skillsnetwork/wiki/FAQ#im-feeling-lost-and-confused-please-help-me> If running outside Watson Studio, this should work as well. In case you are running in an Apache Spark context outside Watson Studio, please remove the Apache Spark setup in the first notebook cells. ``` from IPython.display import Markdown, display def printmd(string): display(Markdown('# <span style="color:red">'+string+'</span>')) if ('sc' in locals() or 'sc' in globals()): printmd('<<<<<!!!!! It seems that you are running in a IBM Watson Studio Apache Spark Notebook. Please run it in an IBM Watson Studio Default Runtime (without Apache Spark) !!!!!>>>>>') !pip install pyspark==2.4.5 try: from pyspark import SparkContext, SparkConf from pyspark.sql import SparkSession except ImportError as e: printmd('<<<<<!!!!! Please restart your kernel after installing Apache Spark !!!!!>>>>>') sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]")) spark = SparkSession \ .builder \ .getOrCreate() ``` Welcome to exercise two of week three of “Apache Spark for Scalable Machine Learning on BigData”. In this exercise we’ll work on clustering. Let’s create our DataFrame again: ``` # delete files from previous runs !rm -f hmp.parquet* # download the file containing the data in PARQUET format !wget https://github.com/IBM/coursera/raw/master/hmp.parquet # create a dataframe out of it df = spark.read.parquet('hmp.parquet') # register a corresponding query table df.createOrReplaceTempView('df') ``` Let’s reuse our feature engineering pipeline. ``` from pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler, Normalizer from pyspark.ml.linalg import Vectors from pyspark.ml import Pipeline indexer = StringIndexer(inputCol="class", outputCol="classIndex") encoder = OneHotEncoder(inputCol="classIndex", outputCol="categoryVec") vectorAssembler = VectorAssembler(inputCols=["x","y","z"], outputCol="features") normalizer = Normalizer(inputCol="features", outputCol="features_norm", p=1.0) pipeline = Pipeline(stages=[indexer, encoder, vectorAssembler, normalizer]) model = pipeline.fit(df) prediction = model.transform(df) prediction.show() ``` Now let’s create a new pipeline for kmeans. 
``` from pyspark.ml.clustering import KMeans from pyspark.ml.evaluation import ClusteringEvaluator kmeans = KMeans(featuresCol="features").setK(14).setSeed(1) pipeline = Pipeline(stages=[vectorAssembler, kmeans]) model = pipeline.fit(df) predictions = model.transform(df) evaluator = ClusteringEvaluator() silhouette = evaluator.evaluate(predictions) print("Silhouette with squared euclidean distance = " + str(silhouette)) ``` We have 14 different movement patterns in the dataset, so setting K of KMeans to 14 is a good idea. But please experiment with different values for K, do you find a sweet spot? The closer Silhouette gets to 1, the better. [https://en.wikipedia.org/wiki/Silhouette\_(clustering)](https://en.wikipedia.org/wiki/Silhouette_(clustering)?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0201EN-SkillsNetwork-20647446&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ) ``` # please change the pipeline the check performance for different K, feel free to use a loop ``` Now please extend the pipeline to work on the normalized features. You need to tell KMeans to use the normalized feature column and change the pipeline in order to contain the normalizer stage as well. ``` kmeans = KMeans($$).setK(14).setSeed(1) pipeline = $$ model = pipeline.fit(df) predictions = model.transform(df) evaluator = ClusteringEvaluator() silhouette = evaluator.evaluate(predictions) print("Silhouette with squared euclidean distance = " + str(silhouette)) ``` Sometimes, inflating the dataset helps, here we multiply x by 10, let’s see if the performance inceases. ``` from pyspark.sql.functions import col df_denormalized = df.select([col('*'),(col('x')*10)]).drop('x').withColumnRenamed('(x * 10)','x') kmeans = KMeans(featuresCol="features").setK(14).setSeed(1) pipeline = Pipeline(stages=[vectorAssembler, kmeans]) model = pipeline.fit(df_denormalized) predictions = model.transform(df_denormalized) evaluator = ClusteringEvaluator() silhouette = evaluator.evaluate(predictions) print("Silhouette with squared euclidean distance = " + str(silhouette)) ``` Apache SparkML can be used to try many different algorithms and parametrizations using the same pipeline. Please change the code below to use GaussianMixture over KMeans. Please use the following link for your reference. [https://spark.apache.org/docs/latest/ml-clustering.html#gaussian-mixture-model-gmm](https://spark.apache.org/docs/latest/ml-clustering.html#gaussian-mixture-model-gmm?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-ML0201EN-SkillsNetwork-20647446&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ) ``` from pyspark.ml.clustering import GaussianMixture gmm = $$ pipeline = $$ model = pipeline.fit(df) predictions = model.transform(df) evaluator = ClusteringEvaluator() silhouette = evaluator.evaluate(predictions) print("Silhouette with squared euclidean distance = " + str(silhouette)) ``` ### Thank you for completing this lab! This notebook was created by <a href="https://linkedin.com/in/romeo-kienzler-089b4557"> Romeo Kienzler </a> I hope you found this lab interesting and educational. Feel free to contact me if you have any questions! 
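For reference, one possible way to fill in the `$$` placeholders of the GaussianMixture exercise above is sketched below; it reuses the same `vectorAssembler` stage and is only one valid answer, not the official solution.

```
from pyspark.ml.clustering import GaussianMixture
from pyspark.ml.evaluation import ClusteringEvaluator
from pyspark.ml import Pipeline

# One possible completion of the exercise cell: a 14-component GMM on the assembled features
gmm = GaussianMixture(featuresCol="features").setK(14).setSeed(1)
pipeline = Pipeline(stages=[vectorAssembler, gmm])

model = pipeline.fit(df)
predictions = model.transform(df)

evaluator = ClusteringEvaluator()
silhouette = evaluator.evaluate(predictions)
print("Silhouette with squared euclidean distance = " + str(silhouette))
```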
## Change Log | Date (YYYY-MM-DD) | Version | Changed By | Change Description | | ----------------- | ------- | ---------- | ----------------------------------------------------------- | | 2020-09-29 | 2.0 | Srishti | Migrated Lab to Markdown and added to course repo in GitLab | <hr> ## <h3 align="center"> © IBM Corporation 2020. All rights reserved. <h3/>
``` # Copyright 2020 IITK EE604A Image Processing. All Rights Reserved. # # Licensed under the MIT License. Use and/or modification of this code outside of EE604 must reference: # # © IITK EE604A Image Processing # https://github.com/ee604/ee604_assignments # # Author: Shashi Kant Gupta, Chiranjeev Prachand and Prof K. S. Venkatesh, Department of Electrical Engineering, IIT Kanpur ``` # Task 2: Image Enhancement II: Spatial Smoothing In this task, we will implement average, gaussian, and median spatial filter. ``` %%bash pip install git+https://github.com/ee604/ee604_plugins # Importing required libraries import cv2 import numpy as np import matplotlib.pyplot as plt from ee604_plugins import download_dataset, cv2_imshow download_dataset(assignment_no=2, task_no=2) # download data for this assignment def avgFilter(img, kernel_size=7): ''' Write a program to implement average filter. You have to assume square kernels. Inputs: + img - grayscaled image of size N x N - values between [0, 255] - 'uint8' + kernel_size - size of the kernel window which should be used for averaging. Ouputs: + out_img - smoothed grayscaled image of size N x N - values between [0, 255] - 'uint8' Allowed modules: + Basic numpy operations + cv2.filter2D() to perform 2D convolution Hint: + Not needed. ''' ############################# # Start your code from here # ############################# # Replace with your code... ############################# # End your code here ######## ############################# return out_img def gaussianFilter(img, kernel_size=7, sigma=3): ''' Write a program to implement gaussian filter. You have to assume square kernels. Inputs: + img - grayscaled image of size N x N - values between [0, 255] - 'uint8' + kernel_size - size of the kernel window which should be used for smoothing. + sigma - sigma parameter for gaussian kernel Ouputs: + out_img - smoothed grayscaled image of size N x N - values between [0, 255] - 'uint8' Allowed modules: + Basic numpy operations + cv2.filter2D() to perform 2D convolution + cv2.getGaussianKernel(). Note that this will give you 1D gaussian. Hint: + Not needed. ''' ############################# # Start your code from here # ############################# # Replace with your code... ############################# # End your code here ######## ############################# return out_img def medianFilter(img, kernel_size=7): ''' Write a program to implement median filter. You have to assume square kernels. Inputs: + img - grayscaled image of size N x N - values between [0, 255] - 'uint8' + kernel_size - size of the kernel window which should be used for smoothing. Ouputs: + out_img - smoothed grayscaled image of size N x N - values between [0, 255] - 'uint8' Allowed modules: + Basic numpy operations + np.median() Hint: + Not needed. ''' ############################# # Start your code from here # ############################# # Replace with your code... ############################# # End your code here ######## ############################# return out_img ``` ### Test --- Your observation should compare the different methods for different images. Must include a sentence on which method + kernel size worked best in each case. 
``` # Do not change codes inside this cell # Add your observations in next to next cell # Your observation should compare the different methods for different images lena_orig = cv2.imread('data/lena_gray.jpg', 0) lena_noisy_1 = cv2.imread('data/lena_noisy_1.jpg', 0) lena_noisy_2 = cv2.imread('data/lena_noisy_2.jpg', 0) lena_noisy_3 = cv2.imread('data/lena_noisy_3.jpg', 0) def plot_frame(gridx, gridy, subplot_id, img, name): plt.subplot(gridx, gridy, 1 + int(subplot_id)) plt.imshow(np.uint8(img), cmap="gray", vmin=0, vmax=255) plt.axis("off") plt.title(name) # Do not change codes inside this cell # Add your observations in next cell img_arr = [lena_noisy_1, lena_noisy_2, lena_noisy_3] img_caption = ["Noisy 1", "Noisy 2", "Noisy 3"] for i in range(3): for kernel_size in [5, 7, 9]: print("\n-------------------------------------") print("# Lena", img_caption[i], "| kernel:", kernel_size, "x", kernel_size) print("-------------------------------------") plt.figure(figsize=(20, 13)) plot_frame(1, 5, 0, lena_orig, "Original") plot_frame(1, 5, 1, img_arr[i], "Noisy") tmp_img = avgFilter(np.copy(img_arr[i]), kernel_size=kernel_size) plot_frame(1, 5, 2, tmp_img, "Avg.") tmp_img = gaussianFilter(np.copy(img_arr[i]), kernel_size=kernel_size, sigma=int(kernel_size/5)) plot_frame(1, 5, 3, tmp_img, "Gaussian.") tmp_img = medianFilter(np.copy(img_arr[i]), kernel_size=kernel_size) plot_frame(1, 5, 4, tmp_img, "Median.") plt.show() your_observation = """ Replace this with your observations. """ print(your_observation) # Submission >>>>>>>>>>>>>>>>>>>>> # Do not change codes inside this cell. gen_imgs = [] img_arr = [lena_noisy_1, lena_noisy_2, lena_noisy_3] for i in range(3): for kernel_size in [5, 7, 9]: tmp_img = avgFilter(np.copy(img_arr[i]), kernel_size=kernel_size) gen_imgs.append(tmp_img) tmp_img = gaussianFilter(np.copy(img_arr[i]), kernel_size=kernel_size, sigma=int(kernel_size/5)) gen_imgs.append(tmp_img) tmp_img = medianFilter(np.copy(img_arr[i]), kernel_size=kernel_size) gen_imgs.append(tmp_img) task2_submission = np.array(gen_imgs) ```
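For comparison while writing your observations, below is one possible reference implementation of the three filters. It is only a sketch using the allowed modules listed in the docstrings (`cv2.filter2D`, `cv2.getGaussianKernel`, `np.median`); your graded solution may differ, for example in how image borders are handled.

```
import cv2
import numpy as np

def avg_filter_ref(img, kernel_size=7):
    # Uniform kernel whose weights sum to 1, applied with cv2.filter2D
    kernel = np.ones((kernel_size, kernel_size), dtype=np.float32) / (kernel_size * kernel_size)
    return np.uint8(np.clip(cv2.filter2D(img.astype(np.float32), -1, kernel), 0, 255))

def gaussian_filter_ref(img, kernel_size=7, sigma=3):
    # Outer product of two 1D Gaussian kernels gives the 2D kernel
    g = cv2.getGaussianKernel(kernel_size, sigma)
    kernel = g @ g.T
    return np.uint8(np.clip(cv2.filter2D(img.astype(np.float32), -1, kernel), 0, 255))

def median_filter_ref(img, kernel_size=7):
    # Pad the image, then take the median of each kernel-sized neighbourhood
    pad = kernel_size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + kernel_size, j:j + kernel_size])
    return np.uint8(out)
```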
# A simple notebook to do MIRI coordinate transforms. # Some functionality depends on having the JWST pipeline and/or pysiaf module installed ``` import numpy as np import pdb as pdb from astropy.modeling import models from asdf import AsdfFile from jwst import datamodels from jwst.assign_wcs import miri import pysiaf ``` ### Imager transforms using standalone code plus pysiaf ### Import the miricoord standalone code: ``` import miricoord.imager.mirim_tools as mt ``` Read the MIRI apertures from the SIAF ``` siaf = pysiaf.Siaf('MIRI')#,basepath='/Users/dlaw/jwcode/pysiaf/pysiaf/pre_delivery_data/MIRI') ``` Get the MIRIM_FULL x,y reference location from the SIAF ``` xref,yref=siaf['MIRIM_FULL'].XDetRef,siaf['MIRIM_FULL'].YDetRef ``` Note that these are in the SIAF 1-indexed reference frame; in order to use them we'll first have to transform them to the 0-indexed frame used by all MIRI coordinates code (and the JWST pipeline): ``` xref,yref=xref-1,yref-1 xref,yref ``` Transform them to v2,v3 for filter 'F770W' ``` v2ref,v3ref=mt.xytov2v3(xref,yref,'F770W') v2ref,v3ref ``` This should be showing that the v2,v3 reference point of MIRIM_FULL (for which F770W is the reference filter) is -453.559, -373.814 (note that this changed in CDP-7) We can also convert a given location to RA,DEC if we assume a few JWST attitude keywords. First import the miricoord telescope tools module: ``` import miricoord.tel.tel_tools as teltools ``` Let's pretend that the telescope pointing had the reference point looking at RA=312.5, DEC=-76.0, and had spacecraft roll 73 degrees ``` raref,decref,rollref=312.5,-76.0,73.0 ``` Given that, we want to know where the location v2,v3=(-400,-420) is (this is somewhere in the coronagraphs): ``` v2,v3=[-400.],[-420.] ra,dec,newroll=teltools.jwst_v2v3toradec(v2,v3,v2ref=v2ref,v3ref=v3ref,raref=raref,decref=decref,rollref=rollref) ``` The RA,dec of this point is: ``` ra,dec ``` And the local roll at this new location is: ``` newroll ``` Note that if we instead had a FITS header with the appropriate keywords, we could have passed that to jwst_v2v3toradec instead of individual values. ### Now let's do an imager transform using the JWST pipeline code ### Import the miricoord pipeline access code: ``` import miricoord.imager.mirim_pipetools as mpt v2ref,v3ref=mpt.xytov2v3(xref,yref,'F770W') v2ref,v3ref ``` This should be the same answer as before, but under the hood it used the JWST pipeline! We can also access the pipeline distortion model directly: ``` model=mpt.xytov2v3model('F770W') ``` And use that to do forward transforms: ``` model(xref,yref) ``` And backward transforms: ``` model.inverse(v2ref,v3ref) ``` ### Now do a conversion to Ideal coordinates using the SIAF apertures: ### Let's work out where v2,v3=-415.069, -400.576 is for the LRS slit ``` v2,v3=-415.069, -400.576 xideal,yideal=mt.v2v3toIdeal(v2,v3,'MIRIM_SLIT') xideal,yideal ``` It's 0,0, which makes sense since this was the MIRIM_SLIT reference point. 
Now see what the lower-left corner of the LRS slit corresponds to in the SIAF: ``` xideal,yideal=siaf['MIRIM_SLIT'].XIdlVert1,siaf['MIRIM_SLIT'].YIdlVert1 v2,v3=mt.Idealtov2v3(xideal,yideal,'MIRIM_SLIT') xideal,yideal,v2,v3 siaf['MIRIM_SLIT'].plot() ``` As another example, APT requires Ideal coordinate offsets from the reference point If we wanted to see where an offset of XIdeal,YIdeal=10,0 in filter F2300C would land a star on the imager detector compared to the nominal Lyot coronagraph reference point in F770W: ``` xideal,yideal=10,0 v2,v3=mt.Idealtov2v3(xideal,yideal,'MIRIM_CORONLYOT') x,y=mt.v2v3toxy(v2,v3,'F2300C') print(x,y) siaf['MIRIM_CORONLYOT'].plot() ``` ### Now we'll do an MRS transform using standalone code plus pysiaf ### ``` import miricoord.mrs.mrs_tools as mrst ``` Get the MRS v2,v3 reference point from the SIAF ``` v2ref,v3ref=siaf['MIRIFU_CHANNEL1A'].V2Ref,siaf['MIRIFU_CHANNEL1A'].V3Ref v2ref,v3ref siaf['MIRIFU_CHANNEL1A'].plot() ``` Figure out what alpha,beta this is in Channel 1A ``` alpha,beta=mrst.v2v3toab(v2ref,v3ref,'1A') alpha,beta ``` By design, it's zero,zero since this was the reference point Now find out where pixels 50,60 55,60 and 60,60 on the SHORT detector would be for Ch1A ``` x,y=[50,55,60],[60,60,60] temp=mrst.xytoabl(x,y,'1A',trim=1) temp ``` Note that here the return is actually a dictionary of information, and that it is only 2 elements long. This is because we specified trim=1, which will remove any values that do not correspond to a light-sensitive slice. ``` v2,v3=mrst.abtov2v3(temp['alpha'],temp['beta'],'1A') v2,v3 ``` ### Now we'll do an MRS transform using the pipeline code ### ``` import miricoord.mrs.mrs_pipetools as mrspt x,y=30.31,511.0 a,b,l=mrspt.xytoabl(x,y,'1A') print(a,b,l) ``` Be warned: using the pipeline code in this way can give strange results if you try to transform a pixel that doesn't land on a slice in your channel specified!! (The pipeline itself has code elsewhere to deal with this, but here we're hooking directly into the transform modules).
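As a quick consistency check on the MRS transforms (a sketch that only uses functions already demonstrated above), we can round-trip detector pixels through alpha/beta and v2/v3 and confirm the two directions agree:

```
import numpy as np

# Round-trip channel 1A: x,y -> alpha,beta -> v2,v3 -> alpha,beta
x, y = [50, 55, 60], [60, 60, 60]
d = mrst.xytoabl(x, y, '1A', trim=1)

v2, v3 = mrst.abtov2v3(d['alpha'], d['beta'], '1A')
alpha_back, beta_back = mrst.v2v3toab(v2, v3, '1A')

# The forward and inverse solutions should agree to well below a pixel
print(np.max(np.abs(np.array(alpha_back) - np.array(d['alpha']))))
print(np.max(np.abs(np.array(beta_back) - np.array(d['beta']))))
```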
##### Copyright 2019 The TensorFlow Hub Authors. Licensed under the Apache License, Version 2.0 (the "License"); ``` # Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== #@title MIT License # # Copyright (c) 2017 François Chollet # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. ``` # Text Classification with Movie Reviews <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf2_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/tf2_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary*—or two-class—classification, an important and widely applicable kind of machine learning problem. We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews. 
This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow, and [TensorFlow Hub](https://www.tensorflow.org/hub), a library and platform for transfer learning. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/). ## Setup ``` import numpy as np import tensorflow as tf import tensorflow_hub as hub import tensorflow_datasets as tfds import matplotlib.pyplot as plt print("Version: ", tf.__version__) print("Eager mode: ", tf.executing_eagerly()) print("Hub version: ", hub.__version__) print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE") ``` ## Download the IMDB dataset The IMDB dataset is available on [TensorFlow datasets](https://github.com/tensorflow/datasets). The following code downloads the IMDB dataset to your machine (or the colab runtime): ``` train_data, test_data = tfds.load(name="imdb_reviews", split=["train", "test"], batch_size=-1, as_supervised=True) train_examples, train_labels = tfds.as_numpy(train_data) test_examples, test_labels = tfds.as_numpy(test_data) ``` ## Explore the data Let's take a moment to understand the format of the data. Each example is a sentence representing the movie review and a corresponding label. The sentence is not preprocessed in any way. The label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review. ``` print("Training entries: {}, test entries: {}".format(len(train_examples), len(test_examples))) ``` Let's print first 10 examples. ``` train_examples[:10] ``` Let's also print the first 10 labels. ``` train_labels[:10] ``` ## Build the model The neural network is created by stacking layers—this requires three main architectural decisions: * How to represent the text? * How many layers to use in the model? * How many *hidden units* to use for each layer? In this example, the input data consists of sentences. The labels to predict are either 0 or 1. One way to represent the text is to convert sentences into embeddings vectors. We can use a pre-trained text embedding as the first layer, which will have two advantages: * we don't have to worry about text preprocessing, * we can benefit from transfer learning. For this example we will use a model from [TensorFlow Hub](https://www.tensorflow.org/hub) called [google/tf2-preview/gnews-swivel-20dim/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1). There are three other models to test for the sake of this tutorial: * [google/tf2-preview/gnews-swivel-20dim-with-oov/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1) - same as [google/tf2-preview/gnews-swivel-20dim/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1), but with 2.5% vocabulary converted to OOV buckets. This can help if vocabulary of the task and vocabulary of the model don't fully overlap. * [google/tf2-preview/nnlm-en-dim50/1](https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1) - A much larger model with ~1M vocabulary size and 50 dimensions. * [google/tf2-preview/nnlm-en-dim128/1](https://tfhub.dev/google/tf2-preview/nnlm-en-dim128/1) - Even larger model with ~1M vocabulary size and 128 dimensions. Let's first create a Keras layer that uses a TensorFlow Hub model to embed the sentences, and try it out on a couple of input examples. 
Note that the output shape of the produced embeddings is as expected: `(num_examples, embedding_dimension)`.

```
embedding = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
hub_layer = hub.KerasLayer(embedding, output_shape=[20], input_shape=[],
                           dtype=tf.string, trainable=True)
hub_layer(train_examples[:3])
```

Let's now build the full model:

```
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1))

model.summary()
```

The layers are stacked sequentially to build the classifier:

1. The first layer is a TensorFlow Hub layer. This layer uses a pre-trained Saved Model to map a sentence into its embedding vector. The model that we are using ([google/tf2-preview/gnews-swivel-20dim/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1)) splits the sentence into tokens, embeds each token and then combines the embedding. The resulting dimensions are: `(num_examples, embedding_dimension)`.
2. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.
3. The last layer is densely connected with a single output node. This outputs logits: the log-odds of the true class, according to the model.

### Hidden units

The above model has two intermediate or "hidden" layers, between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, the amount of freedom the network is allowed when learning an internal representation.

If a model has more hidden units (a higher-dimensional representation space), and/or more layers, then the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns: patterns that improve performance on training data but not on the test data. This is called *overfitting*, and we'll explore it later.

### Loss function and optimizer

A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs logits (a single-unit layer with no activation), we'll use the `binary_crossentropy` loss function with `from_logits=True`.

This isn't the only choice for a loss function; you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities: it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.

Later, when we are exploring regression problems (say, to predict the price of a house), we will see how to use another loss function called mean squared error.

Now, configure the model to use an optimizer and a loss function:

```
model.compile(optimizer='adam',
              loss=tf.losses.BinaryCrossentropy(from_logits=True),
              metrics=[tf.metrics.BinaryAccuracy(threshold=0.0, name='accuracy')])
```

## Create a validation set

When training, we want to check the accuracy of the model on data it hasn't seen before. Create a *validation set* by setting apart 10,000 examples from the original training data. (Why not use the testing set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate our accuracy).

```
x_val = train_examples[:10000]
partial_x_train = train_examples[10000:]

y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
```

## Train the model

Train the model for 40 epochs in mini-batches of 512 samples.
This is 40 iterations over all samples in the `partial_x_train` and `partial_y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:

```
history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)
```

## Evaluate the model

And let's see how the model performs. Two values will be returned: loss (a number which represents our error, lower values are better) and accuracy.

```
results = model.evaluate(test_examples, test_labels)

print(results)
```

This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.

## Create a graph of accuracy and loss over time

`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:

```
history_dict = history.history
history_dict.keys()
```

There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:

```
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']

epochs = range(1, len(acc) + 1)

# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.show()

plt.clf()   # clear figure

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.show()
```

In this plot, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.

Notice the training loss *decreases* with each epoch and the training accuracy *increases* with each epoch. This is expected when using a gradient descent optimization: it should minimize the desired quantity on every iteration.

This isn't the case for the validation loss and accuracy; they seem to peak after about twenty epochs. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations *specific* to the training data that do not *generalize* to test data.

For this particular case, we could prevent overfitting by simply stopping the training after twenty or so epochs. Later, you'll see how to do this automatically with a callback.
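As noted above, stopping the training at the right epoch can be automated with a callback. A minimal sketch using the standard `tf.keras.callbacks.EarlyStopping` API (not part of this tutorial's original code) would be:

```
# Early stopping: halt training once the validation loss stops improving,
# and keep the best weights seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                              patience=3,
                                              restore_best_weights=True)

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    callbacks=[early_stop],
                    verbose=1)
```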
# Sample grouping We are going to linger into the concept of sample groups. As in the previous section, we will give an example to highlight some surprising results. This time, we will use the handwritten digits dataset. ``` from sklearn.datasets import load_digits digits = load_digits() data, target = digits.data, digits.target ``` We will recreate the same model used in the previous exercise: a logistic regression classifier with preprocessor to scale the data. ``` from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LogisticRegression from sklearn.pipeline import make_pipeline model = make_pipeline(StandardScaler(), LogisticRegression()) ``` We will use the same baseline model. We will use a `KFold` cross-validation without shuffling the data at first. ``` from sklearn.model_selection import cross_val_score, KFold cv = KFold(shuffle=False) test_score_no_shuffling = cross_val_score(model, data, target, cv=cv, n_jobs=-1) print(f"The average accuracy is " f"{test_score_no_shuffling.mean():.3f} +/- " f"{test_score_no_shuffling.std():.3f}") ``` Now, let's repeat the experiment by shuffling the data within the cross-validation. ``` cv = KFold(shuffle=True) test_score_with_shuffling = cross_val_score(model, data, target, cv=cv, n_jobs=-1) print(f"The average accuracy is " f"{test_score_with_shuffling.mean():.3f} +/- " f"{test_score_with_shuffling.std():.3f}") ``` We observe that shuffling the data improves the mean accuracy. We could go a little further and plot the distribution of the testing score. We can first concatenate the test scores. ``` import pandas as pd all_scores = pd.DataFrame( [test_score_no_shuffling, test_score_with_shuffling], index=["KFold without shuffling", "KFold with shuffling"], ).T ``` Let's plot the distribution now. ``` import matplotlib.pyplot as plt import seaborn as sns all_scores.plot.hist(bins=10, edgecolor="black", density=True, alpha=0.7) plt.xlim([0.8, 1.0]) plt.xlabel("Accuracy score") plt.legend(bbox_to_anchor=(1.05, 0.8), loc="upper left") _ = plt.title("Distribution of the test scores") ``` The cross-validation testing error that uses the shuffling has less variance than the one that does not impose any shuffling. It means that some specific fold leads to a low score in this case. ``` print(test_score_no_shuffling) ``` Thus, there is an underlying structure in the data that shuffling will break and get better results. To get a better understanding, we should read the documentation shipped with the dataset. ``` print(digits.DESCR) ``` If we read carefully, 13 writers wrote the digits of our dataset, accounting for a total amount of 1797 samples. Thus, a writer wrote several times the same numbers. Let's suppose that the writer samples are grouped. Subsequently, not shuffling the data will keep all writer samples together either in the training or the testing sets. Mixing the data will break this structure, and therefore digits written by the same writer will be available in both the training and testing sets. Besides, a writer will usually tend to write digits in the same manner. Thus, our model will learn to identify a writer's pattern for each digit instead of recognizing the digit itself. We can solve this problem by ensuring that the data associated with a writer should either belong to the training or the testing set. Thus, we want to group samples for each writer. Here, we will manually define the group for the 13 writers. 
``` from itertools import count import numpy as np # defines the lower and upper bounds of sample indices # for each writer writer_boundaries = [0, 130, 256, 386, 516, 646, 776, 915, 1029, 1157, 1287, 1415, 1545, 1667, 1797] groups = np.zeros_like(target) lower_bounds = writer_boundaries[:-1] upper_bounds = writer_boundaries[1:] for group_id, lb, up in zip(count(), lower_bounds, upper_bounds): groups[lb:up] = group_id ``` We can check the grouping by plotting the indices linked to writer ids. ``` plt.plot(groups) plt.yticks(np.unique(groups)) plt.xticks(writer_boundaries, rotation=90) plt.xlabel("Target index") plt.ylabel("Writer index") _ = plt.title("Underlying writer groups existing in the target") ``` Once we group the digits by writer, we can use cross-validation to take this information into account: the class containing `Group` should be used. ``` from sklearn.model_selection import GroupKFold cv = GroupKFold() test_score = cross_val_score(model, data, target, groups=groups, cv=cv, n_jobs=-1) print(f"The average accuracy is " f"{test_score.mean():.3f} +/- " f"{test_score.std():.3f}") ``` We see that this strategy is less optimistic regarding the model statistical performance. However, this is the most reliable if our goal is to make handwritten digits recognition writers independent. Besides, we can as well see that the standard deviation was reduced. ``` all_scores = pd.DataFrame( [test_score_no_shuffling, test_score_with_shuffling, test_score], index=["KFold without shuffling", "KFold with shuffling", "KFold with groups"], ).T all_scores.plot.hist(bins=10, edgecolor="black", density=True, alpha=0.7) plt.xlim([0.8, 1.0]) plt.xlabel("Accuracy score") plt.legend(bbox_to_anchor=(1.05, 0.8), loc="upper left") _ = plt.title("Distribution of the test scores") ``` As a conclusion, it is really important to take any sample grouping pattern into account when evaluating a model. Otherwise, the results obtained will be over-optimistic in regards with reality.
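If you only need a single writer-independent train/test split rather than full cross-validation, the same `groups` array can be passed to `GroupShuffleSplit`. The sketch below is an added illustration, not part of the original lesson.

```
from sklearn.model_selection import GroupShuffleSplit

# One grouped split: every sample from a given writer ends up entirely in
# either the train set or the test set.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(data, target, groups=groups))

model.fit(data[train_idx], target[train_idx])
test_score = model.score(data[test_idx], target[test_idx])
print(f"Writer-independent test accuracy: {test_score:.3f}")
```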
# OSMnx + igraph for faster performance Author: [Geoff Boeing](https://geoffboeing.com/) NetworkX is ubiquitous, easy to use, and sufficiently fast for most use cases. But it can be slow for analyzing very large graphs because it is pure Python, trading off speed for ease of use. Fortunately, converting OSMnx-created NetworkX graphs to other graph libraries' types is relatively quick and simple and makes analysis much faster for those use cases where it's needed. For example, you might consider converting your NetworkX graph to igraph, graph-tool, or cugraph for analysis. This example demonstrates igraph. First install [igraph](https://igraph.org/python/) or run Jupyter from the [Docker container](https://hub.docker.com/r/gboeing/osmnx) (which already has it installed along with OSMnx and NetworkX). - [Overview of OSMnx](http://geoffboeing.com/2016/11/osmnx-python-street-networks/) - [GitHub repo](https://github.com/gboeing/osmnx) - [Examples, demos, tutorials](https://github.com/gboeing/osmnx-examples) - [Documentation](https://osmnx.readthedocs.io/en/stable/) - [Journal article/citation](http://geoffboeing.com/publications/osmnx-complex-street-networks/) ``` import operator import igraph as ig import networkx as nx import numpy as np import osmnx as ox %matplotlib inline ox.config(use_cache=True, log_console=False) print(ox.__version__) print(ig.__version__) weight = "length" ``` ## Construct graphs ``` # create networkx graph G_nx = ox.graph_from_place("Piedmont, CA, USA", network_type="drive") osmids = list(G_nx.nodes) G_nx = nx.relabel.convert_node_labels_to_integers(G_nx) # give each node its original osmid as attribute since we relabeled them osmid_values = {k: v for k, v in zip(G_nx.nodes, osmids)} nx.set_node_attributes(G_nx, osmid_values, "osmid") %%time # convert networkx graph to igraph G_ig = ig.Graph(directed=True) G_ig.add_vertices(G_nx.nodes) G_ig.add_edges(G_nx.edges()) G_ig.vs["osmid"] = osmids G_ig.es[weight] = list(nx.get_edge_attributes(G_nx, weight).values()) assert len(G_nx.nodes()) == G_ig.vcount() assert len(G_nx.edges()) == G_ig.ecount() ``` ## Shortest paths ``` source = list(G_nx.nodes())[0] target = list(G_nx.nodes())[-1] %%time path1 = G_ig.get_shortest_paths(v=source, to=target, weights=weight)[0] %%time path2 = nx.shortest_path(G_nx, source, target, weight=weight) assert path1 == path2 %%time path_length1 = G_ig.shortest_paths(source=source, target=target, weights=weight)[0][0] %%time path_length2 = nx.shortest_path_length(G_nx, source, target, weight) assert path_length1 == path_length2 ``` ## Centrality analysis ``` %%time closeness1 = G_ig.closeness(vertices=None, mode="ALL", cutoff=None, weights=weight, normalized=True) max_closeness1 = np.argmax(closeness1) %%time closeness2 = nx.closeness_centrality(G_nx, distance=weight, wf_improved=True) max_closeness2 = max(closeness2.items(), key=operator.itemgetter(1))[0] # confirm same node has max closeness in both graphs assert G_nx.nodes[max_closeness2]["osmid"] == G_ig.vs[max_closeness1]["osmid"] ```
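Because the igraph vertices were created from the integer-relabeled NetworkX nodes, the shortest path computed with igraph can be drawn directly on the OSMnx graph. A small sketch, assuming the objects defined above are still in memory:

```
# path1 contains integer node labels that are valid node IDs of G_nx,
# so it can be plotted as a route on the OSMnx graph.
fig, ax = ox.plot_graph_route(G_nx, route=path1, route_color="r", node_size=0)
```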
<a href="https://colab.research.google.com/github/harnalashok/deeplearning-sequences/blob/main/tf%20data%20API.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` #Last amended: 12th March, 2021 # My folder: harnalashok/github/deeplearning-sequences # References: # https://www.tensorflow.org/tutorials/text/text_classification_rnn # https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/data.ipynb#scrollTo=m5bz7R1xhX1f # https://stackoverflow.com/a/49579995/3282777 # https://www.tensorflow.org/tutorials/load_data/text # Objectives: # i) Learning to work with tensors # ii) Learning to work with tf.data API # iii) Text Classification--Work in progess # 1.0 Call libraries import numpy as np import tensorflow_datasets as tfds import tensorflow as tf import matplotlib.pyplot as plt import os # 1.1 More libraries from tensorflow.keras import utils from tensorflow.keras import preprocessing from tensorflow.keras.layers.experimental.preprocessing import TextVectorization # 1.2 Set numpy decimal printoptions # Limit display to precision of 3 np.set_printoptions(precision=3) # 1.3 from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" ``` # Introduction to tensors Refer this [page](https://www.tensorflow.org/guide/tensor) Tensors are (kind of) like np.arrays. All tensors are immutable like Python numbers and strings: you can never update the contents of a tensor, only create a new one. Rank in tensor is akin to dimensions in numpy. ### Basics ``` # 2.0 Rank 0 # A "scalar" or "rank-0" tensor . # A scalar contains a single value, # and no "axes". print( tf.constant(4)) # 2.1 Rank 1 # A "vector" or "rank-1" tensor is like # a list of values. A vector has one axis: print(tf.constant([2.0, 3.0, 4.0])) # 2.2 Rank 2 # A "matrix" or "rank-2" tensor has two axes: s=tf.constant( [ [1, 2], [3, 4], [5, 6] ] ) print(s) # 2.3 Rank 3 # There can be an arbitrary number of # axes (sometimes called "dimensions") rank_3_tensor = tf.constant( [ [ [0, 1, 2, 3, 4], [5, 6, 7, 8, 9] ], [ [10, 11, 12, 13, 14], [15, 16, 17, 18, 19] ], [ [20, 21, 22, 23, 24], [25, 26, 27, 28, 29] ], ] ) rank_3_tensor ``` ### Tensor to numpy Convert a tensor to numpy array using `np.array` method or using `tensor.numpy` method: ``` # 3.0 Tensor to numpy np.array(s) print() s.numpy() print() np.array(rank_3_tensor) print() rank_3_tensor.numpy() ``` ### Basic maths on tensors Basic math on tensors, including addition, element-wise multiplication, and matrix multiplication. ``` # 3.1 a = tf.constant([[1, 2], [3, 4]]) b = tf.constant([[1, 2], [1, 0]]) c = tf.ones([2,3]) # 3.2 a+b print() tf.add(a,b) print() a * b print() tf.multiply(a,b) print() tf.matmul(a,b) # 3.3 This fails. Tensors are very sensitive # to data types r = tf.constant([1.0,2.0], shape = [2,1]) p = tf.constant(5) r * p # 3.4 This also fails # even though both operands # are floats r = tf.constant([1.0,2.0], shape = [2,1]) r p = tf.constant(5) r * tf.cast(p, tf.float16) # 3.5 This succeeds r = tf.constant([1.0,2.0], shape = [2,1]) r p = tf.constant(5) r * tf.cast(p, tf.float32) ``` #### Some operations on tensors ``` # 4.0 c = tf.constant([[4.0, 5.0], [10.0, 1.0]]) # 4.1 Find the largest value print(tf.reduce_max(c)) # 4.2 Find the index of the largest value print(tf.argmax(c)) # 4.3 Compute the softmax # Note that each 'row' # (or axis 0) or putput # sums to 1 print(tf.nn.softmax(c)) ``` #### Some vocabulary Tensors have shapes. 
Some vocabulary:

> **Shape**: The length (number of elements) of each of the axes of a tensor.

> **Rank**: Number of tensor axes. A scalar has rank 0, a vector has rank 1, a matrix is rank 2.

> **Axis** or **Dimension**: A particular dimension of a tensor.

> **Size**: The total number of items in the tensor, i.e. the product of the elements of the shape vector.

```
# 5.0
rank_4_tensor = tf.zeros([3, 2, 4, 5])
# axis 0 is 3
# axis -1 is 5
```

*(Figure: diagram of the axes of the rank-4 tensor of shape [3, 2, 4, 5] created above; the embedded base64 image data is omitted here.)*

```
# 5.2 Like in numpy arrays, we have attributes:
#     dtype, ndim, shape
rank_4_tensor.dtype
print()
rank_4_tensor.ndim
print()
rank_4_tensor.shape
print()
rank_4_tensor.shape[0]
print()
rank_4_tensor.shape[-1]
print()
tf.size(rank_4_tensor).numpy()
```

It is important to keep in mind the inherent or implied meaning of each axis.
In the above example, here are the implied meanings: *Batch size:* 3 *Depth*: 2 *Width*: 4 *Height*: 5 ### Indexing (See [here](https://www.tensorflow.org/guide/tensor#indexing)) Single-axis indexing TensorFlow follows standard Python indexing rules, similar to indexing a list or a string in Python, and the basic rules for NumPy indexing. > indexes start at 0 > negative indices count backwards from the end > colons, :, are used for slices: start:stop:step ``` # 6.0 Sample 1-axes tensor rank_1_tensor = tf.constant([0, 1, 1, 2, 3, 5, 8, 13, 21, 34]) print(rank_1_tensor.numpy()) # 6.2 Indexing with a : slice keeps the axis: #6.2.1 Everything rank_1_tensor[:].numpy() print() #6.2.2 Before 4: rank_1_tensor[:4].numpy() print() #6.2.3 From 4 to the end rank_1_tensor[4:].numpy() print() #6.2.4 From 2, before 7: rank_1_tensor[2:7].numpy() print() #6.2.5 Every other item rank_1_tensor[::2].numpy() print() #6.2.6 Reversed rank_1_tensor[::-1].numpy() print() ``` ### Manipulating shapes ``` # 7.0 Shape returns a `TensorShape` object # that shows the size along each axis x = tf.constant([[1], [2], [3]]) print(x.shape) ``` You can reshape a tensor into a new shape. The tf.reshape operation is fast and cheap as the underlying data does not need to be duplicated. ``` # 7.1 You can reshape a tensor to a new shape. # Note that you're passing in a list reshaped = tf.reshape(x, [1, 3]) reshaped.shape # 7.2 We created this tensor earlier print(rank_3_tensor) ``` If you flatten a tensor you can see what order it is laid out in memory. ``` #7.3 A `-1` passed in the `shape` argument # says "Whatever fits". print(tf.reshape(rank_3_tensor, [-1])) ``` Typically the only reasonable use of tf.reshape is to combine or split adjacent axes (or add/remove 1s). For this 3x2x5 tensor, reshaping to (3x2)x5 or 3x(2x5) are both reasonable things to do, as the slices do not mix: ``` # 7.4 print(tf.reshape(rank_3_tensor, [3*2, 5]), "\n") print(tf.reshape(rank_3_tensor, [3, -1])) ``` #### Reshaping can be a mess Reshaping will "work" for any new shape with the same total number of elements, but it will not do anything useful if you do not respect the order of the axes. It will be a mess. ``` # 8.0 Bad examples: don't do this # Only multiply adjacent indices # 8.1 You can't reorder axes with reshape. print(tf.reshape(rank_3_tensor, [2, 3, 5]), "\n") # 8.2 This is a mess print(tf.reshape(rank_3_tensor, [5, 6]), "\n") ``` ### Broadcasting Broadcasting is a concept borrowed from the equivalent feature in NumPy. In short, under certain conditions, smaller tensors are "stretched" automatically to fit larger tensors when running combined operations on them. The simplest and most common case is when you attempt to multiply or add a tensor to a scalar. In that case, the scalar is broadcast to be the same shape as the other argument. ``` # 9.0 x = tf.constant([1, 2, 3]) y = tf.constant(2) z = tf.constant([2, 2, 2]) # 9.1 x * y x + z ``` Likewise, axes with length 1 can be stretched out to match the other arguments. Both arguments can be stretched in the same computation. In this case a 3x1 matrix is element-wise multiplied by a 1x4 matrix to produce a 3x4 matrix. Note how the leading 1 is optional: The shape of y is [4]. 
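The notebook's own cells below (`# 9.2` onward) demonstrate this. As an extra illustration (a sketch, not from the original notebook), `tf.broadcast_to` materializes the stretched operands explicitly, so you can check that the result matches ordinary broadcasting:

```
import tensorflow as tf

x = tf.reshape(tf.constant([1, 2, 3]), [3, 1])   # shape [3, 1]
y = tf.range(1, 5)                               # shape [4]

# broadcasting a [3, 1] tensor against a [4] tensor yields shape [3, 4]
print((x * y).shape)

# the same result, with both operands stretched explicitly to [3, 4]
x_b = tf.broadcast_to(x, [3, 4])
y_b = tf.broadcast_to(y, [3, 4])
print(tf.reduce_all(x * y == x_b * y_b).numpy())   # True
```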
```
# 9.2 These are the same computations
x = tf.reshape(x, [3, 1])
y = tf.range(1, 5)

# 9.2.1
print(x, "\n")
print(y, "\n")
print(tf.multiply(x, y))

# 9.2.2
x = tf.constant([1, 2, 3])
y = tf.constant(2)
z = tf.constant([2, 2, 2])

# 9.2.3 All of these are the same computation
print(tf.multiply(x, 2))
print(x * y)
print(x * z)
```

#### Logical operations

```
# 9.3 Logical operations on tensors
#     Please refer to tf.math:
#     https://www.tensorflow.org/api_docs/python/tf/math
print(tf.constant([1, 2, 10]) < 5)
print(tf.constant([True, False, True, True], dtype=tf.bool))

# 9.4 A tensor can be reshaped as per its shape argument
print(tf.constant(np.arange(20), shape=[5, 4]))
x = tf.constant(np.arange(20), shape=[5, 4])

# Display first two rows and columns
x[:2, :2]

# 9.5 Show first and third rows
#     This works as an exception
x[tf.constant([True, False, True, False, False], dtype=tf.bool)]

# But the following do not work:
# Valid indices are: integers, slices (`:`),
# ellipsis (`...`), tf.newaxis (`None`)
# and scalar tf.int32/tf.int64 tensors
#x[
#   tf.constant([True, False, True, False, False], dtype=tf.bool),
#   tf.constant([True, False, True, False], dtype=tf.bool)
#  ]
#x[
#   tf.constant([1, 0, 0, 1, 0], dtype=tf.int32),
#   tf.constant([1, 0, 1, 0], dtype=tf.int32)
#  ]
```

### Define custom loss function

Refer: page 384 of the book by Aurélien Géron.

```
# 10.0 Define a model
import pandas as pd
from tensorflow import keras
from sklearn.datasets import load_boston

# 10.1
X, y = load_boston(return_X_y=True)
X.shape     # (506, 13)

# 10.2
model = keras.models.Sequential(
            [
                keras.layers.Dense(5, activation='relu'),
                keras.layers.Dense(1, activation='sigmoid')
            ]
        )
```

#### Huber loss

For a comparative picture of the loss functions used in regression, see [here](https://heartbeat.fritz.ai/5-regression-loss-functions-all-machine-learners-should-know-4fb140e9d4b0). Briefly, if the data is noisy, use MAE; if there are many outliers as well as noise, use the Huber loss. The Huber loss is less sensitive to outliers in the data than the squared-error loss. It is also differentiable at 0 (unlike MAE). It is basically the absolute error, which becomes quadratic when the error is small. How small that error has to be to become quadratic depends on a hyperparameter, 𝛿 (delta), which can be tuned.
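The implementation that follows the figure below hard-codes the threshold at 1.0. As a sketch of making δ tunable (using the common closure pattern for parameterized Keras losses; this is not code from the original notebook):

```
import tensorflow as tf

def make_huber(delta=1.0):
    def huber(y_true, y_pred):
        error = y_true - y_pred
        is_small_error = tf.abs(error) < delta
        squared_loss = tf.square(error) / 2.0
        linear_loss = delta * tf.abs(error) - 0.5 * delta ** 2
        return tf.where(is_small_error, squared_loss, linear_loss)
    return huber

# usage, e.g.: model.compile(loss=make_huber(delta=2.0), optimizer="nadam")
```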
*(Figure: plot of the Huber loss and related regression losses as a function of the prediction error; the embedded base64 image data is omitted here.)*

```
# 10.3 Custom loss function
def huber_loss(y_true, y_pred):
    error = y_true - y_pred
    is_small_error = tf.abs(error) < 1.0
    squared_loss = tf.square(error) / 2.0
    linear_loss = tf.abs(error) - 0.5
    return tf.where(is_small_error, squared_loss, linear_loss)

# 10.4 Compile and run the model with our loss function
model.compile(loss=huber_loss, metrics="mse", optimizer="nadam")
model.fit(X, y, epochs=20)
```

### Generate random tensors

Refer to this [article](https://www.tensorflow.org/guide/random_numbers).

```
# 11.0 Generate random data using
#      a Generator object
g1 = tf.random.Generator.from_seed(1)

# 11.0.1 Use object 'g1'
g1.normal([2, 3])
print()

# 11.0.2
g1.uniform([1])
print()

# 11.1 Generate random data directly
tf.random.uniform([4])          # shape is (4,)
print()
tf.random.uniform([4]).shape
print()
tf.random.uniform([4]).numpy()
print()

# 11.2
tf.random.normal(shape=(10, 4), mean=3, stddev=1.3)
print()
tf.random.normal(shape=(10, 4), mean=3, stddev=1.3).shape
print()

# 11.3
tf.random.uniform([4, 100], maxval=100, dtype=tf.int32)
```

# tf.data.Dataset API

Understanding the Dataset object. The `tf.data.Dataset` API supports writing descriptive and efficient **input pipelines**. Dataset usage follows a common pattern (see [the right-panel here](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) and a colab page with an example [here](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/data.ipynb#scrollTo=k_5N7CdNGYAa)):

> Create a source dataset from your input data.

> Write dataset transformations to preprocess the data.

> Iterate over the dataset and process the elements.

Iteration happens in a streaming fashion, so the full dataset does not need to fit into memory. That is, processing is performed batch-wise.
The simplest way to create a dataset is to create it from a python list: ## from_tensors() vs from_tensor_slices() See this [stackoverflow answer](https://stackoverflow.com/a/49579995/3282777) Read `from_tensor_slices()` as `to_tensor_slices()` ``` # 1.0 Call libraries import numpy as np import tensorflow_datasets as tfds import tensorflow as tf import matplotlib.pyplot as plt import os # 1.1 More libraries from tensorflow.keras import utils from tensorflow.keras import preprocessing import pathlib from tensorflow.keras.layers.experimental.preprocessing import TextVectorization # 1.2 Set numpy decimal printoptions # Limit display to precision of 3 np.set_printoptions(precision=3) # 1.3 from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" # 2.0 Read from_tensor_slices as to_tensor_slices() # 2.0.1 from_tensor_slices() dataset1 = tf.data.Dataset.from_tensor_slices([[1,2],[3,4],[5,6]]) # 2.0.2 from_tensors dataset2 = tf.data.Dataset.from_tensors([[1,2],[3,4],[5,6]]) # 2.1 Print the two objects dataset1 print() dataset2 # 2.2 Extract each element from # dataset. dataset is iterable. # Four elements. Each has two values for elem in dataset1: print("--") print(elem.numpy()) # 2.2.1 Just one element for elem in dataset2: print("--") print(elem) # 2.3 dataset3 = tf.data.Dataset.from_tensor_slices( ( tf.random.uniform([4]), # shape [4] tf.random.normal( [4, 6], mean=1.5, stddev=2.2 ) ) ) # 2.3.1 Print contents for elem in dataset3: # Four elements. # Each element is a tuple of two tensors # one from uniform and the other from normal dist print(elem) # 2.4 Next same internal contents but using from_tensors dataset4 = tf.data.Dataset.from_tensors( ( tf.random.uniform([4]), # shape [4] tf.random.normal([4, 6], mean=1.5,stddev=2.2) # [4,100] ) ) # 2.4.1 for elem in dataset4: # One element # This elem is a tuple of two tensors print() print(elem) # Two tensors can be combined into one Dataset object. features = tf.constant([[1, 3], [2, 1], [3, 3]]) # ==> 3x2 tensor labels = tf.constant(['A', 'B', 'A']) # ==> 3x1 tensor dataset = tf.data.Dataset.from_tensor_slices((features, labels)) # Both the features and the labels tensors can be converted # to a Dataset object separately and combined after. features_dataset = tf.data.Dataset.from_tensor_slices(features) labels_dataset = tf.data.Dataset.from_tensor_slices(labels) dataset = tf.data.Dataset.zip((features_dataset, labels_dataset)) #help(tfds.load) ``` ## Dataset objects from numpy-arrays in memory If all data is in memory, to create a `Dataset` object, use `Dataset.from_tensor_slices()` on numpy arrays (in memory). ``` # 3.0 Download numpy arrays to # two objects (train,test) in memory # 3.0.1 First delete any existing folder !rm -rf /root/.keras/datasets !ls -la /root/.keras # 3.0.2 Next download data: train, test = tf.keras.datasets.fashion_mnist.load_data() # 3.0.3 Check downloaded files ! ls -la /root/.keras/datasets # 4.1 Little more checking: type(train) print() type(train[0]) print() type(test[0]) # 4.2 Separate images and labels: images,labels = train # 4.2.1 Do something with images images = images/255 # 4.3 Create Dataset object. # It is similar to way we did in #3.3 above: dataset = tf.data.Dataset.from_tensor_slices((images, labels)) type(dataset) dataset ``` ## Dataset objects from textfiles on disk Please see this [tutorial](https://www.tensorflow.org/tutorials/load_data/text). We are downloading files from stackoverflow. The file contains questions asked on some subjects. 
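Since this subsection is only outlined in the notebook (see the note just below), here is a minimal, hedged sketch of the idea of streaming lines from plain-text files on disk; the glob pattern is a placeholder, and the concrete Stack Overflow download follows later under "Consuming text files".

```
# Hedged sketch: stream lines from plain-text files on disk.
# The glob pattern is a placeholder; point it at real .txt files.
import tensorflow as tf

file_paths = tf.data.Dataset.list_files("/tmp/text_data/*.txt")   # hypothetical location
lines = tf.data.TextLineDataset(file_paths)

# Each element is one line of text from one of the matched files
for line in lines.take(3):
    print(line.numpy())
```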
(This section is to be completed.) ### Consuming CSV file See [Loading CSV Files](../tutorials/load_data/csv.ipynb), and [Loading Pandas DataFrames](../tutorials/load_data/pandas.ipynb) for more examples. The CSV file format is a popular format for storing tabular data in plain text. For example: ``` # 5.0 Download titanic file 'train.csv' # from a URL: import pandas as pd titanic_file = tf.keras.utils.get_file( "train.csv", "https://storage.googleapis.com/tf-datasets/titanic/train.csv" ) # 5.1 Read downloaded file: df = pd.read_csv(titanic_file) df.head() # 5.2 Transform pandas dataframe to dictionary dict(df) # 6.0 Create tf.data.Dataset objects: # tensor_slice: Slices along along first dimension. # This operation preserves the structure of the input # tensors, removing the first dimension of each tensor # and using it as the dataset dimension. titanic_slices = tf.data.Dataset.from_tensor_slices(dict(df)) type(titanic_slices) # TensorSliceDataset # 6.1 Just extract one row/batch n = 1 for feature_batch in titanic_slices.take(n): for key, value in feature_batch.items(): print(" {}:{}".format(key, value)) ``` A more scalable approach is to load from disk as necessary. The tf.data module provides methods to extract records from one or more CSV files fron disk. The `experimental.make_csv_dataset` function is the high level interface for reading sets of csv files. It supports column type inference and many other features, like batching and shuffling, to make usage simple. ``` # 6.2 batch_size = 4 titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=batch_size, label_name="survived" ) # 6.3 Get just one batch # Batch size is 4 howManyBatches = 1 for feature_batch, label_batch in titanic_batches.take(howManyBatches): print("'survived': {}".format(label_batch)) print("features:") for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) # 6.4 titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) # 6.5 for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ``` You can use the select_columns argument if you only need a subset of columns. ``` # 6.6 titanic_batches = tf.data.experimental.make_csv_dataset( titanic_file, batch_size=4, label_name="survived", select_columns=['class', 'fare', 'survived']) # 6.7 for feature_batch, label_batch in titanic_batches.take(1): print("'survived': {}".format(label_batch)) for key, value in feature_batch.items(): print(" {!r:20s}: {}".format(key, value)) ``` ### Consuming text files #### Download file from a url How to download a file from a URL. This way any file can also be downloaded from *gdrive* without mounting it. Uses [`get_file()`](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) utility of `tf.keras.utils`. See this [example](https://www.tensorflow.org/tutorials/load_data/text). ``` # 7.0 Which files are where: !ls -la /root/.keras/ !ls -la /root/.keras/datasets # 7.0.1 Deleting datasets dir !rm -rf /root/.keras/datasets # 7.1 Specify file URL from stackoverflow. # Our file: stack_overflow_16k.tar.gz data_url = 'https://storage.googleapis.com/download.tensorflow.org/data/stack_overflow_16k.tar.gz' # 7.1.1 Use keras get_file() utility dnld = utils.get_file( 'stack_overflow_16k.tar.gz', data_url, untar=True, ) # 7.1.2 Where is 'dnld' object? 
# Get its path dnld_dir = pathlib.Path(dnld).parent dnld_dir # '/root/.keras/datasets' # 7.1.3 ! ls -la /root/.keras/datasets # 5.1.4 Result after untarring ! ls -la /root/.keras/datasets/train ! ls -la /root/.keras/datasets/test ``` #### Create a dataset of text-lines ``` # 8.0 Files in 'train/python' folder train_dir = "/root/.keras/datasets/train/" python_files = os.listdir(train_dir + "python") python_files[:3] print() # 8.0.1 Join directory path to file-names files_with_path = [os.path.join(train_dir, file) for file in python_files] files_with_path[:4] # 8.1 tf.data.Dataset of just filepaths # Prepare a dataset of all files matching # one or more glob patterns. filepath_dataset_py = tf.data.Dataset.list_files("/root/.keras/datasets/train/python/*.txt") # 8.2 Create a datset comprising lines from one or more text files. data_lines = tf.data.TextLineDataset(filepath_dataset_py) # 8.2.1 Examine 5 files: for line in data_lines.take(5): print(line.numpy) ``` #### Downloading data from gdrive ``` # 8.3 gdrive download: May also download a shared file from gdrive # No need to mount gdrive # Download 'archive.csv.zip' from my gdrive: gdrive_url = "https://drive.google.com/file/d/14gjcWMRJORJ4bQ2QerWaodDpLi5oDcxM/view?usp=sharing" # 8.3.1 Use keras get_file() utility dnld_gdrive = utils.get_file( 'archive.csv.zip', gdrive_url, ) # 8.3.2 Where is 'dataset' object? # Get its path gdrive_dir = pathlib.Path(dnld_gdrive).parent gdrive_dir # PosixPath('/root/.keras/datasets') # 8.3.3 Check folders under 'train' ! ls -la /root/.keras/datasets/train ``` ## preprocessing data The Keras preprocessing layers API allows developers to build Keras-native input processing pipelines. These input processing pipelines can be used as independent preprocessing code in non-Keras workflows, combined directly with Keras models, and exported as part of a Keras SavedModel. With Keras preprocessing layers, you can build and export models that are truly end-to-end: models that accept raw images or raw structured data as input; models that handle feature normalization or feature value indexing on their own. See this [link](https://www.tensorflow.org/guide/keras/preprocessing_layers) for usage of preprocessing library methods. Pay attention to the panel on the right. ### preprocessing.text_dataset_from_directory Refer this [link](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text_dataset_from_directory). The `preprocessing.text_dataset_from_directory()` expects a directory structure as follows. See an example [here](https://www.tensorflow.org/tutorials/keras/text_classification). main_directory/<br> ...class_a/<br> ......a_text_1.txt<br> ......a_text_2.txt<br> ...class_b/<br> ......b_text_1.txt<br> ......b_text_2.txt<br> When running a machine learning experiment, it is a best practice to divide your dataset into three splits: train, validation, and test. The Stack Overflow dataset has already been divided into train and test, but it lacks a validation set. Create a validation set using an 80:20 split of the training data by using the validation_split argument below. ``` # 9.0 Our stackoverflow training data is here: train_dir = "/root/.keras/datasets/train" # 9.0.1 Some constants batch_size = 32 seed = 42 # 9.1 Create a training dataset object raw_train_ds = preprocessing.text_dataset_from_directory( train_dir, labels = "inferred", batch_size=batch_size, # fraction of data to # reserve for validation. 
validation_split=0.2, # Return back 'train' subset # but in batches subset='training', seed =2 ) # 9.1.1 type(raw_train_ds) # BatchDataset # 9.2 # Dataset.take(n) object is iterable. # Iterate over the dataset and print out a few examples, # to get a feel for the data. # take(n): Reads n batches for text_batch, label_batch in raw_train_ds.take(1): print(text_batch.numpy().shape) # 32: batch_size # Change batch_size to see how it changes for i in range(10): print("Question: ", text_batch.numpy()[i]) print("Label:", label_batch.numpy()[i]) # 9.3 Get class_names in the dataset: for i, label in enumerate(raw_train_ds.class_names): print("Label", i, "corresponds to", label) # 10.0 From train_dir pick-up remaining validation # dataset but in batches: raw_validation_ds = preprocessing.text_dataset_from_directory( train_dir, batch_size=batch_size, validation_split=0.2, # Return validation data # but in batches subset='validation', seed = 2 ) # 11.0 Test dataset test_dir = "/root/.keras/datasets/test" # 11.1 raw_test_ds = preprocessing.text_dataset_from_directory( test_dir, batch_size=batch_size ) ``` ### Text Vectorization Once our train, test and validation datasets are ready, we proceed to feed textvectorization layer. #### Using `preprocessing.TextVectorization` layer. Its [full syntax](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/preprocessing/TextVectorization) is: `tf.keras.layers.experimental.preprocessing.TextVectorization( max_tokens=None, standardize=LOWER_AND_STRIP_PUNCTUATION, split=SPLIT_ON_WHITESPACE, ngrams=None, output_mode=INT, output_sequence_length=None, pad_to_max_tokens=True, vocabulary=None, **kwargs ) ` `TextVectorization` layer will standardize, tokenize, and vectorize the data using the preprocessing.TextVectorization layer. > Standardization refers to preprocessing the text, typically to remove punctuation or HTML elements to simplify the dataset. > Tokenization refers to splitting strings into tokens (for example, splitting a sentence into individual words by splitting on whitespace). > Vectorization refers to converting tokens into numbers so they can be fed into a neural network. All of these tasks can be accomplished with this layer. You can learn more about each of these in the API doc. > The default standardization converts text to lowercase and removes punctuation. > The default tokenizer splits on whitespace. > The default vectorization mode is int. This outputs integer indices (one per token). This mode can be used to build models that take word order into account. You can also use other modes, like binary, to build bag-of-word models. ``` from tensorflow.keras.layers.experimental import preprocessing ``` ##### Simple **experiment** ``` data = [ "The title of Rachel Levin’s book, Look Big, is" "just about the best two words of advice one can" " give about how to survive most animal encounters." "In her illustrated service manual, Levin breaks down" "how to handle 50 different kinds of animals common in " "North America, based on expert advice. Let’s look at her" " tips for dealing with five of these creatures and see" " how they stack up with what the experts say—and with" " real-world experience. " ] layer = preprocessing.TextVectorization() layer.adapt(data) vectorized_text = layer(data) print(vectorized_text) data = [ [ "The title of Rachel Levin’s book, Look Big, is" "just about the best two words of advice one can" " give about how to survive most animal encounters." 
"In her illustrated service manual, Levin breaks down" ], ["how to handle 50 different kinds of animals common in " "North America, based on expert advice. Let’s look at her" " tips for dealing with five of these creatures and see" " how they stack up with what the experts say—and with" " real-world experience. " ] ] layer = preprocessing.TextVectorization() layer.adapt(data) vectorized_text = layer(data) print(vectorized_text) # 12.0 VOCAB_SIZE = 10000 MAX_SEQUENCE_LENGTH = 250 # 12.1 Instantiate TextVectorization class int_vectorize_layer = TextVectorization( max_tokens=VOCAB_SIZE, output_mode='int', output_sequence_length=MAX_SEQUENCE_LENGTH ) #binary_vectorize_layer = TextVectorization( # max_tokens=VOCAB_SIZE, # output_mode='binary' # ) # 12.2 Make a text-only dataset (without labels), then call adapt train_text = raw_train_ds.map(lambda text, labels: text) # 12.3 'adapt' or 'fit' over 'int_vectorize_layer' # object. 'adpat' fits the state of the preprocessing # layer to the dataset. int_vectorize_layer.adapt( # The data to train on. # It can be passed either as # a tf.data Dataset, as a NumPy array train_text # ) #binary_vectorize_layer.adapt(train_text) # 12.3.1 int_vectorize_layer.get_vocabulary()[:4] # 12.4 Retrieve a batch (of 32 reviews and labels) from the dataset text_batch, label_batch = next(iter(raw_train_ds)) first_question, first_label = text_batch[0], label_batch[0] print("Question", first_question) print("Label", first_label) # 12.5.1 Expand type(first_question) print("\n--------------\n") # 12.5.2 first_question.get_shape() print("\n--------------\n") # 12.5.3 t = tf.expand_dims(first_question,-1) t.get_shape() print("\n--------------\n") # 12.5.4 print(t) print("\n--------------\n") 12.6 int_vectorize_layer(t) def int_vectorize_text(text, label): text = tf.expand_dims(text, -1) return int_vectorize_layer(text) , label # Retrieve a batch (of 32 reviews and labels) # from the dataset text_batch, label_batch = next(iter(raw_train_ds)) first_question, first_label = text_batch[0], label_batch[0] print("Question", first_question) print("Label", first_label) print("'int' vectorized question:", int_vectorize_text(first_question, first_label)[0]) print("'int' vectorized question:", int_vectorize_text(first_question, first_label)[0]) ######## I am done ############### ```
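As a possible continuation (not part of the original notebook), the vectorizer defined above would typically be mapped over the raw datasets so that downstream models receive integer sequences; this sketch assumes `raw_train_ds`, `raw_validation_ds`, `raw_test_ds` and `int_vectorize_text()` from the cells above are in scope.

```
# Possible continuation: vectorize every (text, label) pair in the datasets.
# Assumes raw_train_ds, raw_validation_ds, raw_test_ds and int_vectorize_text()
# from the cells above are already defined.
int_train_ds = raw_train_ds.map(int_vectorize_text)
int_val_ds = raw_validation_ds.map(int_vectorize_text)
int_test_ds = raw_test_ds.map(int_vectorize_text)

# Standard input-pipeline tuning before feeding a model
AUTOTUNE = tf.data.experimental.AUTOTUNE
int_train_ds = int_train_ds.cache().prefetch(buffer_size=AUTOTUNE)
int_val_ds = int_val_ds.cache().prefetch(buffer_size=AUTOTUNE)
```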
# Zebrafish Double coiling model ### Below is the block of code to run double coiling models INITIALIZE - Run before running the models to set the parameters ``` %matplotlib inline ### FIRST, run this code to set the simulation parameters. Then select from the codes below to run different models #Declare the duration of the simulation TIME_END = 100000 SKIP = 200 PLOT_INTERVAL = 10000 PLOT_RESULT = True PRINT_PARAM = False SAVE_CSV = True SAVE_ANIM = True import winsound duration = 1000 # milliseconds freq = 440 # Hz ``` Base Model ``` from Double_coiling_model import Double_coil_base print("Base Model") test = Double_coil_base(dt = 0.1, stim0 = 35, sigma = 0, E_glu = 0, E_gly = -58, cv = 1, nIC = 5, nMN = 10, nV0d = 10, nV0v=10, nV2a=10, nMuscle = 10) #test.setWeightParameters(V0d_IC_syn_weight = 2.0, V0d_MN_syn_weight = 2.0, V0d_V2a_syn_weight = 2.0) test.setWeightParameters(V0d_IC_syn_weight = 8.0, V0d_MN_syn_weight = 8.0, V0d_V2a_syn_weight = 8.0) ((VLIC, VRIC), (VLMN, VRMN), (VLV0d, VRV0d), (VLV0v, VRV0v), (VLV2a, VRV2a), (VLMuscle, VRMuscle), Time) = test.mainLoop(rand=0, tmax = TIME_END, tskip = SKIP, tplot_interval = PLOT_INTERVAL, plotResult = PLOT_RESULT, printParam = PRINT_PARAM, saveCSV = SAVE_CSV, saveAnim = SAVE_ANIM) #The code below produces a beeping sound af the code has been executed winsound.Beep(freq, duration) ``` Glycine Null ``` from Double_coiling_glycine_null import Double_coil_glycine_null print("Glycine Null Model") test = Double_coil_glycine_null(dt = 0.1, stim0 = 35, sigma = 0, E_glu = 0, E_gly = -58, cv = 1, nIC = 5, nMN = 10, nV0d = 10, nV0v=10, nV2a=10, nMuscle = 10) test.setWeightParameters(V0d_IC_syn_weight=0, V0d_MN_syn_weight=0, V0d_V2a_syn_weight=0) ((VLIC, VRIC), (VLMN, VRMN), (VLV0d, VRV0d), (VLV0v, VRV0v), (VLV2a, VRV2a), (VLMuscle, VRMuscle), Time) = test.mainLoop(rand=0, tmax = TIME_END, tskip = SKIP, tplot_interval = PLOT_INTERVAL, plotResult = PLOT_RESULT, printParam = PRINT_PARAM, saveCSV = SAVE_CSV, saveAnim = SAVE_ANIM) #The code below produces a beeping sound af the code has been executed winsound.Beep(freq, duration) ``` V0v Overexcitation ``` from Double_coiling_with_Tonic_drive_to_IC_overexcitation_of_V0v import Double_coil_with_Tonic_drive_to_IC_overexcitation_of_V0v print("IC Tonic Drive Model") test = Double_coil_with_Tonic_drive_to_IC_overexcitation_of_V0v(dt = 0.1, stim0 = 35, sigma = 0.1, E_glu = 0, E_gly = -58, cv = 1, nIC = 5, nMN = 10, nV0d = 10, nV0v=10, nV2a=10, nMuscle = 10) test.setWeightParameters(V0d_IC_syn_weight = 8.0, V0d_MN_syn_weight = 8.0, V0d_V2a_syn_weight = 8.0) ((VLIC, VRIC), (VLMN, VRMN), (VLV0d, VRV0d), (VLV0v, VRV0v), (VLV2a, VRV2a), (VLMuscle, VRMuscle), Time) = test.mainLoop(rand=0, tmax = TIME_END, tskip = SKIP, tplot_interval = PLOT_INTERVAL, plotResult = PLOT_RESULT, printParam = PRINT_PARAM, saveCSV = SAVE_CSV, saveAnim = SAVE_ANIM) #The code below produces a beeping sound af the code has been executed winsound.Beep(freq, duration) ``` V2a KO ``` from Double_coiling_V2a_KO import Double_coil_V2a_KO print("V2a Knockout Model") test = Double_coil_V2a_KO(dt = 0.1, stim0 = 35, sigma = 0, E_glu = 0, E_gly = -58, cv = 1, nIC = 5, nMN = 10, nV0d = 10, nV0v=10, nV2a=10, nMuscle = 10) test.setWeightParameters(V0d_IC_syn_weight = 8, V0d_MN_syn_weight = 8, V0d_V2a_syn_weight = 8) ((VLIC, VRIC), (VLMN, VRMN), (VLV0d, VRV0d), (VLV0v, VRV0v), (VLV2a, VRV2a), (VLMuscle, VRMuscle), Time) = test.mainLoop(rand=0, tmax = TIME_END, tskip = SKIP, tplot_interval = PLOT_INTERVAL, plotResult = PLOT_RESULT, printParam = 
PRINT_PARAM, saveCSV = SAVE_CSV, saveAnim = SAVE_ANIM) #The code below produces a beeping sound af the code has been executed winsound.Beep(freq, duration) ``` No excitatory Synapse ``` from Double_coiling_no_excitatory_syn import Double_coil_no_excitatory_syn print("No Excitatory Synapse Model") test = Double_coil_no_excitatory_syn(dt = 0.1, stim0 = 35, sigma = 0, E_glu = 0, E_gly = -58, cv = 1, nIC = 5, nMN = 10, nV0d = 10, nV0v=10, nV2a=10, nMuscle = 10) test.setWeightParameters(V0d_IC_syn_weight = 2, V0d_MN_syn_weight = 2, V0d_V2a_syn_weight = 2) ((VLIC, VRIC), (VLMN, VRMN), (VLV0d, VRV0d), (VLV0v, VRV0v), (VLV2a, VRV2a), (VLMuscle, VRMuscle), Time) = test.mainLoop(rand=0, tmax = TIME_END, tskip = SKIP, tplot_interval = PLOT_INTERVAL, plotResult = PLOT_RESULT, printParam = PRINT_PARAM, saveCSV = SAVE_CSV, saveAnim = SAVE_ANIM) #The code below produces a beeping sound af the code has been executed winsound.Beep(freq, duration) ``` V0v to IC Null ``` from Double_coiling_V0v_to_IC_null import Double_coil_V0v_to_IC_null print("V0v to IC null Model") test = Double_coil_V0v_to_IC_null(dt = 0.1, stim0 = 35, sigma = 0.001, E_glu = 0, E_gly = -58, cv = 1, nIC = 5, nMN = 10, nV0d = 10, nV0v=10, nV2a=10, nMuscle = 10) test.setWeightParameters(V0d_IC_syn_weight = 2, V0d_MN_syn_weight = 2, V0d_V2a_syn_weight = 2) ((VLIC, VRIC), (VLMN, VRMN), (VLV0d, VRV0d), (VLV0v, VRV0v), (VLV2a, VRV2a), (VLMuscle, VRMuscle), Time) = test.mainLoop(rand=0, tmax = TIME_END, tskip = SKIP, tplot_interval = PLOT_INTERVAL, plotResult = PLOT_RESULT, printParam = PRINT_PARAM, saveCSV = SAVE_CSV, saveAnim = SAVE_ANIM) #The code below produces a beeping sound af the code has been executed winsound.Beep(freq, duration) ``` 30 Somites ``` from Double_coiling_30_somites import Double_coil_30_somites print("30 somites Model") test = Double_coil_30_somites(dt = 0.1, stim0 = 50, sigma = 0, E_glu = 0, E_gly = -58, cv = 1, nIC = 5, nMN = 30, nV0d = 30, nV0v=30, nV2a=30, nMuscle = 30) test.setWeightParameters(V0d_IC_syn_weight = 2, V0d_MN_syn_weight = 2, V0d_V2a_syn_weight = 2) ((VLIC, VRIC), (VLMN, VRMN), (VLV0d, VRV0d), (VLV0v, VRV0v), (VLV2a, VRV2a), (VLMuscle, VRMuscle), Time) = test.mainLoop(rand=0, tmax = TIME_END, tskip = SKIP, tplot_interval = PLOT_INTERVAL, plotResult = PLOT_RESULT, printParam = PRINT_PARAM, saveCSV = SAVE_CSV, saveAnim = SAVE_ANIM) #The code below produces a beeping sound af the code has been executed winsound.Beep(freq, duration) ``` Sigma ``` from Double_coiling_with_sigmas import Double_coil_with_sigmas print("Sigma Model") test = Double_coil_with_sigmas(dt = 0.1, stim0 = 35, sigmaD=0.2, sigmaL = 0, sigmaP = 0, sigmaW=0, E_glu = 0, E_gly = -58, cv = 1, nIC = 5, nMN = 10, nV0d = 10, nV0v=10, nV2a=10, nMuscle = 10) test.setWeightParameters(V0d_IC_syn_weight = 2, V0d_MN_syn_weight = 2, V0d_V2a_syn_weight = 2) ((VLIC, VRIC), (VLMN, VRMN), (VLV0d, VRV0d), (VLV0v, VRV0v), (VLV2a, VRV2a), (VLMuscle, VRMuscle), Time) = test.mainLoop(rand=0, tmax = TIME_END, tskip = SKIP, tplot_interval = PLOT_INTERVAL, plotResult = PLOT_RESULT, printParam = PRINT_PARAM, saveCSV = SAVE_CSV, saveAnim = SAVE_ANIM) winsound.Beep(freq, duration) ``` Chemical and Gap Sigmas ``` from Double_coiling_with_sigmas_chem_syn_and_gap import Double_coil_with_gap_chem_sigma print("Chemical and Gap Sigmas") test = Double_coil_with_gap_chem_sigma(dt = 0.1, stim0 = 35, sigma_chem = 0, sigma_gap = 0, E_glu = 0, E_gly = -58, cv = 1, nIC = 5, nMN = 10, nV0d = 10, nV0v=10, nV2a=10, nMuscle = 10) test.setWeightParameters(V0d_IC_syn_weight = 2, 
V0d_MN_syn_weight = 2, V0d_V2a_syn_weight = 2)
((VLIC, VRIC), (VLMN, VRMN), (VLV0d, VRV0d), (VLV0v, VRV0v), (VLV2a, VRV2a), (VLMuscle, VRMuscle), Time) = test.mainLoop(rand=0, tmax = TIME_END, tskip = SKIP, tplot_interval = PLOT_INTERVAL, plotResult = PLOT_RESULT, printParam = PRINT_PARAM, saveCSV = SAVE_CSV, saveAnim = SAVE_ANIM)
# The code below produces a beeping sound after the code has been executed
winsound.Beep(freq, duration)
```
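All of the runs above repeat the same instantiate, configure and run pattern. A small helper such as the sketch below could reduce that duplication; it assumes the model classes share the constructor, `setWeightParameters()` and `mainLoop()` signatures used above, which is not true for every variant (for example the sigma models take different noise arguments), so treat it only as a starting point.

```
# Sketch of a helper wrapping the repeated instantiate/configure/run pattern.
# Assumes the model class accepts the constructor arguments used above and
# exposes setWeightParameters() and mainLoop() with the same signatures.
def run_model(model_cls, label, weight=8.0, **model_kwargs):
    print(label)
    model = model_cls(dt=0.1, stim0=35, sigma=0, E_glu=0, E_gly=-58, cv=1,
                      nIC=5, nMN=10, nV0d=10, nV0v=10, nV2a=10, nMuscle=10,
                      **model_kwargs)
    model.setWeightParameters(V0d_IC_syn_weight=weight,
                              V0d_MN_syn_weight=weight,
                              V0d_V2a_syn_weight=weight)
    results = model.mainLoop(rand=0, tmax=TIME_END, tskip=SKIP,
                             tplot_interval=PLOT_INTERVAL,
                             plotResult=PLOT_RESULT, printParam=PRINT_PARAM,
                             saveCSV=SAVE_CSV, saveAnim=SAVE_ANIM)
    winsound.Beep(freq, duration)
    return results

# Example (uses the class and constants imported in the cells above):
# results = run_model(Double_coil_base, "Base Model", weight=8.0)
```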
``` !pip install pandas-summary import pandas as pd # pip install pandas_summary from pandas_summary import DataFrameSummary ``` # Competencia de Kaggle [ir a Kaggle](https://www.kaggle.com/c/rossmann-store-sales/data) [3er puesto](https://github.com/entron/entity-embedding-rossmann) # Métrica de la competencia $$ \textrm{RMSPE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left(\frac{\hat{y}_i - y_i}{y_i}\right)^2} $$ donde: - $y_i$ las ventas de un día particular de un store - $\hat{y}_i$ ventas estimadas por el modelo - $n$ es el número de predicciones realizadas ``` from google.colab import drive drive.mount('/content/drive') ``` ## Importamos dataset La competencia permitía agregar datos externos para realizar la predicción The following tables are available in the datasets: | Archivo | Descripción| Origen de Datos| |--------------|--------------------------------------------------------------------|--| | train.csv | training set: información del store día a día, ventas, clientes, si es feriado, etc | Kaggle | | store.csv | Información general del store, por ejemplo datos del competidor | Kaggle | | store_states.csv | Mapea de store a estado - Dato externo| Externos | | state_names.csv | Mapea estados a acronimo de estado | Externos | | googletrend.csv | Tendencias por semana - Dato externo| Externos| | weather.csv | Condiciones meteorológicas por día | Externos| ``` PATH = '/content/drive/My Drive/Colab Notebooks/kaggle-rossmann-master/rossmann/rossmann/' table_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather'] train, store, store_states, state_names, googletrend, weather = [pd.read_csv(PATH + fname+'.csv', low_memory=False) for fname in table_names] display(train.head()) display(DataFrameSummary(train).summary()) ``` - Mirar counts que todos tienen la misma cantidad - Ninguno tiene missing - Los tipos tambien es interesante observar ``` train['StateHoliday'].value_counts() display(store.head()) display(DataFrameSummary(store).summary()) ``` Descripción de algunas columnas que quizas no sean tan claras: - `Customers`: La cantidad de clientes por día - `Open`: Indicador si el store estaba abierto o cerrado: 0 = closed, 1 = open - `StateHoliday`: Indica feriado en ese estado. a = public holiday, b = Easter holiday, c = Christmas, 0 = None - `SchoolHoliday`: Inidica si el store fue afectado por el feriado escolar - `StoreType`: Tipos de store: a, b, c, d - `Assortment`: Describe el nivel de surtido de la tienda: a = basic, b = extra, c = extended - `CompetitionDistance`: Distancia en metros al competidor - `CompetitionOpenSince[Month/Year]`: Fecha en que abrío la competencia - `Promo`: Si el store esta corriendo una promoción ese día - `Promo2`: Promo2 is a continuing and consecutive promotion for some stores: 0 = store is not participating, 1 = store is participating - `Promo2Since[Year/Week]`: describes the year and calendar week when the store started participating in Promo2 - `PromoInterval`: describes the consecutive intervals Promo2 is started, naming the months the promotion is started anew. E.g. 
"Feb,May,Aug,Nov" means each round starts in February, May, August, November of any given year for that store ``` store['StoreType'].value_counts() store['PromoInterval'].value_counts() display(store_states.head(20)) display(DataFrameSummary(store_states).summary()) display(state_names.head(20)) display(DataFrameSummary(state_names).summary()) display(googletrend) display(DataFrameSummary(googletrend).summary()) display(weather) display(DataFrameSummary(weather[['Max_TemperatureC', 'Mean_TemperatureC', 'Min_TemperatureC', 'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h', 'Mean_Wind_SpeedKm_h', 'CloudCover', 'Precipitationmm']]).summary()) weather.columns ```
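The competition metric (RMSPE) defined at the top of this notebook can be implemented directly; the function below is an illustrative sketch (the name `rmspe` and the example values are not from the original notebook).

```
# Illustrative sketch of the RMSPE metric defined above.
# y_true / y_pred are arrays of actual and predicted sales; zero-sales rows
# are skipped to avoid division by zero (such days are ignored in scoring).
import numpy as np

def rmspe(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mask = y_true != 0
    pct_err = (y_pred[mask] - y_true[mask]) / y_true[mask]
    return np.sqrt(np.mean(pct_err ** 2))

# Example: rmspe([100, 200, 0], [110, 190, 5]) is roughly 0.079
```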
<a href="https://colab.research.google.com/github/hlab-repo/purity-and-danger/blob/master/Immigration.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Creating a Model for Immigration and Outsider Language This notebook starts with a baseline system and then provides users the opportunity to attempt to improve performance with their own custom, complete system. ## Set-up ``` %%capture !pip install datasets !pip install transformers import re from collections import Counter import datasets import pandas as pd import torch import torch.nn as nn import torch.optim as optim from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, precision_score, recall_score from sklearn.model_selection import train_test_split from sklearn.naive_bayes import MultinomialNB from torch.utils.data import DataLoader from transformers import BertTokenizer, BertForSequenceClassification ``` ## Getting a test dataset We can start with the Common Crawl news corpus (January 2017 - December 2019). See here for details: https://huggingface.co/datasets/cc_news This will constitute our test dataset. Note that the pseudolabels were generated from the beginning of this dataset but that the dataset (of 708,241 news articles) was in no way exhausted. You could perhaps skip the first 20,000 or so articles to deal only with new data. ``` # this could take several minutes dataset = datasets.load_dataset('cc_news') dataset # look at the first 10 samples for i, s in enumerate(dataset['train']): print(s) if i >= 10: break ``` ## Getting pseudo-labeled data for training `0` represents viral language, `1` immigration language, and `2` a blend of the two. These categorizations are fuzzy and inexact and are not the result of manual annotations. They should be improved upon during the training process (or adjusted manually) when possible. ``` df = pd.read_csv('https://www.dropbox.com/s/kfbja23kisimedm/immigration.csv?dl=1') df.head() X_train, X_valid, y_train, y_valid = train_test_split(df['text'], df['target'], train_size=0.7, random_state=42) X_valid, y_valid ``` # Baseline 1 Let's use Naive Bayes. For the sake of simplicity, I will not add weighting to the classes here (we probably should!), but sklearn wants its weights to correspond to samples in the train dataset (when using the fit method). So you would need to feed in a list of weights the same length as your samples. 
Think about the weights in a table corresponding to class like this: | Sample | Class | Weight | | --- | --- | --- | | sample 1 | 1 | 0.05 | | sample 2 | 2 | 0.8 | | sample 3 | 1 | 0.05 | | sample 4 | 0 | 0.15 | | sample 5 | 2 | 0.8 | ``` vectorizer = TfidfVectorizer() train_vectorized = vectorizer.fit_transform(X_train) valid_vectorized = vectorizer.transform(X_valid) train_vectorized naive_bayes = MultinomialNB() naive_bayes.fit(train_vectorized, y_train) predictions = naive_bayes.predict(valid_vectorized) predictions print(f'Accuracy: {accuracy_score(y_valid, predictions)}\n' f'Precision: {precision_score(y_valid, predictions, average=None)}\n' f'Recall: {recall_score(y_valid, predictions, average=None)}\n' f'F1 Score: {f1_score(y_valid, predictions, average=None)}\n') # y-axis (rows) == true label and x-axis (columns) == predicted label confusion_matrix(y_valid, predictions) ``` # Baseline 2 ``` model = BertForSequenceClassification.from_pretrained('bert-large-uncased', num_labels=3) tokenizer = BertTokenizer.from_pretrained('bert-large-uncased') device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model = model.to(device) # the classes are extremely unbalanced; let's generate weights that we can feed to loss function unbalanced_weights = 1 / (y_train.value_counts() / len(y_train)).sort_index() weights = unbalanced_weights / unbalanced_weights.sum() weights # I will exclude datasets, dataloaders, etc. for the sake of simplicity criterion = nn.CrossEntropyLoss(weight=torch.tensor(weights.values).float().to(device)) optimizer = optim.AdamW(model.parameters(), lr=1e-5) for epoch in range(1): # make this up to 3! running_loss = 0. for batch_start in range(0, len(X_train), 4): X = X_train[batch_start:batch_start + 4].tolist() y = torch.tensor(y_train[batch_start:batch_start + 4].values).to(device) predictions = model(**tokenizer(X, return_tensors='pt', padding=True).to(device)) loss = criterion(torch.softmax(predictions.logits, dim=-1), y) optimizer.zero_grad() loss.backward() optimizer.step() running_loss += loss.item() print(f'Finished epoch {epoch} with running loss of {running_loss / len(X_train)}') # make predictions on validation set valid_predictions = torch.zeros_like(torch.tensor(y_valid.values)) for batch_start in range(0, len(X_valid), 4): X = X_valid[batch_start:batch_start + 4].tolist() with torch.no_grad(): predictions = model(**tokenizer(X, return_tensors='pt', padding=True).to(device)) indices = torch.argmax(torch.softmax(predictions.logits, dim=-1), dim=-1) valid_predictions[batch_start:batch_start + 4] = indices print(f'Accuracy: {accuracy_score(y_valid, valid_predictions.numpy())}\n' f'Precision: {precision_score(y_valid, valid_predictions.numpy(), average=None)}\n' f'Recall: {recall_score(y_valid, valid_predictions.numpy(), average=None)}\n' f'F1 Score: {f1_score(y_valid, valid_predictions.numpy(), average=None)}\n') # y-axis (rows) == true label and x-axis (columns) == predicted label confusion_matrix(y_valid, valid_predictions.numpy()) ``` # Your Original System Improve upon the baselines above. Feel free to copy cells from one of the baselines above, paste it here, and tweak it for improvements. You have several models to select from from sklearn (both for classification and for vectorization of text). And even just trying different architectures for Basline 2 (such as RoBERTa, distilbert, etc.) would help. ``` ```
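One concrete starting point, picking up the per-sample weighting idea discussed under Baseline 1, is sketched below; `compute_sample_weight` is a standard scikit-learn utility, while the new variable names are illustrative and reuse `train_vectorized`, `y_train` and `valid_vectorized` from the baseline cells.

```
# Sketch: per-sample weights for the Naive Bayes baseline, as discussed above.
# 'balanced' weights each sample inversely to its class frequency.
from sklearn.utils.class_weight import compute_sample_weight

sample_weights = compute_sample_weight(class_weight='balanced', y=y_train)

weighted_nb = MultinomialNB()
weighted_nb.fit(train_vectorized, y_train, sample_weight=sample_weights)
weighted_predictions = weighted_nb.predict(valid_vectorized)
print(f'Accuracy: {accuracy_score(y_valid, weighted_predictions)}')
```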
``` """Udacity - Self-Driving Engineer Nanodegree Project 1: Finding Lane Lines on the Road Nikko Sadural""" import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 import math from moviepy.editor import VideoFileClip from IPython.display import HTML %matplotlib inline def grayscale(img): """Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale (assuming your grayscaled image is called 'gray') you should call plt.imshow(gray, cmap='gray')""" return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): """Applies the Canny transform""" return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): """Applies the Gaussian Noise kernel""" return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from 'vertices'. The rest of the image is set to black. 'vertices' should be a numpy array of integer points. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def draw_lines(img, lines, top_left_y, top_right_y, color=[255, 0, 0], thickness=10): """ NOTE: this is the function you might want to use as a starting point once you want to average/extrapolate the lines segments you detect to map out the full extent of the lane (going from the result shown in raw-lines-example.mp4 to that shown in P1_example.mp4.) Think about things like separating lines segments by their slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left line vs. the right line. Then, you can average the position of each of the lines and extrapolate to the top and bottom of the lane. This function draws 'lines' with 'color' and 'thickness'. Lines are drawn on the image inplace (mutates the image). 
If you want to make the lines semi-transparent, think about combining this function with the weighted_img() function below """ #Initialize variables for holding high/low x/y values and counters x1_low1_pos = img.shape[1] y1_low1_pos = img.shape[0] x1_low2_pos = img.shape[1] y1_low2_pos = img.shape[0] x2_high1_pos = 0 y2_high1_pos = 0 x2_high2_pos = 0 y2_high2_pos = 0 x1_low1_neg = img.shape[1] y1_low1_neg = img.shape[0] x1_low2_neg = img.shape[1] y1_low2_neg = img.shape[0] x2_high1_neg = 0 y2_high1_neg = 0 x2_high2_neg = 0 y2_high2_neg = 0 m_sum_pos = 0 sum_count_pos = 0 m_sum_neg = 0 sum_count_neg = 0 for line in lines: for x1,y1,x2,y2 in line: #Get slope and print line variables m = ((y2-y1)/(x2-x1)) #For positive/negative slopes, get two smallest x1's and two largest x2's #(x1_low2 < x1_low1) #(x2_high2 > x2_high1) if m > 0.55 and m < 0.9: #Get two smallest x1's if x1 < x1_low1_pos and x1 < x1_low2_pos: x1_low1_pos = x1_low2_pos y1_low1_pos = y1_low2_pos x1_low2_pos = x1 y1_low2_pos = y1 elif x1 < x1_low1_pos and x1 > x1_low2_pos: x1_low1_pos = x1 y1_low1_pos = y1 #Get two largest x2's if x2 > x2_high1_pos and x2 > x2_high2_pos: x2_high1_pos = x2_high2_pos y2_high1_pos = y2_high2_pos x2_high2_pos = x2 y2_high2_pos = y2 elif x2 > x2_high1_pos and x2 < x2_high2_pos: x2_high1_pos = x2 y2_high1_pos = y2 m_sum_pos = m_sum_pos + m sum_count_pos = sum_count_pos + 1 elif m < -0.55 and m > -0.9: #Get two smallest x1's if x1 < x1_low1_neg and x1 < x1_low2_neg: x1_low1_neg = x1_low2_neg y1_low1_neg = y1_low2_neg x1_low2_neg = x1 y1_low2_neg = y1 elif x1 < x1_low1_neg and x1 > x1_low2_neg: x1_low1_neg = x1 y1_low1_neg = y1 #Get two largest x2's if x2 > x2_high1_neg and x2 > x2_high2_neg: x2_high1_neg = x2_high2_neg y2_high1_neg = y2_high2_neg x2_high2_neg = x2 y2_high2_neg = y2 elif x2 > x2_high1_neg and x2 < x2_high2_neg: x2_high1_neg = x2 y2_high1_neg = y2 m_sum_neg = m_sum_neg + m sum_count_neg = sum_count_neg + 1 #Calculate positive line slope from average of detected line slopes m_pos = round(m_sum_pos / sum_count_pos,2) #Calculate endpoints for extrapolated line with positive slope x2_calc_pos = int(x1_low2_pos + (1/m_pos)*(img.shape[0] - y1_low2_pos)) x1_calc_pos = (math.floor(x2_calc_pos - (1/m_pos)*(img.shape[0] - top_right_y))) x2_calc_pos = (math.floor(x1_calc_pos + (1/m_pos)*(img.shape[0] - top_right_y))) #Calculate negative line slope from average of detected line slopes m_neg = round(m_sum_neg / sum_count_neg,2) #Calculate endpoints for extrapolated line with negative slope x1_calc_neg = int(x2_high2_neg - (1/m_neg)*(y2_high2_neg - img.shape[0])) x2_calc_neg = (math.floor(x1_calc_neg + (1/m_neg)*(top_left_y - img.shape[0]))) x1_calc_neg = (math.floor(x2_calc_neg - (1/m_neg)*(top_left_y - img.shape[0]))) #Draw extrapolated lines with positive and negative slopes onto image new_img = cv2.line(img, (x1_calc_pos, top_right_y), (x2_calc_pos, img.shape[0]), color, thickness) cv2.line(new_img, (x1_calc_neg, img.shape[0]), (x2_calc_neg, top_left_y), color, thickness) def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap, top_left_y, top_right_y): """ 'img' should be the output of a Canny transform. Returns an image with hough lines drawn. 
""" lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) draw_lines(line_img, lines, top_left_y, top_right_y) return line_img def weighted_img(img, initial_img, α=0.8, β=1., γ=0.): """ 'img' is the output of the hough_lines(), an image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. 'initial_img' should be the image before any processing. The result image is computed as follows: initial_img * α + img * β + γ NOTE: initial_img and img must be the same shape! """ return cv2.addWeighted(initial_img, α, img, β, γ) def process_image(image): #Read in image and convert to grayscale gray = grayscale(image) #Gaussian smoothing to suppress noise/gradients by averaging kernel_size = 3 blur_gray = gaussian_blur(gray, kernel_size) #Canny edge detection to find edges low_threshold = 150 high_threshold = 250 masked_edges = canny(blur_gray, low_threshold, high_threshold) #Define polygon for region masking imshape = masked_edges.shape top_left_x = 440 top_left_y = 330 top_right_x = 530 top_right_y = 330 vertices = np.array([[(152,imshape[0]),(top_left_x, top_left_y),(top_right_x, top_right_y),(imshape[1]-70,imshape[0])]],dtype=np.int32) masked_image = region_of_interest(masked_edges, vertices) #Hough to find line segments on edge-detected Canny image rho = 1 # distance resolution in pixels of Hough grid theta = np.pi/180 # angular resolution in radians of Hough grid threshold = 15 # 15 minimum number of votes (Hough grid cell intersections) min_line_length = 20 # 20 minimum number of pixels making up a line max_line_gap = 1000 # maximum gap between connectable line segments lines_image = hough_lines(masked_image, rho, theta, threshold, min_line_length, max_line_gap, top_left_y, top_right_y) #Draw lines on the edge image lines_edges = weighted_img(lines_image, image) plt.imshow(lines_edges) return lines_edges white_output = 'test_videos_output/solidWhiteRight.mp4' clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) %time white_clip.write_videofile(white_output, audio=False) yellow_output = 'test_videos_output/solidYellowLeft.mp4' clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) image = mpimg.imread('test_images/whiteCarLaneSwitch.jpg') process_image(image) ```
``` # basic import sys import os # common import numpy as np import pandas as pd import xarray as xr import matplotlib.pyplot as plt from datetime import datetime, timedelta from sklearn.cluster import KMeans import pickle import warnings warnings.filterwarnings('ignore') from IPython.display import Image #lib from lib.validation_methodology_plots import * path_p = r'/home/administrador/Documentos/seasonal/seasonal_forecast/new/' df_2021 = pd.read_pickle(path_p+'df_coordinates_pmin_sst_mld_2021.pkl') xs = xr.open_dataset(path_p+'xs_index_vars_19822019_2deg_new.nc') xds_kma = xr.open_dataset(path_p+'kma_model/xds_kma_index_vars_1b.nc') xs_dwt_counts = xr.open_dataset(path_p+'kma_model/xds_count_tcs8.nc') xs_dwt_counts_964 = xr.open_dataset(path_p+'kma_model/xds_count_tcs8_964.nc') xds_timeM = xr.open_dataset(path_p+'xds_timeM8.nc') xds_PCA = xr.open_dataset(path_p+'xds_PCA.nc') xds_kma_ord = xr.open_dataset(path_p+'xds_kma_ord.nc') ``` <br> <br> <br> # <font color='navy'>**Model Validation** </font> >[Index predictor](#p)<br> <br> >[Cluster comparison](#cc)<br> <br> >[Predictand computation and plotting](#plv)<br> <br> <br> <br> **After analizing the tailor-made predictor along the hindcast data for the calibration period (1982-2019), the performace of the model will be validated for year 2020, which has not been included in the predictor calibration process.** <br> <div style="padding: 15px; border: 1px solid transparent; border-color: transparent; margin-bottom: 20px; border-radius: 4px; color: rgb(0,0,0); background-color: #fcf8e3; border-color: #faebcc; "> **Steps:** * **1.** Download and preprocess (file conversion and resolution interpolation) SST and MLD data for the validation time period. * **2.** Generation of the index predictor based on the index function obtained at the calibration period. * **3.** The fitted Principal Component Analysis for the calibration is used to predict the index principal components in that same temporal-spatial space. * **4.** The predicted PCs are assigned to the best match unit group from the fitted K-means clustering -> based on the index predictor a DWT is assigned to each day. * **5.** From the DWT the expected daily mean number of TCs in 8x8º cells map in the target area is known. </div> <br /> <br /> ## <font color='royalblue'>**Index predictor and DWTs**</font> <a name="p"></a> **Download and preprocess (file conversion and resolution interpolation) SST and MLD data for the validation time period.** ``` path_val = r'/home/administrador/Documentos/seasonal/seasonal_forecast/validation/' year_val = 2020 change_sst_resolution_val(path_val,year_val) ``` <br> **Generation of the index predictor based on the index function obtained at the calibration period.** ``` xs_val = ds_index_over_time_val(path_val,path_p,year_val) xs_val ``` <br> <br> **The fitted Principal Component Analysis for the calibration is used to predict the index principal components in that same temporal-spatial space and the predicted PCs are assigned to the best match unit group from the fitted K-means clustering -> based on the index predictor a DWT is assigned to each day.** ``` val_bmus = PCA_k_means_val(path_p,path_val,xs_val) ``` <br> <br> **Chronology of the DWTs:** ``` fig_bmus = plot_bmus_chronology(xs_val,val_bmus,year_val) ``` <br> **The resulting classification can be seen in the PCs space of the predictor index data. 
The obtained centroids (black dots) span the wide variability of the data.**

```
fig = plot_scatter_kmeans(xds_kma_ord, val_bmus, xds_kma_ord.cenEOFs.values, size_l=12, size_h=10);
```

<br />
<br />

## <font color='royalblue'>**Cluster comparison**</font> <a name="cc"></a>

```
fig = plot_bmus_comparison_validation_calibration(xs,xds_kma,xs_val,val_bmus,9,49)
```

<br />
<br />

## <font color='royalblue'>**Predictand computation and plotting**</font> <a name="plv"></a>

**From the DWT the daily expected mean number of TCs in 8x8º cells in the target area is known for each day, and thus maps at different time scales can be computed.**

**Daily mean expected number of TCs**

```
xds_timeline_val,xs_M_val = ds_monthly_probabilities_val(df_2021,val_bmus,xs_val,xs_dwt_counts,xs_dwt_counts_964)
```

<br>

**Monthly aggregated mean expected number of TCs**

```
xs_M_val
fig_val_year_8 = plot_validation_year(df_2021,xs_M_val,xds_timeline_val,35)
```

<br>

**Whole period aggregated mean expected number of TCs**

```
fig_val_year_8 = plot_validation_full_season(df_2021,xs_M_val,xds_timeline_val,35)
```

<br>
<br>

<div style="padding: 15px; border: 1px solid transparent; border-color: transparent; margin-bottom: 20px; border-radius: 4px; color: rgb(0,0,0); background-color: #fcf8e3; border-color: #faebcc; ">

* **The model performs very well when estimating the expected TC activity (number and intensity of TCs), not underestimating the threat.**
* **In some cells adjacent to the cells containing TC tracks it overestimates TC activity.**

</div>
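The projection-and-assignment step listed in the validation steps at the top of this section (apply the fitted PCA to the new index data, then assign each day to its best-match DWT centroid) is wrapped by the notebook's own `PCA_k_means_val` helper. The sketch below only illustrates the idea with generic scikit-learn objects and placeholder arrays; none of the shapes or the cluster count are taken from the real model.

```
# Illustrative sketch only: project new data with a fitted PCA and assign each
# sample to its nearest K-means centroid (best match unit). All arrays, the
# number of components and the cluster count are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
index_calibration = rng.normal(size=(1000, 50))   # stand-in for the 1982-2019 index fields
index_validation = rng.normal(size=(366, 50))     # stand-in for the 2020 index fields

pca = PCA(n_components=10).fit(index_calibration)
kma = KMeans(n_clusters=49, random_state=0).fit(pca.transform(index_calibration))

pcs_val = pca.transform(index_validation)   # project onto the calibration PC space
bmus_val = kma.predict(pcs_val)             # daily DWT (best match unit) assignment
```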
``` import numpy as np import matplotlib.pyplot as plt ``` # Parte 1: Iteração de Rayleigh Vimos que podemos iterar um vetor $v$ pela matriz $A$, obtendo a sequência de vetores $A^nv$, por multiplicações sucessivas, e que isso permite encontrar um autovetor. ## Questão 1 Implemente uma função `itera(A,v,tol,debug)` que itera o vetor $v$, normalizando a cada iteração, e que retorna $(v_\lambda, \lambda, n)$, respectivamente: - uma estimativa do autovetor - uma estimativa do autovalor correspondente - o número de iterações realizadas até atingir a precisão `tol`. Se `debug == True`, retorne também a lista dos vetores (unitários) produzidos ao longo do processo. ``` def itera(A,v, tol=1e-12, maxiter=1000, debug=False): v = np.array(v) n,m = np.shape(A) assert n==m, 'A must be square' def eigenvector_normalizer(A,v,n=29): answ = [] answ.append(v) for i in range(0,n): v_next = A @ answ[-1] answ.append(v_next/np.linalg.norm(v_next)) return answ,n def eigenvalue_picker(v,u): v_max = vs[-1] print(v_max) return v_max/u vs,it = eigenvector_normalizer(A,v) l = np.linalg.norm(A@vs[-1]) if debug == True: return vs[-1],l,it,vs else: return vs[-1],l,it # Autovetores conhecidos A = [[1,2],[2,1]] alvo = np.array([1,1])/np.sqrt(2) v, l, n = itera(A,[1,2]) assert(abs(l-3) < 1e-15) assert(all(abs(v-alvo) < 1e-12)) assert(n < 30) # Autovetores aleatórios: verificando que satisfaz (aproximadamente) a definição np.random.seed(4444) A = np.random.rand(4,4) v, l, n = itera(A, np.random.rand(4)) err = np.dot(A,v) - l*v assert(np.linalg.norm(err) < 1e-12) assert(n < 30) ``` ## Questão 2: Convergência Temos o número de iterações, mas não vimos como o algoritmo "converge" para o autovetor. Assim, use os vetores intermediários e faça um gráfico da evolução do erro entre os $v$'s produzidos e o autovetor $v_\lambda$. ``` ax = None v,l,n, vs_intermediarios = itera(A, np.random.rand(4), debug=True) tam = len(vs_intermediarios) rng = (0,n) ks = [([v]*tam)/i for i in vs_intermediarios] #plt.plot(ks,rng) ax = plt.gca() plt.show() assert ax.title.get_text() != "" assert len(ax.lines) == 1 ys = ax.lines[0].get_ydata() assert min(ys) < 1e-12 assert np.all(ys[:-1] > ys[1:]) ``` O que o último assert quer dizer? Compara se o $ys[i]$ é menor que $ys[i-1]$, ou seja, se a lista está convergindo ## Questão 3: Convergência comparada Para cada um dos vetores `d1` e `d2` abaixo, considere a matriz $A = \operatorname{diag}(d_i)$ correspondente. ``` d1 = [1,10,20,30,31,32] d2 = [1,10,20,29,30,32] ``` Qual é o autovetor com o maior autovalor para $A_1$ e $A_2$? YOUR ANSWER HERE Agora, compare a velocidade de convergência do autovetor usando `itera` para cada uma destas matrizes, fazendo o gráfico do erro entre os vetores gerados para $A_1$ e $A_2$ no mesmo eixo. ``` ax = [] _,_,_,l_1 = itera(np.diag(d1), np.ones_like(d1), debug=True) _,_,_,l_2 = itera(np.diag(d2), np.ones_like(d2), debug=True) # YOUR CODE HERE raise NotImplementedError() ax = plt.gca() plt.show() assert ax.title.get_text() != "" assert len(ax.lines) == 2 assert len(ax.legend().texts) == 2 ``` Para qual matriz há convergência mais rápida? Como você explicaria isso? YOUR ANSWER HERE ## Questão 4: Convergência? Sejam $\theta \in [0,2\pi]$ e $\alpha \in \mathbb{R}$, e considere a matriz $$A(\theta, \alpha) = \begin{bmatrix} \cos(\theta) & \sin(\theta) & 0\\ -\sin(\theta) &\cos(\theta) & 0\\ 0 & 0 & \alpha\\ \end{bmatrix}.$$ Qual a interpretação geométrica dessa matriz? YOUR ANSWER HERE Quais são os autovetores de $A$ (em função de $\theta$ e $\alpha$)? 
YOUR ANSWER HERE Implemente a função abaixo que gera a matriz $A$: ``` def make_matrix(theta,alpha): # YOUR CODE HERE raise NotImplementedError() assert np.allclose(make_matrix(0,1),np.eye(3)) assert np.allclose(make_matrix(np.pi,0.5),[[-1,0,0],[0,-1,0],[0,0,0.5]]) ``` Fixando $\theta = \dfrac{\pi}{4}$, faça um gráfico do número de iterações necessários para calcular o maior autovetor, em função de $\alpha \in [0.5,1.5]$. ``` alphas = np.linspace(0.5,1.5,100) ax = [] # YOUR CODE HERE raise NotImplementedError() ax = plt.gca() plt.show() assert ax.title.get_text() != "" assert len(ax.lines) == 1 assert ax.get_xlabel() != "" ys = ax.lines[0].get_ydata() assert 100 > ys.min() > 60 assert ys[55] < 600 assert ys[50] > 900 ``` Agora, faça o gráfico com a estimativa do autovalor, novamente em função de $\alpha$. ``` ax = [] # YOUR CODE HERE raise NotImplementedError() ax = plt.gca() plt.show() assert ax.title.get_text() != "" assert len(ax.lines) == 1 assert ax.get_xlabel() != "" ys = ax.lines[0].get_ydata() assert np.all(0.7 <= ys) and np.all(ys <= 1.5) ``` Como explicar a variação no número de iterações? O que isso tem a ver com o autovalor retornado? YOUR ANSWER HERE # Parte 2: Generalizando ## Questão 5: Outra iteração, novos limites Em vez de iterar $A^n v$, é possível iterar $A^{-n} v$. Assim, em vez de "aumentar" os vetores correspondentes aos autovalores de módulo grande, estes serão "diminuídos", e sobra o vetor do "menor" (de novo, em módulo) autovalor. Mostre que $\dfrac{A^{-n}v_0}{\lVert A^{-n}v_0 \rVert} \rightarrow v_{min}$, onde $v_{min}$ é o "menor" autovalor de $A$. YOUR ANSWER HERE Agora, generalize um pouco mais: Seja $\alpha \in C$ um número complexo qualquer. Mostre que $$\frac{(A - \alpha I)^{-n}v_0}{\lVert (A - \alpha I)^{-n}v_0 \rVert} \rightarrow v_{\alpha},$$ onde $v_{\alpha}$ é o autovetor de $A$ com autovalor mais próximo de $\alpha$. Este método é conhecido como "Iteração inversa deslocada". YOUR ANSWER HERE ## Questão 6: Iteração inversa com deslocamento Implemente a iteração inversa com deslocamento, com argumentos semelhantes a função `itera`. ``` def inverse_iteration(A, v, alpha=0, tol=1e-12, maxiter=1000, debug=False): v = np.array(v) n,m = np.shape(A) assert n==m, 'A must be square' # YOUR CODE HERE raise NotImplementedError() A = [[1,2],[2,1]] ans = np.array([-1,1])/np.sqrt(2) v, l, n = inverse_iteration(A,[1,2]) assert np.allclose(np.linalg.norm(v),1) assert np.allclose(v,ans) or np.allclose(v, -ans) assert 20 < n < 40 A = [[1,2],[2,1]] ans = np.array([1,1])/np.sqrt(2) v, l, n = inverse_iteration(A,[1,2], alpha=2, maxiter=50) assert np.allclose(np.linalg.norm(v),1) assert np.allclose(v,ans) or np.allclose(v, -ans) assert 20 < n < 40 A = [[1,2],[2,1]] ans = np.array([1,1])/np.sqrt(2) v, l, n = inverse_iteration(A,[1,2], alpha=2.5, maxiter=50) assert np.allclose(np.linalg.norm(v),1) assert np.allclose(v,ans) or np.allclose(v, -ans) assert 10 < n < 20 ``` ## Questão 7: Convergência comparada Faça o gráfico da velocidade de convergência dos autovetores da iteração inversa aplicada à matriz $A$ acima, para $\alpha \in \{-2,0,2\}$. 
``` np.random.seed(1234) ax = [] v0 = np.random.rand(2) # YOUR CODE HERE raise NotImplementedError() plt.ylabel('Distance to eigenvector') ax = plt.gca() plt.show() assert ax.title.get_text() != "" assert len(ax.lines) == 3 assert len(ax.legend().texts) == 3 assert ax.get_xlabel() != "" ys = [l.get_ydata() for l in ax.lines] assert np.isclose(max(max(y) for y in ys),2) assert min(min(y) for y in ys) <= 1e-16 ``` Qual valor de $\alpha$ levou à convergência mais rápida? Como você explicaria isso? YOUR ANSWER HERE O que mais você observa neste gráfico? YOUR ANSWER HERE ## Questão 8: Zoom da convergência Agora, repita o mesmo gráfico para $\alpha \in \{2, 2.5, 2.9, 2.99 \}$. ``` np.random.seed(1234) ax = [] v0 = np.random.rand(2) # YOUR CODE HERE raise NotImplementedError() plt.ylabel('Distance to eigenvector') ax = plt.gca() plt.show() assert ax.title.get_text() != "" assert len(ax.lines) == 4 assert len(ax.legend().texts) == 4 assert ax.get_xlabel() != "" ys = [l.get_ydata() for l in ax.lines] assert min(min(y) for y in ys) <= 1e-16 ``` O que este gráfico sugere quanto à velocidade de convergência da iteração inversa? Será que isso já era possível de "ver" no outro gráfico? YOUR ANSWER HERE
# Homework 1: Classification With Naive Bayes ## Problem 1: Diabetes Classification Points: 40 A famous collection of data on whether a patient has diabetes, known as the Pima Indians dataset, and originally owned by the National Institute of Diabetes and Digestive and Kidney Diseases can be found at Kaggle. Download this dataset from https://www.kaggle.com/kumargh/pimaindiansdiabetescsv. This data has a set of attributes of patients, and a categorical variable telling whether the patient is diabetic or not. For several attributes in this data set, a value of 0 may indicate a missing value of the variable. There are a total of 767 data-points. ## Part 1A Build a simple naive Bayes classifier to classify this data set. You should use a normal distribution to model each of the class-conditional distributions. Compute an estimate of the accuracy of the classifier by averaging over 10 test-train splits. Each split should randomly assign 20% of the data to test, and the rest to train. You should write this classifier and the test-train split code yourself (it's quite straight-forward). Libraries can be used to load & hold the data. ### Answer Part 1A To build a somple naive Bayes classifier to classify Pima Indians dataset. We will be using Python 3 in Google Colab. #### Set up Load required libraries ``` !pip install pandas_profiling import pandas as pd import numpy as np import math from scipy.stats import norm import pandas_profiling import pickle import matplotlib.pyplot as plt %matplotlib inline import warnings warnings.filterwarnings("ignore") ``` #### Load dataset Access https://www.kaggle.com/kumargh/pimaindiansdiabetescsv, download the dataset, it will download pimaindiansdiabetescsv.zip. ##### Dataset dictionary Following are the datset details as per kaggle. About this file This dataset describes the medical records for Pima Indians and whether or not each patient will have an onset of diabetes within ve years. Fields description follow: preg = Number of times pregnant plas = Plasma glucose concentration a 2 hours in an oral glucose tolerance test pres = Diastolic blood pressure (mm Hg) skin = Triceps skin fold thickness (mm) test = 2-Hour serum insulin (mu U/ml) mass = Body mass index (weight in kg/(height in m)^2) pedi = Diabetes pedigree function age = Age (years) class = Class variable (1:tested positive for diabetes, 0: tested negative for diabetes) Columns, first row is sample data, ignoring to obtain dataset of rows 767, as mentioned in home work assignment link. 6 Pregnancies 148 Glucose 72 BloodPressure 35 SkinThickness 0 Insulin 33.6 BMI 0.627 DiabetesPedigreeFunction 50 Age 1 Class To access the dataset in Google Colab you can either use Github or Google Drive. We will be accessing dataset via Google Drive. Unzip the pimaindiansdiabetescsv.zip, add pima-indians-diabetes.csv to a known folder in Google Drive, this folder path in drive will be accessed later to load dataset. We added the pima-indians-diabetes.csv to Google Drive folder /My Drive/UISC-MCS-DS/CS498AML/homework_1/1a/data/. 
* Mount Google Drive to access the data. Note: this step is only needed when running on Google Colab, not on a local machine: `from google.colab import drive` followed by `drive.mount('/content/gdrive')`.

Load the pima-indians-diabetes.csv dataset and save it as a pickle object

```
#pima_indias_diabetes_data = pd.read_csv("/content/gdrive/My Drive/UIUC-MCS-DS/CS498AML/homework_1/1a/data/pima-indians-diabetes.csv")
#pickle.dump(pima_indias_diabetes_data, open( '/content/gdrive/My Drive/UIUC-MCS-DS/CS498AML/homework_1/1a/data/pima_indias_diabetes_data.pkl','wb'))

pima_indias_diabetes_data = pd.read_csv("/MS2/academics/MCS-DS-UIUC/Coursera/CS-498-AML/homework 1/1a/data/pima-indians-diabetes.csv")
pickle.dump(pima_indias_diabetes_data, open('/MS2/academics/MCS-DS-UIUC/Coursera/CS-498-AML/homework 1/1a/data/pima_indias_diabetes_data.pkl','wb'))
pima_indias_diabetes_data.head()
```

#### Exploratory Data Analysis

##### Validate dataset

```
# rename column names
pima_indias_diabetes_data.columns = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age', 'Class']
pima_indias_diabetes_data.head()

# count the number of data-points in the dataset
print("There are a total of", len(pima_indias_diabetes_data),"data-points")

pima_indias_diabetes_data_features = pima_indias_diabetes_data[['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age']]
pima_indias_diabetes_data_labels = pima_indias_diabetes_data[['Class']]
print(pima_indias_diabetes_data_features.shape)
print(pima_indias_diabetes_data_labels.shape)
```

There are 767 observations of 8 different features.

##### Analyse the label

```
count_classes = pd.value_counts(pima_indias_diabetes_data['Class'], sort = True).sort_index()
count_classes.plot(kind = 'bar')
plt.title("Pima Indians diabetes class histogram")
plt.xlabel("Class")
plt.ylabel("Frequency")

# class counts: 500 tested negative (class 0), 267 tested positive (class 1)
pima_indias_diabetes_data.groupby('Class')['Class'].count()
```

There are 500 Pima Indians who tested negative for diabetes (class 0) and 267 who tested positive (class 1).
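As a quick preview of the quantities the Gaussian naive Bayes model will need (a short sketch of my own, not part of the assignment code): the class priors are simply the class frequencies just computed, and each feature gets a per-class mean and standard deviation.

```
# Class priors: roughly 0.65 (tested negative) and 0.35 (tested positive).
print(pima_indias_diabetes_data['Class'].value_counts(normalize=True))

# Per-class Gaussian parameters for a couple of features.
print(pima_indias_diabetes_data.groupby('Class')[['Glucose', 'BMI']].agg(['mean', 'std']))
```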
##### Data distribution analysis for each feature and class label Plot the data by each feature ``` axarr = [[]]*len(pima_indias_diabetes_data_features.columns) columns = 4 rows = int( np.ceil( len(pima_indias_diabetes_data_features.columns) / columns ) ) f, fig = plt.subplots( figsize=(columns*3.5, rows*2) ) f.suptitle('Data Distributions by Feature and Class', size=16) for i, col in enumerate(pima_indias_diabetes_data_features.columns[:]): axarr[i] = plt.subplot2grid( (int(rows), int(columns)), (int(i//columns), int(i%columns)) ) axarr[i].hist( [ pima_indias_diabetes_data.loc[ pima_indias_diabetes_data.Class == 0, col ], pima_indias_diabetes_data.loc[ pima_indias_diabetes_data.Class == 1, col ] ], label=['tested negative','tested positive'], bins=np.linspace( np.percentile(pima_indias_diabetes_data[col],0.1), np.percentile(pima_indias_diabetes_data[col],99.9), 30 ), normed=True ) axarr[i].set_xlabel(col, size=12) axarr[i].set_ylim([0,0.8]) axarr[i].tick_params(axis='both', labelsize=10) if i == 0: legend = axarr[i].legend() legend.get_frame().set_facecolor('white') if i%4 != 0 : axarr[i].tick_params(axis='y', left='off', labelleft='off') else: axarr[i].set_ylabel('Fraction',size=12) plt.tight_layout(rect=[0,0,1,0.95]) # xmin, ymin, xmax, ymax plt.show() ``` #### Classify Dataset - Build a simple naive Bayes classifier ##### Split data ``` def train_test_split(features, labels, test_size=0.2): np.random.seed(4) id = np.random.rand(len(features))>test_size #print(id) features_train = features[id] labels_train = labels[id] features_test = features[np.invert(id)] labels_test = labels[np.invert(id)] return features_train, labels_train, features_test, labels_test features = pima_indias_diabetes_data.drop('Class', axis = 1) labels = pima_indias_diabetes_data['Class'] ``` #### Gaussian Naive Bayes classifier In Gaussian Naive Bayes, continuous values associated with each feature are assumed to be distributed according to a Gaussian distribution. A Gaussian distribution is also called Normal distribution. The likelihood of the features is assumed to be Gaussian, hence, conditional probability is given by: >$P(x_i | y) = \frac{1}{\sqrt{2\pi\sigma _{y}^{2} }} exp \left (-\frac{(x_i-\mu _{y})^2}{2\sigma _{y}^{2}} \right ) $ we will use norm.pdf to calculate probablity density function. ``` class GaussianNaiveBayes(): """The Gaussian Naive Bayes classifier. """ def fit(self,features, labels): """ for each label and feature combination we need to calculate the std and mean value from the features & labels. 
""" self.min_std = 0.00000001 self.ntargets= np.unique(labels).shape[0] self.target_labels = np.unique(labels) self.nfeatures = features.shape[1] self.means = np.zeros((self.ntargets,self.nfeatures)) self.stds = np.zeros((self.ntargets,self.nfeatures)) self.priors = np.zeros(self.ntargets) for _index in range(self.ntargets): # Get the boolean vector to filter for labels = i where_label = [label==self.target_labels[_index] for label in labels] self.means[_index] = np.nanmean(features[where_label],axis=0) # To avoid devide by 0/very small value issue, add a min for standard deviation to min_std = 0.00000001 self.stds[_index] = np.clip(np.nanstd(features[where_label],axis=0),self.min_std,None) #print(self.means[_index], self.stds[_index]) #Calculate the prior for given label self.priors[_index] = np.log(np.sum(where_label)/len(labels)) def predict(self,test_features): """ Classification using Bayes Rule P(Y|X) = P(X|Y)*P(Y)/P(X), or Posterior = Likelihood * Prior / Scaling Factor P(Y|X) - The posterior is the probability that sample x is of class y given the feature values of x being distributed according to distribution of y and the prior. P(X|Y) - Likelihood of data X given class distribution Y. Gaussian distribution (given by _calculate_likelihood) P(Y) - Prior (given by _calculate_prior) P(X) - Scales the posterior to make it a proper probability distribution. This term is ignored in this implementation since it doesn't affect which class distribution the sample is most likely to belong to. Classifies the sample as the class that results in the largest P(Y|X) (posterior) """ test_samples = test_features.shape[0] posterior = np.zeros((test_samples,self.ntargets)) # Naive assumption (independence): # P(x1,x2,x3|Y) = P(x1|Y)*P(x2|Y)*P(x3|Y) # Posterior is product of prior and likelihoods (ignoring scaling factor) for target_label in range(self.ntargets): posterior[:,target_label] = self.priors[target_label] + np.nansum(np.log(norm.pdf(test_features,self.means[target_label],self.stds[target_label])),axis=1) label = self.target_labels[np.argmax(posterior, axis=1)] return label def score(self,X_test, y_test): y_predict = self.predict(X_test) return (y_predict == y_test).mean() ``` Compute an estimate of the accuracy of the classifier by averaging over 10 test-train splits. Each split should randomly assign 20% of the data to test, and the rest to train. 
``` print(features.shape) test_accuracy_iterations =[] # for 10 iterations for i in range(10): features_train, labels_train, features_test, labels_test = train_test_split(features.values, labels.values, test_size=0.2) nb = GaussianNaiveBayes() nb.fit(features_train, labels_train) test_accuracy = nb.score(features_test, labels_test) test_accuracy_iterations.append(test_accuracy) print(test_accuracy_iterations) print("Average test accuracy", np.mean(test_accuracy_iterations)) ``` Validate using scikit learn naive bayes ``` from sklearn import naive_bayes test_accuracy_iterations =[] # for 10 iterations for i in range(10): features_train, labels_train, features_test, labels_test = train_test_split(features.values, labels.values, test_size=0.2) snb = naive_bayes.GaussianNB() snb.fit(features_train, labels_train) test_accuracy = snb.score(features_test, labels_test) test_accuracy_iterations.append(test_accuracy) print(test_accuracy_iterations) print("Average test accuracy", np.mean(test_accuracy_iterations)) ``` ## Part 1B Now adjust your code so that, for attribute 3 (Diastolic blood pressure), attribute 4 (Triceps skinfold thickness), attribute 6 (Body mass index), and attribute 8 (Age), it regards a value of 0 as a missing value when estimating the class-conditional distributions, and the posterior. * Compute an estimate of the accuracy of the classifier by averaging over 10 test-train splits. ### Answer Part 1B All the above work done for Part 1A will be reused for Part 1B. We will mainly be processing data to remove missing values which are 0. Impute missing values 0 for attributes 3 (Diastolic blood pressure), attribute 4 (Triceps skinfold thickness), attribute 6 (Body mass index), and attribute 8 (Age) as np.NaN. Check how many missing values are present. ``` pima_indias_diabetes_data[['BloodPressure','SkinThickness','BMI','Age']]= pima_indias_diabetes_data[['BloodPressure','SkinThickness','BMI','Age']].replace(0, np.NaN) print(pima_indias_diabetes_data.isnull().sum()) #pima_indias_diabetes_data.dropna(inplace=True) print(pima_indias_diabetes_data.isnull().sum()) pima_indias_diabetes_data.shape ``` After processing the missing data split data and build new model with new train dataset and test he accuracy. ``` processed_features = pima_indias_diabetes_data.drop('Class', axis = 1) processed_labels = pima_indias_diabetes_data['Class'] #scaler = StandardScaler() #normal_processed_features = scaler.fit_transform(processed_features) processed_test_accuracy_iterations =[] # for 10 iterations for i in range(10): p_features_train, p_labels_train, p_features_test, p_labels_test = train_test_split(processed_features.values, processed_labels.values, test_size=0.2) p_nb = GaussianNaiveBayes() p_nb.fit(p_features_train, p_labels_train) p_test_accuracy = p_nb.score(p_features_test, p_labels_test) processed_test_accuracy_iterations.append(p_test_accuracy) print(processed_test_accuracy_iterations) print("Average test accuracy for processed data", np.mean(processed_test_accuracy_iterations)) ``` ## References Following are various resources referred while writing this solution * Github projects https://github.com/sriharshams/mlnd/ * Code of codyznash https://github.com/codyznash/GANs_for_Credit_Card_Data * Tutorials of https://machinelearningmastery.com/naive-bayes-classifier-scratch-python/ * [Naive Bayes in Scikit-Learn](https://github.com/scikit-learn/scikit-learn/blob/7389dba/sklearn/naive_bayes.py#L107): Implementation of naive bayes in the scikit-learn library. 
* [ML from Scratch](https://github.com/eriklindernoren/ML-From-Scratch)
* [Naive Bayes documentation](https://scikit-learn.org/stable/modules/naive_bayes.html): Scikit-Learn documentation and sample code for Naive Bayes
* [Naive Bayes Classifiers](https://www.geeksforgeeks.org/naive-bayes-classifiers/)
* [Naive Bayes Classifier From Scratch](https://chrisalbon.com/machine_learning/naive_bayes/naive_bayes_classifier_from_scratch/)
* [Naive Bayes from scratch in Python](http://kenzotakahashi.github.io/naive-bayes-from-scratch-in-python.html)
* [Applied Machine Learning, D.A. Forsyth (approximate 18th draft)](http://luthuli.cs.uiuc.edu/~daf/courses/AML-18-Fall/AMLbook-3-Dec-18.pdf)
* Piazza & Slack discussions on CS-498 Spring 2019
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title"><b>The Knapsack Problem</b></span> by <a xmlns:cc="http://creativecommons.org/ns#" href="http://mate.unipv.it/gualandi" property="cc:attributionName" rel="cc:attributionURL">Stefano Gualandi</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.<br />Based on a work at <a xmlns:dct="http://purl.org/dc/terms/" href="https://github.com/mathcoding/opt4ds" rel="dct:source">https://github.com/mathcoding/opt4ds</a>. **NOTE:** Run the following script whenever running this script on a Google Colab. ``` import shutil import sys import os.path if not shutil.which("pyomo"): !pip install -q pyomo assert(shutil.which("pyomo")) if not (shutil.which("glpk") or os.path.isfile("glpk")): if "google.colab" in sys.modules: !apt-get install -y -qq glpk-utils else: try: !conda install -c conda-forge glpk except: pass ``` # $n$-Queens Problem The $n$-Queens puzzle is the problem of placing eight chess queens on an $n \times n$ chessboard so that no two queens threaten each other; thus, a solution requires that no two queens share the same row, column, or diagonal (source: [wikipedia](https://en.wikipedia.org/wiki/Eight_queens_puzzle)). A solution exists for all natural numbers n with the exception of $n = 2$ and $n = 3$. **Example:** For $n=8$, we have the following solution: ``` 1 . . . . . Q . . 2 . . . Q . . . . 3 . . . . . . Q . 4 Q . . . . . . . 5 . . . . . . . Q 6 . Q . . . . . . 7 . . . . Q . . . 8 . . Q . . . . . a b c d e f g h ``` ## Integer Linear Programming Model The $n$-Queens problem can be formalized with the following **ILP** model. **Data:** Size of the board $n\times n$. Let $I=:\{1,\dots,n\}$ a set of indices. **Decision Variables:** The variable $x_{ij} \in \{0,1\}$ is equal to 1 if we place a queen in position $(i,j)$ on the chessboard. **Objective function:** Since the problem is a feasibility problem, we can set the objective function equal to any constant value. **Constraints:** We need the following linear constraints, which encode the puzzle rules: 1. Each queen appears once per row: $$ \sum_{j \in I} x_{ij} = 1, \forall i \in I $$ 2. Each queen appears once per column: $$ \sum_{i \in I} x_{ij} = 1, \forall j \in I $$ 3. Each queen appears once per main diagonals: $$ \sum_{(i,j) \in D_k} x_{ij} \leq 1, D_k \mbox{ main diagonals} $$ 4. Each queen appears once per off-diagonals: $$ \sum_{(i,j) \in O_k} x_{ij} \leq 1, O_k \mbox{ off diagonals} $$ ### Main Diagonals $D_k$ Since we need to specify the pairs of indices that define as a function of $n$, we first defined the following nested loop: ``` n = 5 for j in range(-n+2,n-1): for i in range(1, n+1): if 0 < j+i <= n: print(i, j+i, end='\t') else: print(' ', end='\t') print() ``` ### Off Diagonals $_k$ Similarly, we can define the off diagonals as follows: ``` for i in reversed(range(-n+3, n)): for j in range(1, n): if 0 < n - j+i <= n: print(j, n-j+i, end='\t') else: print(' ', end='\t') print() ``` ### Full Model defined in Pyomo If we put all the definitions together, we can solve the $n$-Queens problem with the script below. 
Please note the following Pyomo syntax used to define variable $x_{ij}$ over the [RangeSet](https://pyomo.readthedocs.io/en/stable/library_reference/aml/index.html#pyomo.environ.RangeSet) $I$ and $J$:

```
model.I = RangeSet(1, n)
model.J = RangeSet(1, n)
model.x = Var(model.I, model.J, within=Binary)
```

Notice also the syntax used to define the row and column constraints, which uses a `lambda` function to define constraint rules:

```
model.row = Constraint(model.I, rule = lambda mod, i: sum(mod.x[i,j] for j in mod.J) == 1)
```

Finally, to define the main and off diagonals, we use the [ConstraintList](https://pyomo.readthedocs.io/en/stable/working_models.html) class:

```
model.mainD = ConstraintList()
#...
model.mainD.add( expr <= 1 )
```

The complete Pyomo script is as follows.

```
# Import the libraries
from pyomo.environ import ConcreteModel, Var, Objective, Constraint, SolverFactory
from pyomo.environ import maximize, Binary, RangeSet, ConstraintList

n = 8

# Create concrete model
model = ConcreteModel()

model.I = RangeSet(1, n)
model.J = RangeSet(1, n)

# Variables
model.x = Var(model.I, model.J, within=Binary)

# Objective function: any constant works, since this is a pure feasibility problem
model.obj = Objective(expr = n, sense = maximize)

# 1. Row constraints
def VincoloRighe(mod, i):
    return sum(mod.x[i,j] for j in mod.J) == 1
model.row = Constraint(model.I, rule = VincoloRighe)

# 2. Column constraints
model.column = Constraint(model.J, rule = lambda mod, j: sum(mod.x[i,j] for i in mod.I) == 1)

# 3. Main Diagonal constraints
model.mainD = ConstraintList()
# Build the list of possible pairs
for j in range(-n+2,n-1):
    expr = 0
    for i in model.I:
        if 0 < j+i <= n:
            expr += model.x[i, j+i]
    model.mainD.add( expr <= 1 )

# 4. Off Diagonal constraints
model.offD = ConstraintList()
# Build the list of possible pairs
for i in range(-n+3,n+1):
    expr = 0
    for j in model.J:
        if 0 < n-j+i <= n:
            expr += model.x[j, n-j+i]
    model.offD.add( expr <= 1 )
```

To solve the model, we use a solver factory, specifying the GLPK solver, and we inspect the Solver **status** (infeasible, unbounded, or optimal).

```
# Solve the model
sol = SolverFactory('glpk').solve(model)

# Basic info about the solution process
for info in sol['Solver']:
    print(info)
```

We inspect the optimal decision variables (only the positive ones are printed).

```
# Report solution value
print("Optimal solution value: z =", model.obj())
print("Decision variables:")
for i in model.I:
    for j in model.J:
        if model.x[i,j]() > 0:
            print("x({},{}) = {}".format(i, j, model.x[i,j]()))
```

And finally, we print a solution on a simplified chessboard $n\times n$.

```
print('\nChessboard Solution:')
for i in model.I:
    for j in model.J:
        if model.x[i,j]() > 0:
            print('Q', end=' ')
        else:
            print('.', end=' ')
    print()
```

## Plotting a solution with a Chessboard

```
# CREDIT: this solution originally appeared on Stack Overflow at:
# https://stackoverflow.com/questions/60608055/insert-queen-on-a-chessboard-with-pyplot
def PlotSolution(n, x, size=6):
    import matplotlib.pyplot as plt
    import numpy as np

    chessboard = np.zeros((n, n))
    chessboard[1::2,0::2] = 1
    chessboard[0::2,1::2] = 1

    plt.figure(figsize=(size, size))
    plt.imshow(chessboard, cmap='binary')

    for i, j in x:
        if x[i,j]() > 0:
            plt.text(i-1, j-1, '♕', color='darkorange',
                     fontsize=56*size/n, fontweight='bold',
                     ha='center', va='center')
    plt.xticks([])
    plt.yticks([])
    plt.show()

PlotSolution(n, model.x)
```
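As a small, solver-independent sanity check (my own addition, not from the original notebook), the placement read off `model.x` can be tested directly against the puzzle rules; two queens attack each other on a diagonal exactly when $|\Delta \text{row}| = |\Delta \text{column}|$.

```
# Read the queen positions off the solved model, as in the cells above.
queens = [(i, j) for i in model.I for j in model.J if model.x[i, j]() > 0]

def no_two_attack(queens):
    for a, (i1, j1) in enumerate(queens):
        for (i2, j2) in queens[a + 1:]:
            if i1 == i2 or j1 == j2 or abs(i1 - i2) == abs(j1 - j2):
                return False
    return True

print(len(queens) == n and no_two_attack(queens))   # expected: True
```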
**Title**: Upload kaggle chest X-Ray. **Date**: 12-Oct-2020 **Description**: Pneumonia accounts for over 15% of all deaths of children under 5 years old internationally. Advanced detection of pneumonia could save thousands of lives a year. In 2018 the RSNA Pneumonia Detection Challenge was posted on Kaggle, an organization for machine learning training and purpose-driven competitions in Data Science. This notebook downloads the entire RSNA Pneumonia Detection Challenge Dataset (3.6 GB) and incorporates it into a Flywheel instance specified by the supplied API-Key. A Data Use Agreement (DUA) is required to download this dataset. Reference: * https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/data # Data Use Aggreement Before downloading this data, or any data, from kaggle, you must agree to the rules of this competition: * https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/rules ``` %reload_ext autoreload %autoreload 2 %matplotlib inline ``` # Requirements: - **Python** (Preferably >= 3.6): - Have admin permissions to create Flywheel Groups and Projects. # Install and import dependencies ``` !pip install pandas pydicom flywheel-sdk tqdm kaggle jupyter ipywidgets import json import logging import os import re import time import zipfile from getpass import getpass from pathlib import Path import flywheel import pandas as pd import pydicom from tqdm.notebook import tqdm # Instantiate a logger logging.basicConfig(level=logging.INFO, format='%(asctime)s %(levelname)s %(message)s') log = logging.getLogger('root') ``` # Download kaggle dataset This requires that you have stored your Kaggle credentials in ~/.kaggle/kaggle.json. These can be acquired by creating a kaggle account at kaggle.com and using "Create New API Token" on the user account page. This dataset is currently 3.7 GB and may change in the future. Depending on the bandwidth of your internet connection, this may take some time to download. ``` !kaggle competitions download -c rsna-pneumonia-detection-challenge ``` # Initialize Constants Initialize path to dowload directory, default session label, and default acquisition label. ``` ROOT_KAGGLE_DATA = '/path/to/repository/rsna-pneumonia-detection-challenge' DEFAULT_SESSION_LABEL = 'NA' DEFAULT_ACQ_LABEL = 'Chest XR' ``` # Flywheel API Key and Client Get an API_KEY. More on this in the Flywheel SDK doc [here](https://flywheel-io.gitlab.io/product/backend/sdk/branches/master/python/getting_started.html#api-key). ``` API_KEY = getpass('Enter API_KEY here: ') ``` Instantiate the Flywheel API client ``` fw_client = flywheel.Client(API_KEY if 'API_KEY' in locals() else os.environ.get('FW_KEY')) ``` Show Flywheel logging information ``` log.info('You are now logged in as %s to %s', fw_client.get_current_user()['email'], fw_client.get_config()['site']['api_url']) ``` # Read the csv The CSV file consists of the patient id, whether the pnemonia was diagnosed (Target 0/1), and the rectangular region of the image it was found in (x,y,width,height). ``` patientId,x,y,width,height,Target 0004cfab-14fd-4e49-80ba-63a80b6bddd6,,,,,0 00436515-870c-4b36-a041-de91049b9ab4,264.0,152.0,213.0,379.0,1 ``` ``` df = pd.read_csv(Path(ROOT_KAGGLE_DATA) / 'stage_2_train_labels.csv') ``` # Container helpers Import container helper functions to find existing or create new containers. 
``` from container_helpers import ( find_or_create_group, find_or_create_project, find_or_create_subject, find_or_create_session, find_or_create_acquisition, upload_file_to_acquisition ) ``` # Create the project ``` # Initialize the group public_data_group = find_or_create_group(fw_client, 'public_data', 'public_data') # Initialize the project project_label = 'kaggle-rsna-pneumonia-detection-challenge' readme = 'https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/data' chestxray_project = find_or_create_project(project_label, public_data_group) if chestxray_project: chestxray_project.update(description=readme) ``` # Iterate through dataframe and upload Iterate through the training data csv to create the container hierarchy for this project: 1. find or create each subject encountered a. Encode presence/absence of pneumonia (Target=0/1) and the rectangular region it was found in (box) into a dictionary. 2. find or create each session (with `DEFAULT_SESSION_LABEL`) encountered 3. find or create each acquisition (with 'SeriesDescription' or `DEFAULT_ACQ_LABEL`) and add enclosed files. a. Incorporate presence/absence of pneumonia (Target) and--if found--the rectangular region it was found in (box) into the metadata of the acquisition file. ``` for i, row in tqdm(df.iterrows(), total=len(df)): log.info('Processing Subject %s.', row['patientId']) # (1) Find or create subject subject = find_or_create_subject(row['patientId'], chestxray_project) # (1a) Encode pneumonia status and rectangular region of positive status in dictionary. if row['Target']: row_dict = { 'box': { 'x': row['x'], 'y': row['y'], 'width': row['width'], 'height': row['height'] }, 'Target': row['Target'] } else: row_dict = {'Target': row['Target']} if subject: log.info('Processing Session %s.', DEFAULT_SESSION_LABEL) # (2) Find or create session session = find_or_create_session(DEFAULT_SESSION_LABEL, subject) if session: filepath = str(Path(ROOT_KAGGLE_DATA) / 'stage_2_train_images' / f"{row['patientId']}.dcm") dcm = pydicom.read_file(filepath, stop_before_pixels=True, force=True) # Pack dicoms into zip file with zipfile.ZipFile(f'/tmp/{row["patientId"]}.zip', 'w') as myzip: myzip.write(filepath) acq_label = dcm.get('SeriesDescription', DEFAULT_ACQ_LABEL) log.info('Processing Acquisition %s.', acq_label) # (3) Find or create acquisition acq = find_or_create_acquisition(acq_label, session) log.info( 'Uploading file, %s, to acquisition, %s', f'/tmp/{row["patientId"]}.zip', acq.label ) kwarg_dict = {"type": "dicom", "modality": "X-ray"} kwarg_dict["info"] = row_dict # Upload file to acquisition and # (3a) incorporate Target and box into file metadata upload_file_to_acquisition(acq, f'/tmp/{row["patientId"]}.zip', **kwarg_dict) # remove temporary zipped dicom file os.remove(f'/tmp/{row["patientId"]}.zip') ```
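One detail worth noting about the loop above (an observation plus a sketch of my own, under the assumption that a patient with several annotated boxes appears on several rows of the CSV): iterating row by row would then zip and upload the same DICOM more than once, relying on the find-or-create helpers to deduplicate. A possible variation is to aggregate all boxes per patient first and attach the whole list to the file metadata in a single upload, reusing the same helper calls shown above.

```
# Sketch: collect every box per patient before uploading (assumes `df` as above).
boxes_per_patient = {}
for pid, grp in df.groupby('patientId'):
    boxes = [
        {'x': r['x'], 'y': r['y'], 'width': r['width'], 'height': r['height']}
        for _, r in grp.iterrows() if r['Target'] == 1
    ]
    boxes_per_patient[pid] = {'Target': int(grp['Target'].max()), 'boxes': boxes}

# Each patient would then be zipped and uploaded once, e.g.:
# upload_file_to_acquisition(acq, f'/tmp/{pid}.zip', type='dicom',
#                            modality='X-ray', info=boxes_per_patient[pid])
```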
```
#import libraries
import requests
from datetime import datetime as dt
from datetime import timedelta
import pandas as pd

# get events from n days ago
iso = 887 #Pike I changed this to Yemen
limit = 400

api_url = 'https://api.acleddata.com/acled/read?terms=accept&iso={}'.format(iso)
print (api_url, type(api_url))
#creates request according to ACLED format specifications - p. 13
response = requests.get(api_url)
data = response.json()
data.keys()

data['count']
```

### From the documentation we know this is the max return

---

How can we get all the results?

```
# Let's make a function that updates our search to get the new pages
def ping_acled(api_url):
    ''' Takes one parameter, the search term for the API '''
    response = requests.get(api_url)
    data = response.json()
    return data

results = [] # empty data structure to store results
num_results = 500 # condition to continue adding pages
count = 0 # tracker of results
page = 1 #Per the documentation each page will give us more results

while num_results == 500: #if fewer than 500 (or 0) we know we have all the results
    print ("starting ", page, " ", num_results) #just to see our progress
    api_url = 'https://api.acleddata.com/acled/read?terms=accept&iso={}&page={}'.format(iso,page) #the search
    data = ping_acled(api_url) #call the previous function
    results.append(data['data']) #store in our results
    count += data['count'] #Track number of results
    num_results = data['count'] #update our condition
    page += 1 #update our page variable

print ("Total Results ", count) #Track our progress
#Pike this is a bit intimidating, I know Yemen is a mess but damn
#I was worried that the 58000 entries would break the code. Does that ever happen?

#Now I want to put them together into one giant result
super_list = []
for res in results:
    super_list += res
print (len(super_list)) #This is a giant result

#Pike changed the name to reflect the proper country
yemen_res = pd.DataFrame(super_list)
yemen_res.head()
```

### Do the right thing, take some time to look at the codebook and see what these columns are

```
yemen_res.columns
```

### Homework

---

Make a map of some ACLED Data (absolutely use the code from the Global Terrorism Database exercise)

```
from bokeh.tile_providers import get_provider, Vendors
from pyproj import Transformer
tile_provider = get_provider('STAMEN_TERRAIN')
import math
from bokeh.plotting import figure, output_notebook, show

#Pike got help from my coworker Brandon Mohr on this
#Will be borrowing this technique for other mapping projects
#as I like this better than the GTD method
yemen_map = yemen_res[["latitude", 'longitude', 'data_id']]
yemen_map['latitude'] = yemen_map['latitude'].astype(float)
yemen_map['longitude'] = yemen_map['longitude'].astype(float)
yemen_map['data_id'] = yemen_map['data_id'].astype(float)

#Pike the conversion fix you described in class
transformer = Transformer.from_crs('epsg:4326','epsg:3857')

map_dict = {}
nan_count = {}
for idx, row in yemen_map.iterrows():
    if row['data_id'] in map_dict.keys():
        if math.isnan(row["latitude"]):
            if row['data_id'] in nan_count.keys():
                nan_count[row['data_id']] += 1
            else:
                nan_count[row['data_id']] = 1
        else:
            point = transformer.transform(row["latitude"],row["longitude"])
            map_dict[row['data_id']].append([point[0],point[1]])
    else:
        if math.isnan(row["latitude"]):
            nan_count[row['data_id']] = 1
        else:
            point = transformer.transform(row["latitude"],row["longitude"])
            map_dict[row['data_id']] = [[point[0],point[1]]]
#Pike this portion of the code is essentially unchanged save for data_id in place of gname

nan_count
```
```
pts = [(19.07, 41.68), (12.01, 54.87)]
bbox = []
for pt in transformer.itransform(pts):
    bbox.append(pt)
#Pike included the Red Sea, Bab al Mandab strait and the island of Socotra for completeness sake

NPA_x = []
NPA_y = []
for k, v in map_dict.items():
    for pt in v:
        NPA_x.append(pt[0])
        NPA_y.append(pt[1])

p = figure(x_range=(bbox[0][0], bbox[1][0]), y_range=(bbox[0][1], bbox[1][1]),
           x_axis_type="mercator", y_axis_type="mercator")
p.add_tile(tile_provider)
p.circle(x = NPA_x, y = NPA_y, color= "firebrick")
show(p)
#Pike as my old ops chief would say Yemen is a show
```
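For reference, here is a small standalone sketch (my own aside, not from the notebook) of the coordinate conversion the map depends on: WGS84 latitude/longitude in degrees to Web Mercator metres, which is what the Bokeh tile provider expects.

```
from pyproj import Transformer

# With authority axis order, EPSG:4326 takes (lat, lon) — matching the calls above.
to_mercator = Transformer.from_crs('epsg:4326', 'epsg:3857')
x, y = to_mercator.transform(19.07, 41.68)   # one of the bbox corners used above
print(x, y)                                  # easting/northing in metres

# Transformer.from_crs(..., always_xy=True) would expect (lon, lat) instead.
```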
``` from lenslikelihood.power_spectra import * mass_function_model = 'shethTormen' normalization = 'As' pivot_string = '01' pivot = 0.1 structure_formation_interp_As = load_interpolated_mapping(mass_function_model, pivot_string) import numpy as np import matplotlib.pyplot as plt import matplotlib.cm as cm import os plt.rcParams['axes.linewidth'] = 2.5 plt.rcParams['xtick.major.width'] = 2.5 plt.rcParams['xtick.major.size'] = 8 plt.rcParams['xtick.minor.size'] = 5 plt.rcParams['ytick.major.width'] = 2.5 plt.rcParams['ytick.major.size'] = 8 plt.rcParams['ytick.minor.size'] = 4 plt.rcParams['ytick.labelsize'] = 15 plt.rcParams['xtick.labelsize'] = 15 from lenslikelihood.measurements import * from lenslikelihood.sampling import InterpolatedLikelihood import dill as pickle from trikde.pdfs import DensitySamples, IndepdendentLikelihoods, MultivariateNormalPriorHyperCube, CustomPriorHyperCube nbins = 20 param_names = ['LOS_normalization', 'beta', 'log10c0', 'delta_power_law_index', 'sigma_sub'] param_ranges = [all_param_ranges_version2[name] for name in param_names] load_from_pickle = True save_to_pickle = False filename_extension = '_joint_logprior' base_path = './../lenslikelihood/precomputed_likelihoods/' likelihoods = [] for lens in all_lens_names: fname = base_path + lens + filename_extension print('loading joint likelihoods for lens '+lens+' ...') f = open(fname, 'rb') single_lens_likelihood = pickle.load(f) f.close() likelihoods.append(single_lens_likelihood) likelihood_noprior = IndepdendentLikelihoods(likelihoods) ``` ## Priors on the subhalo and field halo mass functions A reasonable assumption to impose on the inference is that the number of subhalos varies proportionally with the number of field halos, since subhalos are accreted from the field. We can enforce this by choosing an expected amplitude for the subhalo mass function in $\Lambda$CDM, and then coupling variations to $\Sigma_{\rm{sub}}$ around this value to $\delta_{\rm{LOS}}$. 
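Written out, and assuming `CustomPriorHyperCube` turns the returned $\chi^2$ into a Gaussian weight $\propto \exp\left(-\chi^2/2\right)$ (an assumption about that class, not something stated here), the coupling implemented by `couple_mass_functions` in the next cell is

$$\chi^2 = \frac{\left(\Sigma_{\rm{sub}}/\Sigma_{\rm{sub,theory}} - \delta_{\rm{LOS}}\right)^2}{\sigma_{\rm{c}}^{2}},$$

where $\Sigma_{\rm{sub,theory}}$ is the expected $\Lambda$CDM amplitude (`sigma_sub_theory`) and $\sigma_{\rm{c}}$ is the `coupling_strength` (0.2 by default), so that $\Sigma_{\rm{sub}}$ is pulled toward $\delta_{\rm{LOS}}\,\Sigma_{\rm{sub,theory}}$.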
``` def couple_mass_functions(samples, sigma_sub_theory=0.025, coupling_strength=0.2): delta_los_samples = samples[:, 0] sigma_sub_samples = samples[:, -1] delta_sigma_sub = sigma_sub_samples/sigma_sub_theory chi2 = (delta_sigma_sub - delta_los_samples)**2/coupling_strength**2 return chi2 extrapolate_likelihood = True sigma_sub_theory = 0.05 kwargs_prior = {'sigma_sub_theory': sigma_sub_theory} prior_on_mass_functions = CustomPriorHyperCube(couple_mass_functions, param_names, param_ranges, nbins, kwargs_prior) likelihood = IndepdendentLikelihoods(likelihoods + [prior_on_mass_functions]) interpolated_lens_likelihood = InterpolatedLikelihood(likelihood, param_names, param_ranges, extrapolate=extrapolate_likelihood) ``` ### Plot the likelihood First we show the likelihood as inferred from the lenses with no additional modeling assumptions ``` from trikde.triangleplot import TrianglePlot fig = plt.figure() cmap = 'jet' triangle_plot = TrianglePlot([likelihood_noprior]) triangle_plot.set_cmap(cmap, marginal_col='k') triangle_plot.truth_color = 'k' truths = {'sigma_sub': 1.05, 'LOS_normalization': 1., 'beta': 0.85, 'log10c0': np.log10(18.5), 'delta_power_law_index': 0.} axes = triangle_plot.make_triplot(filled_contours=False, show_intervals=False, contour_alpha=1., contour_colors=['k', 'k'], show_contours=True, contour_levels=[0.32], truths=truths) beta = r'$\beta$' beta_ticks = [-0.2, 3, 6, 9, 12, 15] c0 = r'$\log_{10} c_8$' c0_ticks = [0., 1.0, 2.0, 3.0, 4.0] delta_power_law_index = r'$\Delta \alpha$' dpli_ticks = [-0.6, -0.3, 0., 0.3, 0.6, 0.9] sigma_sub = r'$\Sigma_{\rm{sub}} \ \left[\rm{kpc^{-2}}\right]$' sigma_sub_ticks = [0., 0.025, 0.05, 0.075, 0.1] delta_LOS = r'$\delta_{\rm{LOS}}$' dlos_ticks = [0.0, 0.5, 1., 1.5, 2., 2.5] ticksize = 14 labelsize = 18 rotation = 40 axes[5].set_ylabel(beta, fontsize=labelsize) axes[5].set_yticks(beta_ticks) axes[5].set_yticklabels(beta_ticks, fontsize=ticksize) axes[10].set_ylabel(c0, fontsize=labelsize) axes[10].set_yticks(c0_ticks) axes[10].set_yticklabels(c0_ticks, fontsize=ticksize) axes[15].set_ylabel(delta_power_law_index, fontsize=labelsize) axes[15].set_yticks(dpli_ticks) axes[15].set_yticklabels(dpli_ticks, fontsize=ticksize) axes[20].set_ylabel(sigma_sub, fontsize=labelsize) axes[20].set_yticks(sigma_sub_ticks) axes[20].set_yticklabels(sigma_sub_ticks, fontsize=ticksize) axes[20].set_xlabel(delta_LOS, fontsize=labelsize) axes[20].set_xticks(dlos_ticks) axes[20].set_xticklabels(dlos_ticks, fontsize=ticksize, rotation=rotation) axes[21].set_xlabel(beta, fontsize=labelsize) axes[21].set_xticks(beta_ticks) axes[21].set_xticklabels(beta_ticks, fontsize=ticksize, rotation=rotation) axes[22].set_xlabel(c0, fontsize=labelsize) axes[22].set_xticks(c0_ticks) axes[22].set_xticklabels(c0_ticks, fontsize=ticksize, rotation=rotation) axes[23].set_xlabel(delta_power_law_index, fontsize=labelsize) axes[23].set_xticks(dpli_ticks) axes[23].set_xticklabels(dpli_ticks, fontsize=ticksize, rotation=rotation) axes[24].set_xlabel(sigma_sub, fontsize=labelsize) axes[24].set_xticks(sigma_sub_ticks) axes[24].set_xticklabels(sigma_sub_ticks, fontsize=ticksize, rotation=rotation) from mpl_toolkits.axes_grid1 import make_axes_locatable from mpl_toolkits.axes_grid1.inset_locator import inset_axes ax_idx = 9 axins1 = inset_axes(axes[ax_idx], width="300%", # width = 50% of parent_bbox width height="15%", # height : 5% loc='upper right') empty = np.zeros((20, 20)) empty[0,0] = 1 im1 = axes[ax_idx].imshow(empty, interpolation='None', cmap=cmap) cb = fig.colorbar(im1, 
cax=axins1, orientation="horizontal", ticks=[0, 0.25, 0.5, 0.75, 1]) axes[ax_idx].set_visible(False) cb.set_label('probability', fontsize=15) #plt.savefig('./figures/lensing_likelihood.pdf') ``` ### Likelihood with a prior Now we show the likelihood after adding the prior coupling $\Sigma_{\rm{sub}}$ to $\delta_{LOS}$, assuming $\Sigma_{\rm{sub}} = 0.05 \rm{kpc^{-1}}$ in $\Lambda$CDM, corresponding to doubly efficient tidal disruption of halos between in the Milky Way relative to massive ellipticals ``` fig = plt.figure() triangle_plot = TrianglePlot([likelihood]) triangle_plot.set_cmap(cmap, marginal_col='k') triangle_plot.truth_color = 'k' truths= {'sigma_sub': 1.05, 'LOS_normalization': 1., 'beta': 0.85, 'log10c0': np.log10(18.5), 'delta_power_law_index': 0.} axes = triangle_plot.make_triplot(filled_contours=False, show_intervals=False, show_contours=True, contour_levels=[0.32], contour_colors=['k', 'k'], display_params=['LOS_normalization', 'beta', 'log10c0', 'delta_power_law_index'], truths=truths) axes[4].set_ylabel(beta, fontsize=labelsize) axes[4].set_yticks(beta_ticks) axes[4].set_yticklabels(beta_ticks, fontsize=ticksize) axes[8].set_ylabel(c0, fontsize=labelsize) axes[8].set_yticks(c0_ticks) axes[8].set_yticklabels(c0_ticks, fontsize=ticksize) axes[12].set_ylabel(delta_power_law_index, fontsize=labelsize) axes[12].set_yticks(dpli_ticks) axes[12].set_yticklabels(dpli_ticks, fontsize=ticksize) axes[12].set_xlabel(delta_LOS, fontsize=labelsize) axes[12].set_xticks(dlos_ticks) axes[12].set_xticklabels(dlos_ticks, fontsize=ticksize, rotation=rotation) axes[13].set_xlabel(beta, fontsize=labelsize) axes[13].set_xticks(beta_ticks) axes[13].set_xticklabels(beta_ticks, fontsize=ticksize, rotation=rotation) axes[14].set_xlabel(c0, fontsize=labelsize) axes[14].set_xticks(c0_ticks) axes[14].set_xticklabels(c0_ticks, fontsize=ticksize, rotation=rotation) axes[15].set_xlabel(delta_power_law_index, fontsize=labelsize) axes[15].set_xticks(dpli_ticks) axes[15].set_xticklabels(dpli_ticks, fontsize=ticksize, rotation=rotation) axes[2].annotate(r'$\Sigma_{\rm{sub(predicted)}} = 0.05 \rm{kpc^{-2}}$', fontsize=22, xy=(0.26, 0.1), xycoords='axes fraction') ax_idx = 7 axins1 = inset_axes(axes[ax_idx], width="200%", # width = 50% of parent_bbox width height="10%", # height : 5% loc='upper right') empty = np.zeros((20, 20)) empty[0,0] = 1 im1 = axes[ax_idx].imshow(empty, interpolation='None', cmap=cmap) cb = fig.colorbar(im1, cax=axins1, orientation="horizontal", ticks=[0, 0.25, 0.5, 0.75, 1]) axes[ax_idx].set_visible(False) cb.set_label('probability', fontsize=15) #plt.savefig('./figures/lensing_likelihood_w.pdf') ``` ## Systematic modeling errors We allow for systematic errors in the model by changing the internal mapping between the parameters describing the mass function and concentration-mass relation ``` error_type = 'INTERPOLATED_GRID' if error_type == 'INTERPOLATED_GRID': f = open('./systematic_error_interpolations/systematic_error_interpolation_lowfit_'+mass_function_model+'_pivot'+pivot_string+'_3D', 'rb') systematic_interp_lowfit = pickle.load(f) f.close() f = open('./systematic_error_interpolations/systematic_error_interpolation_highfit_'+mass_function_model+'_pivot'+pivot_string+'_3D', 'rb') systematic_interp_highfit = pickle.load(f) f.close() elif error_type == 'RELATIVE': delta_delta_los = 0.1 delta_beta = 0.2 delta_c8 = 0.2 delta_delta_alpha = 0.05 ``` ## Final setup ``` delta_los_range = [0., 2.5] beta_range = [-0.2, 15.] log10c0_range = [0., 4.] 
delta_alpha_range = [-0.6, 0.9] sigma_sub_range = [0., 0.125] param_ranges_lensing = [delta_los_range, beta_range, log10c0_range, delta_alpha_range, sigma_sub_range] n_draw = 50000 extrapolate_ranges = [[0., 2.5], [-0.2, 15.], [0., 4.0], delta_alpha_range, sigma_sub_range] param_ranges_pk = [[0.6645, 1.2645], [-0.1, 0.1], [-0.01, 0.01]] arun_ticks = [-0.10, -0.05, 0.00, 0.05, 0.10] brun_ticks = [-0.010, -0.005, 0.000, 0.005, 0.01] ns_ticks = [0.7645, 0.9645, 1.1645] ``` ## Compute the likelihood of the power spectrum parameters We can compute the likelihood the parameters describing $P\left(k\right)$, adding systematic models errors by hand ``` if error_type == 'INTERPOLATED_GRID': samples_no_sys, like_no_sys = sample_power_spectra_with_systematic_interp(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood, systematic_interp_highfit, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges, log10c8_sys=False, delta_los_sys=False, delta_alpha_sys=False, beta_sys=False, three_D=True) samples_sys1, like_sys1 = sample_power_spectra_with_systematic_interp(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood, systematic_interp_lowfit, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges, three_D=True) samples_sys2, like_sys2 = sample_power_spectra_with_systematic_interp(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood, systematic_interp_highfit, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges, three_D=True) samples_sys_noamp_1, like_sys_noamp_1 = sample_power_spectra_with_systematic_interp(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood, systematic_interp_lowfit, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges, log10c8_sys=False, delta_los_sys=False, three_D=True) samples_sys_noamp_2, like_sys_noamp_2 = sample_power_spectra_with_systematic_interp(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood, systematic_interp_highfit, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges, log10c8_sys=False, delta_los_sys=False, three_D=True) samples_sys_noslope, like_sys_noslope = sample_power_spectra_with_systematic_interp(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood, systematic_interp_lowfit, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges, delta_alpha_sys=False, beta_sys=False, three_D=True) elif error_type == 'RELATIVE': samples_sys1, like_sys1 = sample_power_spectra(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood, delta_c8, delta_beta, delta_delta_los, delta_delta_alpha, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges) samples_sys2, like_sys2 = sample_power_spectra(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood, -delta_c8, -delta_beta, -delta_delta_los, -delta_delta_alpha, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges) samples_no_sys, like_no_sys = sample_power_spectra(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood, 0., 0., 0., 0., extrapolate=extrapolate_likelihood, 
extrapolate_ranges=extrapolate_ranges) samples_sys_noamp_1, like_sys_noamp_1 = sample_power_spectra(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood, 0., delta_beta, 0., delta_delta_alpha, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges) samples_sys_noamp_2, like_sys_noamp_2 = sample_power_spectra(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood, 0., -delta_beta, 0., delta_delta_alpha, extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges) samples_sys_noslope, like_sys_noslope = sample_power_spectra(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood, -delta_c8, 0., 0., 0., extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges) samples_sys_noslope_2, like_sys_noslope_2 = sample_power_spectra(n_draw, param_ranges_pk, param_ranges_lensing, structure_formation_interp_As, interpolated_lens_likelihood, delta_c8, 0., 0., 0., extrapolate=extrapolate_likelihood, extrapolate_ranges=extrapolate_ranges) ``` ## Plot the likelihood of the parameters describing the power spectrum ``` nbins = 20 param_names_pk = [r'$n_s$', r'$a_{\rm{run}}$', r'$b_{\rm{run}}$'] samples_marginalized = np.vstack((np.vstack((np.vstack((np.vstack((np.vstack((samples_no_sys, samples_sys1)), samples_sys2)), samples_sys_noamp_1)), samples_sys_noamp_2)), samples_sys_noslope)) likelihood_marginalized = np.append(np.append(np.append(np.append(np.append(like_no_sys, like_sys1), like_sys2), like_sys_noamp_1), like_sys_noamp_2), like_sys_noslope) # samples_marginalized = samples_no_sys # likelihood_marginalized = like_no_sys density_marginalized = DensitySamples(samples_marginalized, param_names_pk, likelihood_marginalized, param_ranges_pk, nbins=nbins, use_kde=False, bandwidth_scale=1.) pk_likelihood_marginalized = IndepdendentLikelihoods([density_marginalized]) triplot = TrianglePlot([pk_likelihood_marginalized]) cmap = 'jet' triplot.set_cmap(cmap, marginal_col='k') triplot.truth_color = 'k' truths= {r'$n_s$': 0.9645, r'$a_{\rm{run}}$': 0., r'$b_{\rm{run}}$': 0.} axes = triplot.make_triplot(filled_contours=False, show_intervals=False, show_contours=True, contour_levels=[0.32], contour_colors=['k', 'k']) axes[3].set_yticks(arun_ticks) axes[3].set_yticklabels(arun_ticks, fontsize=ticksize) axes[6].set_yticks(brun_ticks) axes[6].set_yticklabels(brun_ticks, fontsize=ticksize) axes[6].set_xticks(ns_ticks) axes[6].set_xticklabels(ns_ticks, fontsize=ticksize) axes[7].set_xticks(arun_ticks) axes[7].set_xticklabels(arun_ticks, fontsize=ticksize) axes[8].set_xticks(brun_ticks) axes[8].set_xticklabels(brun_ticks, fontsize=ticksize) ax_idx = 1 axins1 = inset_axes(axes[ax_idx], width="200%", # width = 50% of parent_bbox width height="10%", # height : 5% loc=6) empty = np.zeros((20, 20)) empty[0,0] = 1 im1 = axes[ax_idx].imshow(empty, interpolation='None', cmap=cmap) cb = fig.colorbar(im1, cax=axins1, orientation="horizontal", ticks=[0, 0.25, 0.5, 0.75, 1]) axes[ax_idx].set_visible(False) cb.set_label('probability', fontsize=15) plt.savefig('./figures/qP_likelihood_'+mass_function_model+'_pivot'+pivot_string+'.pdf') import pickle f = open('./interpolated_pq_likelihoods/Pk_likelihood_'+mass_function_model+'_pivot'+pivot_string, 'wb') pk_likelihood_marginalized_interp = InterpolatedLikelihood(pk_likelihood_marginalized, param_names_pk, param_ranges_pk) pickle.dump(pk_likelihood_marginalized_interp, f) ```
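One small practical note on the cell above (an aside, not from the original notebook): the output file is opened but never closed, so it is worth closing it — or using a `with` block — to be sure the pickle is flushed to disk; the object can then be reloaded later. The path below simply reuses the variables defined above.

```
# Close the file opened above so the pickle is fully written out.
f.close()

# Later, the interpolated likelihood can be loaded back like this:
path = './interpolated_pq_likelihoods/Pk_likelihood_' + mass_function_model + '_pivot' + pivot_string
with open(path, 'rb') as fh:
    pk_likelihood_loaded = pickle.load(fh)
```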
# Content: 1. [Definitions](#1.-Definitions) 2. [The root finding problem](#2.-The-root-finding-problem) 3. [Fixed point iteration](#3.-Fixed-point-iteration) >3.1 [The cobweb diagram](#3.1-The-cobweb-diagram) >3.2 [Fixed point iteration theorem](#3.2-Fixed-point-iteration-theorem) >3.3 [The code](#3.3-The-code) 4. [Bisection method](#4.-Bisection-method) # 1. Definitions ![board%20work%20-32.jpg](../boardwork/board%20work%20-32.jpg) [Weierstrass function](https://en.wikipedia.org/wiki/Weierstrass_function) is a peculiar function. It is continuous on the real number line but not differentiable anywhere. ``` import numpy as np import matplotlib.pyplot as plt def weierstrass(a,b,M,x): val = 0.0 for n in range(0,M): val = val + a**n * np.cos(b**n*np.pi*x) return val x = np.linspace(-2,2,1000) # 1000 points between -2 and +2 a=0.5 b=3.0 N=x.size y=np.zeros(N) M=1 for i in range(N): y[i]=weierstrass(a,b,M,x[i]) plt.plot(x, y, 'b-', label='M=1') plt.title('Weierstrass function, M=1') plt.legend() plt.show() M=3 for i in range(N): y[i]=weierstrass(a,b,M,x[i]) plt.plot(x, y, 'b-', label='M=3') plt.title('Weierstrass function, M=3') plt.legend() plt.show() M=10 for i in range(N): y[i]=weierstrass(a,b,M,x[i]) plt.plot(x, y, 'b-', label='M=10') plt.title('Weierstrass function, M=10') plt.legend() plt.show() ``` --- Homework-16: Find examples for polynomial, rational, trigonometric, exponential and logarithmic functions that are in $C^\infty[{\bf R}],~{\rm where}~{\bf R}$ is the real number line. --- ## 2. The root-finding problem ![board%20work%20-33.jpg](../boardwork/board%20work%20-33.jpg) ## 3. Fixed point iteration ![board%20work%20-34.jpg](../boardwork/board%20work%20-34.jpg) ``` import numpy as np def f(x): val=x-np.sqrt(10.0/x) return val def g(x): val=np.sqrt(10.0/x) return val x=1 # initial guess, x0 dx=x i=0 while dx > 1e-3: dx=np.abs(x-g(x)) print('Iteration: ',i,' x:',x,' g(x):',g(x),' f(x): ', f(x)) x=g(x) i=i+1 print('Exact root is x:',np.power(10.0,1.0/3.0)) ``` Here is another elegant way to print the output ``` x=1.0 for i in range(0,15): gx=g(x) fx=f(x) fstring=(f'''Iteration={i:5d} x={x:10.4f} g(x)={gx:10.4f} f(x)={np.abs(fx):10.4f}''') # using f-string print(fstring) x=g(x) out=(f'''Exact root is x={np.power(10.0,1.0/3.0):10.4f}''') print(out) #other way for formatted print #mynumber=3.14 #print('{:10.8f}'.format(mynumber)) ``` ### 3.1 The cobweb diagram ![board%20work%20-35.jpg](../boardwork/board%20work%20-35.jpg) ![board%20work%20-36.jpg](../boardwork/board%20work%20-36.jpg) ![board%20work%20-37.jpg](../boardwork/board%20work%20-37.jpg) ``` def g_fn(x): val=np.sqrt(10.0/x) return val N=15 x=np.zeros(N,float) g=np.zeros(N,float) x0=1.0 # initial guess Ni=10 for i in range(0,Ni): x[i]=x0 g[i]=g_fn(x0) x0=g[i] #print(x,g) import numpy as np import matplotlib.pyplot as plt fig = plt.figure() # comment if square plot is not needed ax = fig.add_subplot(111) # comment if square plot is not needed plt.xlim(0, 4) plt.ylim(0, 4) x_grids = np.linspace(0,4,100) N=x_grids.size g_grids=np.zeros(N) for i in range(N): g_grids[i]=g_fn(x_grids[i]) plt.plot(x_grids,x_grids,'k-',label='x') plt.plot(x_grids,g_grids,'b-',label='g(x)') xval=[x[0],x[0]] gval=[x[0],g[0]] plt.plot(xval,gval) plt.grid() for i in range(0,6): # horizontal line, same y-value xval=[x[i],g[i]] gval=[g[i],g[i]] plt.plot(xval,gval) # vertical line, same x-value xval=[g[i],x[i+1]] gval=[g[i],g[i+1]] plt.plot(xval,gval) ax.set_aspect('equal', adjustable='box') # comment if square plot is not needed plt.title('Cobweb diagram for 
$x=\sqrt{10/x}$') plt.legend() plt.show() ``` ### Let's try another problem: $x - 1/x^2 = 0;~g(x)=1/x^2;~x_0 = 0.1$ ``` import numpy as np def g_fn(x): val=1.0/x**2 return val def f_fn(x): val=x-1.0/x**2 return val x=0.1 for i in range(0,4): gx=g_fn(x) fx=f_fn(x) fstring=(f'''Iteration={i:5d} x={x:10.4f} g(x)={gx:10.4f} f(x)={np.abs(fx):10.4f}''') # using f-string print(fstring) x=g_fn(x) ``` Diverges! ### 3.2 Fixed point iteration theorem ![board%20work%20-38.jpg](../boardwork/board%20work%20-38.jpg) ![board%20work%20-39.jpg](../boardwork/board%20work%20-39.jpg) ![board%20work%20-40.jpg](../boardwork/board%20work%20-40.jpg) ![board%20work%20-41.jpg](../boardwork/board%20work%20-41.jpg) ![board%20work%20-42.jpg](../boardwork/board%20work%20-42.jpg) ![board%20work%20-43.jpg](../boardwork/board%20work%20-43.jpg) ### 3.3 The code ``` import numpy as np # fn is the g(x) in x = g(x) that we want to solve # x is the initial guess, x0 # xthresh is convergence thershold # maxeval - maximum number of evaluation of fn # iprint control printing, iprint = 1 for extra output def fixedpoint(fn, x, xthresh, maxeval, iprint): if iprint == 1: print('#iter x g(x) dx') ieval=0 g=fn(x) ieval=ieval+1 dx=np.abs(x-g) iiter=0 while dx > xthresh: g=fn(x) ieval=ieval+1 dx=np.abs(x-g) if iprint == 1: print('{:5d}{:15.6e}{:15.6e}{:15.6e}'.format(iiter,x, g, dx)) if ieval >= maxeval: print('Exiting fixed-point iteration, maximum function evaluations reached') break x=g iiter=iiter+1 return x print('Exiting fixed-point iteration, convergence reached') def fn_g(x): val=np.sqrt(10.0/x) return val x0 = 1.0 xthresh = 1E-5 maxeval = 100 iprint=1 x = fixedpoint(fn_g, x0, xthresh, maxeval,iprint) print('The solution is: ',x) ``` ### Let's try another problem: $\exp(-x) + x/5 - 1 = 0$ Let's look at the graphical solution by plotting the function $f(x)$ and see where it takes the value zero. ``` import numpy as np import matplotlib.pyplot as plt def f(x): val=np.exp(-x)+x/5.0-1 return val xmin=-5.0 xmax=10.0 plt.xlim(xmin, xmax) plt.ylim(-3, 10) x = np.linspace(xmin,xmax,100) N=x_grids.size y=np.zeros(N) for i in range(N): y[i]=f(x[i]) plt.plot(x,x*0,'k-') plt.plot(x,y,'b-') plt.grid() plt.show() ``` There are two roots for this equation. One at 0.0 and another near 5.0. There are two ways of rearranging the equation to apply the fixed-point iteration $x_{n+1}=g(x_n)$. * Option-1: $g_1(x)=5\left[ 1- \exp(-x) \right]$ * Option-2: $g_2(x)=-\log\left[ 1 - x/5 \right]$ ``` def g1(x): val=5 * ( 1 - np.exp(-x) ) return val x0 = 2 # somewhere in between both the solutions maxeval = 20 xthresh = 0.0001 iprint=1 x = fixedpoint(g1, x0, xthresh, maxeval,iprint) print('The solution is: ',x) def g2(x): val=-np.log(1-x/5.0) return val x0 = 2.0 # somewhere in between both the solutions maxeval = 10 xthresh = 0.0001 iprint=1 x = fixedpoint(g2, x0, xthresh, maxeval,iprint) print('The solution is: ',x) ``` --- Homework-17: $For~the~above~example,~using~the~fixed~point~convergence~relation~explain~why~using~g_1(x)~results~in~the~solution~x^*=4.965~while~g_2(x)~results~in~x^*=0.0.~In~both~cases,~use~x_0=2.0~as~the~initial~guess.$ --- ## 4. 
Bisection method ![board%20work%20-44.jpg](../boardwork/board%20work%20-44.jpg) ![board%20work%20-45.jpg](../boardwork/board%20work%20-45.jpg) ![board%20work%20-46.jpg](../boardwork/board%20work%20-46.jpg) ``` import numpy as np def bisection(fn, a0, b0, xthresh, maxeval, iprint): if iprint == 1: print('#iter a b x dx') ieval=0 iiter=1 a=a0 b=b0 dx = abs(a-b) while dx > xthresh or iiter < 10: x = (a+b)/2.0 dx = abs(a-b) fx = fn(x) fb = fn(b) if (fb < xthresh): # handle an exception print('The upper limit seems to be a root. Stopping program.') x=b break if iprint == 1: print('{:5d}{:15.6e}{:15.6e}{:15.6e}{:15.6e}{:15.6e}{:15.6e}'.format(iiter, a, b, x, dx,fx,fb)) if fx*fb > 0: b = x else: a = x ieval=ieval+2 if ieval >= maxeval: print('Exiting fixed-point iteration, maximum function evaluations reached') break iiter=iiter+1 print('Exiting fixed-point iteration, convergence reached') return x def fn_f(x): val=np.exp(-x)+x/5.0-1 return val a = -25.0 b = 30.0 maxeval = 100 xthresh = 0.0001 iprint=1 x = bisection(fn_f, a, b, xthresh, maxeval,iprint) print('The solution is: ',x) def fn_f(x): val=(x-1)**2 return val a = -10 b = 1 maxeval = 100 xthresh = 0.0001 iprint=1 x = bisection(fn_f, a, b, xthresh, maxeval,iprint) print('The solution is: ',x) ```
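A useful back-of-the-envelope check on the runs above (my own aside): each bisection step halves the bracket, so its width after $n$ steps is $(b-a)/2^n$, and reaching a tolerance `xthresh` takes about $\lceil \log_2\left((b-a)/\texttt{xthresh}\right)\rceil$ steps, ignoring the extra minimum-iteration condition in the loop above.

```
import math

def bisection_steps(a, b, tol):
    """Smallest n with (b - a) / 2**n <= tol."""
    return math.ceil(math.log2(abs(b - a) / tol))

# For the first call above (a = -25, b = 30, xthresh = 1e-4):
print(bisection_steps(-25.0, 30.0, 1e-4))   # 20 halvings of the bracket
```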
# Lecture 23: Differential Equations and $e^{At}$

## The differential equation $\frac{du}{dt} = Au$

Consider the first-order system of differential equations $\left\{\begin{matrix} \frac{du_1}{dt} & = & -u_1 & + 2u_2\\ \frac{du_2}{dt} & = & u_1 & -2u_2 \end{matrix}\right.$ with initial condition $u(0) = \begin{bmatrix}u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$; we want the general solution $u(t)$.

First, the system gives the coefficient matrix $A = \begin{bmatrix} -1 & 2 \\ 1 & -2 \end{bmatrix}$, whose eigenvalues are $\left\{\begin{matrix} \lambda_1 & = & 0\\ \lambda_2 & = & -3 \end{matrix}\right.$ with eigenvectors $\left\{\begin{matrix} x_1 & = & \begin{bmatrix} 2 \\ 1 \end{bmatrix} \\ x_2 & = & \begin{bmatrix} 1 \\ -1 \end{bmatrix} \end{matrix}\right.$. The solution can then be written as $u(t) = c_1 e^{\lambda_1 t} x_1 + c_2 e^{\lambda_2 t}x_2$. To check this, take $u = e^{\lambda_1 t}x_1$; then $\frac{du}{dt} = \lambda_1 e^{\lambda_1 t}x_1 = A e^{\lambda_1 t}x_1 = Au$.

Next, the initial condition $u(0)$ gives $c_1 = \frac{1}{3}, c_2=\frac{1}{3}$. The final solution is therefore $u(t) = \frac{1}{3}\begin{bmatrix}2 \\ 1\end{bmatrix} + \frac{1}{3}e^{-3t}\begin{bmatrix}1 \\ -1\end{bmatrix}$, and the equation has a steady state $u(\infty) = \frac{1}{3}\begin{bmatrix}2 \\ 1\end{bmatrix}$, because $e^{-3t} \rightarrow 0$ as $t \rightarrow \infty$.

This leads to the relationship between the long-term behavior of the equation and the eigenvalues:

* Stability: the solution eventually decays to zero, $u(t) \rightarrow 0$. This requires $e^{\lambda t} \rightarrow 0$, so the equation is stable when the real parts of all eigenvalues are less than $0$ (the imaginary parts only contribute oscillation).
* Steady state: the solution eventually converges to some value. This requires one eigenvalue equal to $0$ while all remaining eigenvalues are less than $0$.
* Blowup: if any eigenvalue has a real part greater than zero, the solution does not converge.

A quick trick for deciding whether both eigenvalues of an arbitrary $2 \times 2$ matrix are negative: for $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, the eigenvalues satisfy $\lambda_1 + \lambda_2 = a + d$ and $\lambda_1 \lambda_2 = \det A = ad - bc$. So if both eigenvalues are negative, then $a + d < 0$ and $\det A > 0$.

Summary: the original system has two mutually coupled unknown functions $u_1, u_2$; the role of the eigenvalues and eigenvectors is to decouple them, that is, to diagonalize. Returning to the original equation $\frac{du}{dt} = Au$, if we express $u$ as a linear combination of the eigenvectors, $u=Sv$, then $\frac{du}{dt} = Au \Rightarrow S\frac{dv}{dt} = ASv \Rightarrow \frac{dv}{dt} = S^{-1}ASv = \Lambda v$. The new system is decoupled, $\left\{\begin{matrix} \frac{dv_1}{dt} & = & \lambda_1v_1 \\ \frac{dv_2}{dt} & = & \lambda_2v_2 \\ & \vdots & \\ \frac{dv_n}{dt} & = & \lambda_nv_n \end{matrix}\right.$, each component has the solution $v_i(t) = e^{\lambda_i t}v_i(0)$, i.e. $v(t) = e^{\Lambda t}v(0)$, and the original equation has the solution $u(t)=Se^{\Lambda t}S^{-1}u(0)$.

## The exponential matrix $e^{At}$

An exponential matrix is a matrix appearing in an exponent, such as $e^{At}$. The previous section used the result $e^{At} = Se^{\Lambda t}S^{-1}$; here is the proof.

Expanding $e^{At}$ with the Taylor series $e^x = \sum ^{\infty}_{0} \frac{x^n}{n!}$ gives

$$e^{At} = I + At + \frac{(At)^2}{2} + \dots + \frac{(At)^n}{n!} + \dots$$
$$e^{At} = I + At + \frac{A^2}{2}t^2 + \dots + \frac{A^n}{n!}t^n + \dots$$
$$e^{At} = SS^{-1} + S\Lambda S^{-1}t + S\frac{\Lambda^2}{2}S^{-1}t^2 + \dots + S\frac{\Lambda^n}{n!}S^{-1}t^n + \dots$$
$$e^{At} = S\left(I + \Lambda t + \frac{\Lambda^2}{2}t^2 + \dots + \frac{\Lambda^n}{n!}t^n + \dots\right)S^{-1}$$
$$e^{At} = Se^{\Lambda t}S^{-1}$$

Extensions:

* Geometric series: $\frac{1}{1-x} = \sum^{\infty}_{0} x^n$
* The matrix version of this series: $(I - At)^{-1} = I + At + (At)^2 + \dots + (At)^n + \dots$

## Higher-order differential equations

For a second-order differential equation ${y}''+b{y}'+ky=0$, we can construct the system $\left\{\begin{matrix}{y}'' & = & -b{y}' & -ky\\ {y}' & = & {y}' & \end{matrix}\right.$, which in matrix form is $\begin{bmatrix}{y}'' \\ {y}' \end{bmatrix} = \begin{bmatrix} -b & -k \\ 1 & 0 \end{bmatrix}\begin{bmatrix}{y}' \\ {y} \end{bmatrix}$.

Extending this to fifth order, ${y}'''''+b{y}''''+c{y}'''+d{y}''+e{y}'+fy=0$, the matrix form is $\begin{bmatrix} {y}''''' \\ {y}'''' \\ {y}''' \\ {y}'' \\ {y}' \end{bmatrix} = \begin{bmatrix} -b & -c & -d & -e & -f \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0\end{bmatrix}\begin{bmatrix} {y}'''' \\ {y}''' \\ {y}'' \\ {y}' \\ y \end{bmatrix}$.
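As a quick numerical cross-check of the worked example above (a small sketch I am adding here; `scipy.linalg.expm` computes the matrix exponential directly):

```
import numpy as np
from scipy.linalg import expm

A = np.array([[-1., 2.],
              [1., -2.]])
u0 = np.array([1., 0.])

# Diagonalize A = S Λ S^{-1}; the eigenvalues come out as 0 and -3.
lam, S = np.linalg.eig(A)

def u(t):
    # u(t) = S e^{Λt} S^{-1} u(0)
    return S @ np.diag(np.exp(lam * t)) @ np.linalg.inv(S) @ u0

for t in (0.0, 1.0, 5.0):
    assert np.allclose(u(t), expm(A * t) @ u0)   # the two expressions agree

print(u(50.0))   # ≈ [0.6667, 0.3333], the steady state (2/3, 1/3)
```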
Project No.: 3 Time Taken: 3 days Difficulty: Intermediate. This is the toughest dataset I've worked with. Learnt a lot. Still a long way to go... Would love it if you left a comment with advice on where I could have improved, what you liked/disliked about my work, or any thing else. And if you like it, please give it an upvote! ``` # This Python 3 environment comes with many helpful analytics libraries installed # It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python # For example, here's several helpful packages to load in import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) # Input data files are available in the "../input/" directory. # For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory import os print(os.listdir("../input")) # Any results you write to the current directory are saved as output. ``` # 1. The Problem **What is the problem?** Task(T): Predicting the cost of a used car in India. Experience(E): Data collected from various sources and distributed across various locations in India. Performance(P): Mean Absolute Error **My plan of action:** * Clean the data (missing values and categorical variables).'. * Build the model and check the MAE. * Try to improve the model. * Brand matters too! I could select the brand name of the car and treat them as categorical data. * Filling the missing values in New_Price might help. I should get all the available values for each brand, get their avg, and fill that brand's missing values. For example, I could get all the available New_Price values for Honda, take their average and use that number for other Honda cars whose New_Price is missing. * Try converting Engine, Power and New_Price to numbers. * I'll try scaling in the end. Although, I don't think it has much effect on xgboost. ``` df_full = pd.read_excel("../input/Data_Train.xlsx") df_test = pd.read_excel("../input/Data_Test.xlsx") df_full.head(10) df_full.shape ``` Mileage contains kmp/kg and kmpl, Engine contains CC, Power contains bhp and New_Price contains Lakh. By removing them I can convert them from 'object' to 'int'/'float'. ``` df_full.info() df_full.isnull().sum() ``` # 2. Data Preparation Let's first modify the 'Name' of the car and extract just the brand name. ``` df_full['Name'] = df_full.Name.str.split().str.get(0) df_test['Name'] = df_test.Name.str.split().str.get(0) df_full.head() df_full['Name'].value_counts().sum() ``` df_full.shape = (6019,13). So I guess all rows have been modified. Now I gotta modify 'Mileage', 'Power', 'Engine' and 'New_Price'. But first, I have to deal with missing values. # 2.1 Missing Values ``` # Get names of columns with missing values cols_with_missing = [col for col in df_full.columns if df_full[col].isnull().any()] print("Columns with missing values:") print(cols_with_missing) # Let's deal with them one by one. df_full['Seats'].fillna(df_full['Seats'].mean(),inplace=True) df_test['Seats'].fillna(df_test['Seats'].mean(),inplace=True) ``` NOTE: To get more accurate values, we need more data. So I'll combine df_train and df_test data. ``` data = pd.concat([df_full,df_test], sort=False) import matplotlib.pyplot as plt plt.figure(figsize=(20,5)) data['Mileage'].value_counts().head(100).plot.bar() plt.show() df_full['Mileage'] = df_full['Mileage'].fillna('17.0 kmpl') df_test['Mileage'] = df_test['Mileage'].fillna('17.0 kmpl') #I noticed the 14th entry (and others) have 0.0 kmpl. 
Let's replace that too. df_full['Mileage'] = df_full['Mileage'].replace("0.0 kmpl", "17.0 kmpl") df_test['Mileage'] = df_test['Mileage'].replace("0.0 kmpl", "17.0 kmpl") plt.figure(figsize=(20,5)) data['Engine'].value_counts().head(100).plot.bar() plt.show() df_full['Engine'] = df_full['Engine'].fillna('1197 CC') df_test['Engine'] = df_test['Engine'].fillna('1197 CC') plt.figure(figsize=(20,5)) data['Power'].value_counts().head(100).plot.bar() plt.show() df_full['Power'] = df_full['Power'].fillna('74 bhp') df_test['Power'] = df_test['Power'].fillna('74 bhp') #I noticed the 76th entry (and others) have null bhp. Let's replace that too. #This was creating problems during LabelEncoding. df_full['Power'] = df_full['Power'].replace("null bhp", "74 bhp") df_test['Power'] = df_test['Power'].replace("null bhp", "74 bhp") ``` Now let's deal with 'New_Price'. **Appoach 1:** Fill the missing values with the value which occurs the most. ``` plt.figure(figsize=(20,5)) data['New_Price'].value_counts().head(100).plot.bar() plt.show() # # I'll select 4.78 cuz the others are way too high. # df_full['New_Price'] = df_full['New_Price'].fillna('4.78 Lakh') # df_test['New_Price'] = df_test['New_Price'].fillna('4.78 Lakh') # # Run the method get_number() defined below first. # # Converting it to float. # df_full['New_Price'] = df_full['New_Price'].apply(get_number).astype('float') # df_test['New_Price'] = df_test['New_Price'].apply(get_number).astype('float') ``` **Approach 2:** Group by Brand names and get the mean of the available values for 'New_Price'. Use these to fill the missing values for the respective brands. First of all I'll have to convert it into numeric data (or else mean() won't work). For that, I'll have to first deal with missing values. So! Here's what we're gonna do: First fill it with 0.0 Lakh, convert the column into float, group and mean, then finally replace all 0.0 values with their respective values. Capiche? **NOTE: TURNS OUT THIS WAS A COMPLETE AND UTTER WASTE OF MY TIME AND EFFORT. </3** ``` # Method to extract 'float' from 'object' import re def get_number(name): title_search = re.search('([\d+\.+\d]+\W)', name) if title_search: return title_search.group(1) return "" ``` I got the code for the above step from [here](https://www.kaggle.com/funxexcel/titanic-basic-solution-with-logistic-regression) and modified it. ``` data['New_Price'] = data['New_Price'].fillna('0.0 Lakh') # dealt with missing values. data['New_Price'] = data['New_Price'].apply(get_number).astype('float') #converted to float total = data['New_Price'].groupby(data['Name']) print(total.mean().round(2)) ``` We got avg 'New_Price' values for more than half the brands. There are still 6 Brands whose values are not given. For that, another plan! First of all, deal with the brands that we have values for. After that, use a bar chart to get the value of the most occurring 'New_Price' value. Use that to fill the rest of them. ``` df_full['New_Price'] = df_full['New_Price'].fillna('0.0 Lakh') # dealt with missing values. 
df_full['New_Price'] = df_full['New_Price'].apply(get_number).astype('float') #converted to float

df_test['New_Price'] = df_test['New_Price'].fillna('0.0 Lakh') # dealt with missing values.
df_test['New_Price'] = df_test['New_Price'].apply(get_number).astype('float') #converted to float

# Average New_Price per brand from the groupby above, applied to both dataframes.
# (The original cell repeated one .loc line per brand and, for df_test, masked it with
#  df_full['Name'] by mistake; each dataframe has to be masked with its own 'Name' column
#  or the rows don't line up.)
brand_avg_new_price = {
    "Audi": 5.02, "BMW": 11.14, "Bentley": 1.88, "Datsun": 3.14, "Fiat": 0.95,
    "Ford": 1.16, "Honda": 1.30, "Hyundai": 1.03, "Isuzu": 16.84, "ISUZU": 16.84,
    "Jaguar": 8.52, "Jeep": 22.75, "Land": 4.39, "Mahindra": 1.20, "Maruti": 1.29,
    "Mercedes-Benz": 7.97, "Mini": 25.06, "Mitsubishi": 12.03, "Nissan": 1.89,
    "Porsche": 0.07, "Renault": 1.49, "Skoda": 3.63, "Tata": 2.00, "Toyota": 4.38,
    "Volksvagen": 1.53, "Volvo": 4.62,  # "Volksvagen" spelling kept as in the original notebook
}

for brand, price in brand_avg_new_price.items():
    df_full.loc[df_full['Name'] == brand, 'New_Price'] = df_full.loc[df_full['Name'] == brand, 'New_Price'].replace(0.0, price)
    df_test.loc[df_test['Name'] == brand, 'New_Price'] = df_test.loc[df_test['Name'] == brand, 'New_Price'].replace(0.0, price)
```

I must have filled most of the missing values. Now let's use a bar chart to get the most occurring values and fill the rest of them.
```
plt.figure(figsize=(20,5))
df_full['New_Price'].value_counts().head(100).plot.bar()
plt.show()

plt.figure(figsize=(20,5))
df_test['New_Price'].value_counts().head(100).plot.bar()
plt.show()

# Fill the remaining brands (no New_Price values available) with the most occurring value, 1.29.
# (Again, each dataframe is masked with its own 'Name' column.)
for brand in ["Ambassador", "Chevrolet", "Force", "Lamborghini", "OpelCorsa"]:
    df_full.loc[df_full['Name'] == brand, 'New_Price'] = df_full.loc[df_full['Name'] == brand, 'New_Price'].replace(0.0, 1.29)
    df_test.loc[df_test['Name'] == brand, 'New_Price'] = df_test.loc[df_test['Name'] == brand, 'New_Price'].replace(0.0, 1.29)

df_full.isnull().sum()
df_full.head(10)
df_full.info()
```

Now let's convert 'Mileage', 'Engine' and 'Power' into numbers.

```
#Using the above defined method get_number()
df_full['Mileage'] = df_full['Mileage'].apply(get_number).astype('float')
df_full['Engine'] = df_full['Engine'].apply(get_number).astype('int')
df_full['Power'] = df_full['Power'].apply(get_number).astype('float')

df_test['Mileage'] = df_test['Mileage'].apply(get_number).astype('float')
df_test['Engine'] = df_test['Engine'].apply(get_number).astype('int')
df_test['Power'] = df_test['Power'].apply(get_number).astype('float')

df_full.info()
help(re) # This baby was really helpful!
df_test.info()
df_full.head()
```

Looks good!!

# 2.2 Categorical Variables

```
from sklearn.model_selection import train_test_split

y = df_full.Price
X = df_full.drop(['Price'],axis=1)
# df_test = df_test.drop('New_Price',axis=1)

X_train, X_valid, y_train, y_valid = train_test_split(X,y,train_size=0.82,test_size=0.18,random_state=0)

from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()

# X_train[object_cols] = label_encoder.fit_transform(X_train[object_cols])
# X_valid[object_cols] = label_encoder.transform(X_valid[object_cols])
# df_test[object_cols] = label_encoder.fit_transform(df_test[object_cols])
# ValueError: bad input shape (4815, 5)
# That's why I did it manually.
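# (Added note, hedged): LabelEncoder only accepts a single 1-D column, which is what
# the "bad input shape" error above refers to. A column-by-column loop would be
# equivalent to the manual cells below, e.g.:
# for col in ['Name', 'Location', 'Fuel_Type', 'Transmission', 'Owner_Type']:
#     X_train[col] = label_encoder.fit_transform(X_train[col])
#     X_valid[col] = label_encoder.transform(X_valid[col])
#     df_test[col] = label_encoder.fit_transform(df_test[col])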
X_train['Name'] = label_encoder.fit_transform(X_train['Name']) X_valid['Name'] = label_encoder.transform(X_valid['Name']) df_test['Name'] = label_encoder.fit_transform(df_test['Name']) X_train['Location'] = label_encoder.fit_transform(X_train['Location']) X_valid['Location'] = label_encoder.transform(X_valid['Location']) df_test['Location'] = label_encoder.fit_transform(df_test['Location']) X_train['Fuel_Type'] = label_encoder.fit_transform(X_train['Fuel_Type']) X_valid['Fuel_Type'] = label_encoder.transform(X_valid['Fuel_Type']) df_test['Fuel_Type'] = label_encoder.fit_transform(df_test['Fuel_Type']) X_train['Transmission'] = label_encoder.fit_transform(X_train['Transmission']) X_valid['Transmission'] = label_encoder.transform(X_valid['Transmission']) df_test['Transmission'] = label_encoder.fit_transform(df_test['Transmission']) X_train['Owner_Type'] = label_encoder.fit_transform(X_train['Owner_Type']) X_valid['Owner_Type'] = label_encoder.transform(X_valid['Owner_Type']) df_test['Owner_Type'] = label_encoder.fit_transform(df_test['Owner_Type']) X_train.head() X_train.info() ``` Ah finally!! After 3 days! Quickly tried scaling too. Not a cool move. ``` # # Let's try scaling too. # from sklearn.preprocessing import StandardScaler # scaler = StandardScaler().fit(X_train) # rescaled_X_train = scaler.transform(X_train) # scaler = StandardScaler().fit(X_valid) # rescaled_X_valid = scaler.transform(X_valid) # scaler = StandardScaler().fit(df_test) # rescaled_df_test = scaler.transform(df_test) # from xgboost import XGBRegressor # from sklearn.metrics import mean_absolute_error,mean_squared_error,mean_squared_log_error # my_model = XGBRegressor(n_estimators=1000, learning_rate=0.05) # my_model.fit(rescaled_X_train, y_train, # early_stopping_rounds=5, # eval_set=[(rescaled_X_valid, y_valid)], # verbose=False) # predictions = my_model.predict(rescaled_X_valid) # print("MAE: " + str(mean_absolute_error(predictions, y_valid))) # print("MSE: " + str(mean_squared_error(predictions, y_valid))) # print("MSLE: " + str(mean_squared_log_error(predictions, y_valid))) # # MAE: 2.115451765105513 # # MSE: 17.56415019000094 # # MSLE: 0.058881434868999126 ``` # 3. Model I will use XGBRegressor to build the model and MAE to check the performance. I will also check out mean_squared_error and mean_squared_log_error. ``` from xgboost import XGBRegressor from sklearn.metrics import mean_absolute_error,mean_squared_error,mean_squared_log_error my_model = XGBRegressor(n_estimators=1000, learning_rate=0.05) my_model.fit(X_train, y_train, early_stopping_rounds=5, eval_set=[(X_valid, y_valid)], verbose=False) predictions = my_model.predict(X_valid) print("MAE: " + str(mean_absolute_error(predictions, y_valid))) print("MSE: " + str(mean_squared_error(predictions, y_valid))) print("MSLE: " + str(mean_squared_log_error(predictions, y_valid))) ``` # 4. Predictions ``` preds_test = my_model.predict(df_test) print(preds_test) # The Price is in the format xx.xx So let's round off and submit. preds_test = preds_test.round(2) print(preds_test) output = pd.DataFrame({'Price': preds_test}) output.to_excel('submission.xlsx', index=False) ``` # Notes * Treating 'Mileage' and the others as categorical variables was a mistake. Eg.: Mileage went up from 23.6 to around 338! Converting it to numbers fixed it. * LabelEncoder won't work if there are missing values. * ValueError: y contains previously unseen label 'Bentley'. Fixed it by increasing training_size in train_test_split. * Scaling all the columns made the model worse (as expected). 
* With 'New_Price' (33.36L) - MAE: 1.841521016220765, MSE: 14.468386600963221, MSLE: 0.05295155300850892
* With 'New_Price' (4.78L) - MAE: 1.9925125514537205, MSE: 15.974590365346188, MSLE: 0.0599331113483451
* Without 'New_Price' - MAE: 1.7999142406259514, MSE: 12.915820113678437, MSLE: 0.05128357937155652
* After manually modifying 'New_Price' - MAE: 1.8252445468636458 (Higher! Ugh!), MSE: 13.293730579850678, MSLE: 0.048714052000441106 (This is less though...)
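One more note on the "previously unseen label" error mentioned above: instead of enlarging the training split, the label encoders can be fit on all categories before splitting, so `transform()` never meets an unknown value. A hedged sketch of that alternative (it reuses `df_full`, `df_test`, `X_train` and `X_valid` as defined earlier and is not what this notebook actually ran):

```
from sklearn.preprocessing import LabelEncoder

cat_cols = ['Name', 'Location', 'Fuel_Type', 'Transmission', 'Owner_Type']
for col in cat_cols:
    le = LabelEncoder()
    # fit on every category present in the train and test files
    le.fit(pd.concat([df_full[col], df_test[col]], ignore_index=True))
    X_train[col] = le.transform(X_train[col])
    X_valid[col] = le.transform(X_valid[col])
    df_test[col] = le.transform(df_test[col])
```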
``` !git clone https://github.com/huggingface/transformers.git %cd transformers !pwd !git reset --hard 52f44dd !cp ./examples/token-classification/run_ner.py ../ %cd .. #!wget https://raw.githubusercontent.com/huggingface/transformers/master/examples/token-classification/run_ner.py !wget https://raw.githubusercontent.com/huggingface/transformers/master/examples/token-classification/utils_ner.py !wget https://raw.githubusercontent.com/huggingface/transformers/master/examples/token-classification/tasks.py !git clone https://github.com/huggingface/transformers %cd transformers !pip install . !pip install -r ./examples/requirements.txt %cd .. !pip install pyarrow --upgrade import transformers !mkdir data # aynı dizinde data klasöründe train,test,dev tsv dosyaları yükleniyor. # eğer text dosyaları varsa hazır, direk run_ner.py çalıştırılabilir. blankLineIndicator = "BlankLineIndicator" blank = "" firstColumnIndex = 0 secondColumnIndex = 1 !cp -r ./NCBI-disease/ ./data/ !mv ./data/NCBI-disease/devel.tsv ./data/NCBI-disease/dev.tsv !mv ./dev.tsv ./NCBI-disease/ !pwd #!unzip NERdata.zip -d data #!ls data/ train_dev_tsv = [] with open('./data/NCBI-disease/train_dev.tsv', 'r') as f: train_dev_pd = f.readlines() for row in train_dev_pd: row = row.split('\n')[0].split('\t') #Token Sütun İsmi if row: pass if row[firstColumnIndex] == '': train_dev_tsv.append(blank) else: #Token Sütun İsmi #print(row) train_dev_tsv.append(row[firstColumnIndex] + " " + row[secondColumnIndex]) train_dev_tsv[0] test_tsv = [] with open('./data/NCBI-disease/test.tsv', 'r') as f: test_pd = f.readlines() for row in test_pd: row = row.split('\n')[0].split('\t') #Token Sütun İsmi if row: pass if row[firstColumnIndex] == '': test_tsv.append(blank) else: #Token Sütun İsmi #print(row) test_tsv.append(row[firstColumnIndex] + " " + row[secondColumnIndex]) dev_tsv = [] with open('./data/NCBI-disease/dev.tsv', 'r') as f: test_pd = f.readlines() for row in test_pd: row = row.split('\n')[0].split('\t') #Token Sütun İsmi if row: pass if row[firstColumnIndex] == '': dev_tsv.append(blank) else: #Token Sütun İsmi #print(row) dev_tsv.append(row[firstColumnIndex] + " " + row[secondColumnIndex]) print(len(train_dev_pd)) print(len(train_dev_tsv)) train_dev_tsv[12288].split()[0] train_dev_pd[0], train_dev_tsv[0] l = [] for item in train_dev_tsv: try: item = item.split()[1] if item != 'B' and item != 'I' and item != 'O': print(item) l.append(item) except: pass l = set(l) print(l) with open('labels.txt', 'w') as f: for item in list(l): f.write(item + '\n') #!cut -f2 BC2GM/train.tsv | sort | uniq dev_tsv[0] def create_txt(file_name, lines): file = open(file_name, 'w') for line in lines: file.write(line + "\n") file.close() #create_txt("./data/train.txt",train_tsv) create_txt("data/train.txt", train_dev_tsv) create_txt("data/test.txt", test_tsv) create_txt("data/dev.txt", dev_tsv) # txt file'a çeviriyoruz # !cat data/NCBI-disease/train.tsv | tr "\t" " " | head -10 #labels.txt -> unique varlık ismi sınıflarının olduğu text dosyası OUTPUT_DIR = "electra-ner" !cd data !ls ./data !python3 run_ner.py --data_dir ./data/ \ --labels ./labels.txt \ --model_name_or_path enelpi/med-electra-small-discriminator \ --output_dir $OUTPUT_DIR \ --max_seq_length 128 \ --num_train_epochs 3 \ --per_device_train_batch_size 16 \ --overwrite_output_dir \ --save_steps 10000 \ --seed 41 \ --do_train \ --do_eval \ --do_predict import torch print(torch.__version__) print(torch.cuda.is_available()) !lshw -c video !nvcc --version !modinfo nvidia # 11/06/2020 20:45:35 - INFO - 
# __main__ - eval_accuracy_score = 0.9825284728742295
# 11/06/2020 20:45:35 - INFO - __main__ - eval_precision = 0.8032166508987701
# 11/06/2020 20:45:35 - INFO - __main__ - eval_recall = 0.884375
# 11/06/2020 20:45:35 - INFO - __main__ - eval_f1 = 0.8418443232523549

!nvidia-smi

# Seq length 256, distilbert-base-uncased
torch.version.cuda

!export CUDA_VISIBLE_DEVICES=0
```
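The preprocessing in this notebook boils down to turning each `token<TAB>label` row of the NCBI-disease TSV files into the `token label` lines that `run_ner.py` expects, with a blank line marking each sentence boundary. A condensed, hedged restatement of that step (same file paths as above; like the code above, it assumes every non-blank row has both columns):

```
def tsv_to_ner_txt(tsv_path, txt_path):
    # Convert a token<TAB>label TSV into the "token label" format used by run_ner.py.
    with open(tsv_path) as src, open(txt_path, "w") as dst:
        for raw in src:
            row = raw.rstrip("\n").split("\t")
            if row[0] == "":          # empty first column = sentence boundary
                dst.write("\n")
            else:
                dst.write(row[0] + " " + row[1] + "\n")

# tsv_to_ner_txt("./data/NCBI-disease/train_dev.tsv", "data/train.txt")
# tsv_to_ner_txt("./data/NCBI-disease/test.tsv", "data/test.txt")
# tsv_to_ner_txt("./data/NCBI-disease/dev.tsv", "data/dev.txt")
```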
# Artificial Intelligence Nanodegree ## Convolutional Neural Networks --- In this notebook, we train a CNN on augmented images from the CIFAR-10 database. ### 1. Load CIFAR-10 Database ``` import keras from keras.datasets import cifar10 # load the pre-shuffled train and test data (x_train, y_train), (x_test, y_test) = cifar10.load_data() ``` ### 2. Visualize the First 24 Training Images ``` import numpy as np import matplotlib.pyplot as plt %matplotlib inline fig = plt.figure(figsize=(20,5)) for i in range(36): ax = fig.add_subplot(3, 12, i + 1, xticks=[], yticks=[]) ax.imshow(np.squeeze(x_train[i])) ``` ### 3. Rescale the Images by Dividing Every Pixel in Every Image by 255 ``` # rescale [0,255] --> [0,1] x_train = x_train.astype('float32')/255 x_test = x_test.astype('float32')/255 ``` ### 4. Break Dataset into Training, Testing, and Validation Sets ``` from keras.utils import np_utils # break training set into training and validation sets (x_train, x_valid) = x_train[5000:], x_train[:5000] (y_train, y_valid) = y_train[5000:], y_train[:5000] # one-hot encode the labels num_classes = len(np.unique(y_train)) y_train = keras.utils.to_categorical(y_train, num_classes) y_test = keras.utils.to_categorical(y_test, num_classes) y_valid = keras.utils.to_categorical(y_valid, num_classes) # print shape of training set print('x_train shape:', x_train.shape) # print number of training, validation, and test images print(x_train.shape[0], 'train samples') print(x_test.shape[0], 'test samples') print(x_valid.shape[0], 'validation samples') ``` ### 5. Create and Configure Augmented Image Generator ``` from keras.preprocessing.image import ImageDataGenerator datagen_train = ImageDataGenerator(width_shift_range=.1, height_shift_range=.1, horizontal_flip=True) datagen_train.fit(x_train) ``` ### 6. Visualize Original and Augmented Images ``` fig = plt.figure(figsize=(20,2)) fig.suptitle('Actual images', fontsize=20) for i in range(10): ax = fig.add_subplot(1, 10, i+1, xticks=[], yticks=[]) ax.imshow(np.squeeze(x_train[i])) fig = plt.figure(figsize=(20,2)) fig.suptitle('Augmented images') for x_batch in datagen_train.flow(x_train[:10]): for i in range(10): ax = fig.add_subplot(1, 10, i+1, xticks=[], yticks=[]) ax.imshow(x_batch[i]) break; ``` ### 7. Define the Model Architecture ``` from keras.models import Sequential from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout model = Sequential() model.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu', input_shape=(32, 32, 3))) model.add(MaxPooling2D(pool_size=2)) model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu')) model.add(MaxPooling2D(pool_size=2)) model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu')) model.add(MaxPooling2D(pool_size=2)) model.add(Dropout(0.3)) model.add(Flatten()) model.add(Dense(500, activation='relu')) model.add(Dropout(0.4)) model.add(Dense(10, activation='softmax')) model.summary() ``` ### 8. Compile the Model ``` # compile the model model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy']) ``` ### 9. 
Train the Model ``` from keras.callbacks import ModelCheckpoint batch_size = 32 epochs = 100 # train the model checkpointer = ModelCheckpoint(filepath='aug_model.weights.best.mine.hdf5', verbose=1, save_best_only=True) model.fit_generator(datagen_train.flow(x_train, y_train, batch_size=batch_size), steps_per_epoch=x_train.shape[0] // batch_size, epochs=epochs, verbose=2, callbacks=[checkpointer], validation_data=(x_valid, y_valid), validation_steps=x_valid.shape[0] // batch_size) ``` ### 10. Load the Model with the Best Validation Accuracy ``` # load the weights that yielded the best validation accuracy model.load_weights('aug_model.weights.best.mine.hdf5') ``` ### 11. Calculate Classification Accuracy on Test Set ``` # evaluate and print test accuracy score = model.evaluate(x_test, y_test, verbose=0) print('\n', 'Test accuracy:', score[1]) ```
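As a final sanity check, it can help to look at a few individual predictions rather than only the aggregate test accuracy. A short, hedged sketch (the class-name list below is the standard CIFAR-10 label order; it is not defined anywhere in this notebook):

```
cifar10_classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                   'dog', 'frog', 'horse', 'ship', 'truck']

# predict class probabilities for a handful of test images
probs = model.predict(x_test[:8])
for i in range(8):
    pred = cifar10_classes[np.argmax(probs[i])]
    true = cifar10_classes[np.argmax(y_test[i])]
    print('image', i, '- predicted:', pred, '- true:', true)
```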
``` import os, json, sys, time, random import numpy as np import torch from easydict import EasyDict from math import floor from easydict import EasyDict from steves_utils.vanilla_train_eval_test_jig import Vanilla_Train_Eval_Test_Jig from steves_utils.torch_utils import get_dataset_metrics, independent_accuracy_assesment from steves_models.configurable_vanilla import Configurable_Vanilla from steves_utils.torch_sequential_builder import build_sequential from steves_utils.lazy_map import Lazy_Map from steves_utils.sequence_aggregator import Sequence_Aggregator from steves_utils.stratified_dataset.traditional_accessor import Traditional_Accessor_Factory from steves_utils.cnn_do_report import ( get_loss_curve, get_results_table, get_parameters_table, get_domain_accuracies, ) from steves_utils.torch_utils import ( confusion_by_domain_over_dataloader, independent_accuracy_assesment ) from steves_utils.utils_v2 import ( per_domain_accuracy_from_confusion, get_datasets_base_path ) # from steves_utils.ptn_do_report import TBD required_parameters = { "experiment_name", "lr", "device", "dataset_seed", "seed", "labels", "domains_target", "domains_source", "num_examples_per_domain_per_label_source", "num_examples_per_domain_per_label_target", "batch_size", "n_epoch", "patience", "criteria_for_best", "normalize_source", "normalize_target", "x_net", "NUM_LOGS_PER_EPOCH", "BEST_MODEL_PATH", "pickle_name_source", "pickle_name_target", "torch_default_dtype", } from steves_utils.ORACLE.utils_v2 import ( ALL_SERIAL_NUMBERS, ALL_DISTANCES_FEET_NARROWED, ) standalone_parameters = {} standalone_parameters["experiment_name"] = "MANUAL CORES CNN" standalone_parameters["lr"] = 0.0001 standalone_parameters["device"] = "cuda" standalone_parameters["dataset_seed"] = 1337 standalone_parameters["seed"] = 1337 standalone_parameters["labels"] = ALL_SERIAL_NUMBERS standalone_parameters["domains_source"] = [8,32,50] standalone_parameters["domains_target"] = [14,20,26,38,44,] standalone_parameters["num_examples_per_domain_per_label_source"]=-1 standalone_parameters["num_examples_per_domain_per_label_target"]=-1 standalone_parameters["pickle_name_source"] = "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl" standalone_parameters["pickle_name_target"] = "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl" standalone_parameters["torch_default_dtype"] = "torch.float32" standalone_parameters["batch_size"]=128 standalone_parameters["n_epoch"] = 3 standalone_parameters["patience"] = 10 standalone_parameters["criteria_for_best"] = "target_accuracy" standalone_parameters["normalize_source"] = False standalone_parameters["normalize_target"] = False standalone_parameters["x_net"] = [ {"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}}, {"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":256}}, {"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features":80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features":256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 
len(standalone_parameters["labels"])}}, ] standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10 standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth" # Parameters parameters = { "experiment_name": "cnn_1:oracle.run1.framed", "labels": [ "3123D52", "3123D65", "3123D79", "3123D80", "3123D54", "3123D70", "3123D7B", "3123D89", "3123D58", "3123D76", "3123D7D", "3123EFE", "3123D64", "3123D78", "3123D7E", "3124E4A", ], "domains_source": [8, 32, 50], "domains_target": [14, 20, 26, 38, 44], "pickle_name_source": "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl", "pickle_name_target": "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl", "device": "cuda", "lr": 0.0001, "batch_size": 128, "normalize_source": False, "normalize_target": False, "num_examples_per_domain_per_label_source": -1, "num_examples_per_domain_per_label_target": -1, "torch_default_dtype": "torch.float32", "n_epoch": 50, "patience": 3, "criteria_for_best": "target_accuracy", "x_net": [ {"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}}, { "class": "Conv2d", "kargs": { "in_channels": 1, "out_channels": 256, "kernel_size": [1, 7], "bias": False, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 256}}, { "class": "Conv2d", "kargs": { "in_channels": 256, "out_channels": 80, "kernel_size": [2, 7], "bias": True, "padding": [0, 3], }, }, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm2d", "kargs": {"num_features": 80}}, {"class": "Flatten", "kargs": {}}, {"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}}, {"class": "ReLU", "kargs": {"inplace": True}}, {"class": "BatchNorm1d", "kargs": {"num_features": 256}}, {"class": "Linear", "kargs": {"in_features": 256, "out_features": 16}}, ], "NUM_LOGS_PER_EPOCH": 10, "BEST_MODEL_PATH": "./best_model.pth", "dataset_seed": 1337, "seed": 1337, } # Set this to True if you want to run this template directly STANDALONE = False if STANDALONE: print("parameters not injected, running with standalone_parameters") parameters = standalone_parameters if not 'parameters' in locals() and not 'parameters' in globals(): raise Exception("Parameter injection failed") #Use an easy dict for all the parameters p = EasyDict(parameters) supplied_keys = set(p.keys()) if supplied_keys != required_parameters: print("Parameters are incorrect") if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters)) if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys)) raise RuntimeError("Parameters are incorrect") ################################### # Set the RNGs and make it all deterministic ################################### np.random.seed(p.seed) random.seed(p.seed) torch.manual_seed(p.seed) torch.use_deterministic_algorithms(True) torch.set_default_dtype(eval(p.torch_default_dtype)) ################################### # Build the network(s) # Note: It's critical to do this AFTER setting the RNG ################################### x_net = build_sequential(p.x_net) start_time_secs = time.time() def wrap_in_dataloader(p, ds): return torch.utils.data.DataLoader( ds, batch_size=p.batch_size, shuffle=True, num_workers=1, persistent_workers=True, prefetch_factor=50, pin_memory=True ) taf_source = Traditional_Accessor_Factory( labels=p.labels, domains=p.domains_source, num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source, 
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name_source), seed=p.dataset_seed ) train_original_source, val_original_source, test_original_source = \ taf_source.get_train(), taf_source.get_val(), taf_source.get_test() taf_target = Traditional_Accessor_Factory( labels=p.labels, domains=p.domains_target, num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source, pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name_target), seed=p.dataset_seed ) train_original_target, val_original_target, test_original_target = \ taf_target.get_train(), taf_target.get_val(), taf_target.get_test() # For CNN We only use X and Y. And we only train on the source. # Properly form the data using a transform lambda and Lazy_Map. Finally wrap them in a dataloader transform_lambda = lambda ex: ex[:2] # Strip the tuple to just (x,y) train_processed_source = wrap_in_dataloader( p, Lazy_Map(train_original_source, transform_lambda) ) val_processed_source = wrap_in_dataloader( p, Lazy_Map(val_original_source, transform_lambda) ) test_processed_source = wrap_in_dataloader( p, Lazy_Map(test_original_source, transform_lambda) ) train_processed_target = wrap_in_dataloader( p, Lazy_Map(train_original_target, transform_lambda) ) val_processed_target = wrap_in_dataloader( p, Lazy_Map(val_original_target, transform_lambda) ) test_processed_target = wrap_in_dataloader( p, Lazy_Map(test_original_target, transform_lambda) ) datasets = EasyDict({ "source": { "original": {"train":train_original_source, "val":val_original_source, "test":test_original_source}, "processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source} }, "target": { "original": {"train":train_original_target, "val":val_original_target, "test":test_original_target}, "processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target} }, }) ep = next(iter(test_processed_target)) ep[0].dtype model = Configurable_Vanilla( x_net=x_net, label_loss_object=torch.nn.NLLLoss(), learning_rate=p.lr ) jig = Vanilla_Train_Eval_Test_Jig( model=model, path_to_best_model=p.BEST_MODEL_PATH, device=p.device, label_loss_object=torch.nn.NLLLoss(), ) jig.train( train_iterable=datasets.source.processed.train, source_val_iterable=datasets.source.processed.val, target_val_iterable=datasets.target.processed.val, patience=p.patience, num_epochs=p.n_epoch, num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH, criteria_for_best=p.criteria_for_best ) total_experiment_time_secs = time.time() - start_time_secs source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test) target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test) source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val) target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val) history = jig.get_history() total_epochs_trained = len(history["epoch_indices"]) val_dl = wrap_in_dataloader(p, Sequence_Aggregator((datasets.source.original.val, datasets.target.original.val))) confusion = confusion_by_domain_over_dataloader(model, p.device, val_dl, forward_uses_domain=False) per_domain_accuracy = per_domain_accuracy_from_confusion(confusion) # Add a key to per_domain_accuracy for if it was a source domain for domain, accuracy in per_domain_accuracy.items(): per_domain_accuracy[domain] = { "accuracy": accuracy, "source?": domain in p.domains_source } # Do an independent accuracy assesment JUST TO BE SURE! 
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device) # _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device) # _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device) # _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device) # assert(_source_test_label_accuracy == source_test_label_accuracy) # assert(_target_test_label_accuracy == target_test_label_accuracy) # assert(_source_val_label_accuracy == source_val_label_accuracy) # assert(_target_val_label_accuracy == target_val_label_accuracy) ################################### # Write out the results ################################### experiment = { "experiment_name": p.experiment_name, "parameters": p, "results": { "source_test_label_accuracy": source_test_label_accuracy, "source_test_label_loss": source_test_label_loss, "target_test_label_accuracy": target_test_label_accuracy, "target_test_label_loss": target_test_label_loss, "source_val_label_accuracy": source_val_label_accuracy, "source_val_label_loss": source_val_label_loss, "target_val_label_accuracy": target_val_label_accuracy, "target_val_label_loss": target_val_label_loss, "total_epochs_trained": total_epochs_trained, "total_experiment_time_secs": total_experiment_time_secs, "confusion": confusion, "per_domain_accuracy": per_domain_accuracy, }, "history": history, "dataset_metrics": get_dataset_metrics(datasets, "cnn"), } get_loss_curve(experiment) get_results_table(experiment) get_domain_accuracies(experiment) print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"]) print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"]) json.dumps(experiment) ```
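Since the notebook ends by serializing the experiment record with `json.dumps`, a natural follow-up is to write it to disk so separate runs can be compared later. A minimal, hedged sketch (the filename is an assumption; `default=str` guards any values the `json` module cannot encode directly):

```
with open("experiment_results.json", "w") as f:
    json.dump(experiment, f, indent=2, default=str)
```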
# Introducing Scikit-Learn There are several Python libraries which provide solid implementations of a range of machine learning algorithms. One of the best known is [Scikit-Learn](http://scikit-learn.org), a package that provides efficient versions of a large number of common algorithms. Scikit-Learn is characterized by a clean, uniform, and streamlined API, as well as by very useful and complete [online documentation](https://scikit-learn.org/stable/documentation.html). A benefit of this uniformity is that once you understand the basic use and syntax of Scikit-Learn for one type of model, switching to a new model or algorithm is very straightforward. This section provides an overview of the Scikit-Learn API. We will start by covering *data representation* in Scikit-Learn, followed by covering the *Estimator* API, and finally go through a couple examples. ## Data Representation in Scikit-Learn Machine learning is about creating models from data: for that reason, we'll start by discussing how data can be represented in order to be understood by the computer. The best way to think about data within Scikit-Learn is in terms of tables of data. ### Data as table A basic table is a two-dimensional grid of data, in which the rows represent individual elements of the dataset, and the columns represent quantities related to each of these elements. For example, consider the [Iris dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set), famously analyzed by Ronald Fisher in 1936. We can download this dataset in the form of a Pandas ``DataFrame`` using the [seaborn](http://seaborn.pydata.org/) library: ``` import pandas as pd import numpy as np from IPython.display import Pretty as disp hint = 'https://raw.githubusercontent.com/soltaniehha/Business-Analytics/master/docs/hints/' # path to hints on GitHub import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline sns.set(rc={'figure.figsize':(10,8)}) # Figure size iris = sns.load_dataset('iris') iris.head() ``` <img src="https://github.com/soltaniehha/Business-Analytics/blob/master/figs/11-01-Petal-sepal.jpg?raw=true" width="300" align="center"/> ``` iris.species.unique() ``` Here each row of the data refers to a single observed flower, and the number of rows is the total number of flowers in the dataset. In general, we will refer to the rows of the matrix as *samples*, and the number of rows as ``n_samples``. Likewise, each column of the data refers to a particular quantitative piece of information that describes each sample. In general, we will refer to the columns of the matrix as *features*, and the number of columns as ``n_features``. #### Features matrix This table layout makes clear that the information can be thought of as a two-dimensional numerical array or matrix, which we will call the *features matrix*. By convention, this features matrix is often stored in a variable named ``X``. The features matrix is assumed to be two-dimensional, with shape ``[n_samples, n_features]``, and is most often contained in a NumPy array or a Pandas ``DataFrame``. The samples (i.e., rows) always refer to the individual objects described by the dataset. For example, the sample might be a flower, a person, a document, an image, a sound file, a video, an astronomical object, or anything else you can describe with a set of quantitative measurements. The features (i.e., columns) always refer to the distinct observations that describe each sample in a quantitative manner. 
Features are generally real-valued, but may be Boolean or discrete-valued in some cases. #### Target array In addition to the feature matrix ``X``, we also generally work with a *label* or *target* array, which by convention we will usually call ``y``. The target array is usually one dimensional, with length ``n_samples``, and is generally contained in a NumPy array or Pandas ``Series``. The target array may have continuous numerical values, or discrete classes/labels. Often one point of confusion is how the target array differs from the other features columns. The distinguishing feature of the target array is that it is usually the quantity we want to *predict from the data*: in statistical terms, it is the dependent variable. For example, in the preceding data we may wish to construct a model that can predict the species of flower based on the other measurements; in this case, the ``species`` column would be considered the target array. With this target array in mind, we can use Seaborn to conveniently visualize the data: ``` sns.pairplot(iris, hue='species', height=2.5); ``` For use in Scikit-Learn, we will extract the features matrix and target array from the ``DataFrame``, which we can do using some of the Pandas ``DataFrame`` operations we've learned: ``` X_iris = iris.drop('species', axis=1) X_iris.shape y_iris = iris['species'] y_iris.shape ``` To summarize, the expected layout of features and target values is visualized in the following diagram: <img src="https://github.com/soltaniehha/Business-Analytics/blob/master/figs/11-01-samples-features.png?raw=true" width="700" align="center"/> With this data properly formatted, we can move on to consider the *estimator* API of Scikit-Learn: ## Scikit-Learn's Estimator API The Scikit-Learn API is designed with the following guiding principles in mind, as outlined in the [Scikit-Learn API paper(2013)](http://arxiv.org/abs/1309.0238): - *Consistency*: All objects share a common interface drawn from a limited set of methods, with consistent documentation. - *Inspection*: All specified parameter values are exposed as public attributes. - *Limited object hierarchy*: Only algorithms are represented by Python classes; datasets are represented in standard formats (NumPy arrays, Pandas ``DataFrame``s) and parameter names use standard Python strings. - *Composition*: Many machine learning tasks can be expressed as sequences of more fundamental algorithms, and Scikit-Learn makes use of this wherever possible. - *Sensible defaults*: When models require user-specified parameters, the library defines an appropriate default value. In practice, these principles make Scikit-Learn very easy to use, once the basic principles are understood. Every machine learning algorithm in Scikit-Learn is implemented via the Estimator API, which provides a consistent interface for a wide range of machine learning applications. ### Basics of the API Most commonly, the steps in using the Scikit-Learn estimator API are as follows (we will step through a couple of detailed examples in the sections that follow). 1. Choose a class of model by importing the appropriate estimator class from Scikit-Learn. 2. Choose model hyperparameters by instantiating this class with desired values. 3. Arrange data into a features matrix and target vector following the discussion above. 4. Fit the model to your data by calling the ``fit()`` method of the model instance. 5. 
Apply the Model to new data: - For supervised learning, often we predict labels for unknown data using the ``predict()`` method. - For unsupervised learning, we often transform or infer properties of the data using the ``transform()`` or ``predict()`` method. We will now step through simple examples of applying supervised learning methods. ### Supervised learning example: Simple linear regression As an example of this process, let's consider a simple linear regression—that is, the common case of fitting a line to $(x, y)$ data. We will use the following columns from `iris` for our regression example: `petal_width` & `petal_length` ``` X = iris[['petal_width']] y = iris[['petal_length']] plt.scatter(X, y); ``` With this data in place, we can use the recipe outlined earlier. Let's walk through the process: #### 1. Choose a class of model In Scikit-Learn, every class of model is represented by a Python class. So, for example, if we would like to compute a simple linear regression model, we can import the linear regression class: ``` from sklearn.linear_model import LinearRegression ``` Note that other more general linear regression models exist as well; you can read more about them in the [``sklearn.linear_model`` module documentation](http://Scikit-Learn.org/stable/modules/linear_model.html). #### 2. Choose model hyperparameters An important point is that *a class of model is not the same as an instance of a model*. Once we have decided on our model class, there are still some options open to us. Depending on the model class we are working with, we might need to answer one or more questions like the following: - Would we like to fit for the offset (i.e., *y*-intercept)? - Would we like the model to be normalized? - Would we like to preprocess our features to add model flexibility? - What degree of regularization would we like to use in our model? - How many model components would we like to use? These are examples of the important choices that must be made *once the model class is selected*. These choices are often represented as *hyperparameters*, or parameters that must be set before the model is fit to data. In Scikit-Learn, hyperparameters are chosen by passing values at model instantiation. For our linear regression example, we can instantiate the ``LinearRegression`` class and specify that we would like to fit the intercept using the ``fit_intercept`` hyperparameter: ``` model = LinearRegression(fit_intercept=True) model ``` Keep in mind that when the model is instantiated, the only action is the storing of these hyperparameter values. In particular, we have not yet applied the model to any data: the Scikit-Learn API makes very clear the distinction between *choice of model* and *application of model to data*. #### 3. Arrange data into a features matrix and target vector Previously we detailed the Scikit-Learn data representation, which requires a two-dimensional features matrix and a one-dimensional target array. Here our target variable ``y`` is already in the correct form (a length-``n_samples`` array). Our features matrix is also in the right shape since we only have 1 feature it is a matrix of size ``[n_samples, n_features]``. Let's check the shapes: ``` print(X.shape) print(y.shape) ``` #### 4. Fit the model to your data Now it is time to apply our model to data. 
This can be done with the ``fit()`` method of the model: ``` model.fit(X, y) ``` This ``fit()`` command causes a number of model-dependent internal computations to take place, and the results of these computations are stored in model-specific attributes that the user can explore. In Scikit-Learn, by convention all model parameters that were learned during the ``fit()`` process have trailing underscores; for example in this linear model, we have the following: ``` model.coef_ model.intercept_ ``` These two parameters represent the slope and intercept of the simple linear fit to the data. Comparing to the data definition, we see that they are very close to the input slope of 2.2 and intercept of 1. One question that frequently comes up regards the uncertainty in such internal model parameters. In general, Scikit-Learn does not provide tools to draw conclusions from internal model parameters themselves: interpreting model parameters is much more a *statistical modeling* question than a *machine learning* question. Machine learning rather focuses on what the model *predicts*. If you would like to dive into the meaning of fit parameters within the model, other tools are available, including the [Statsmodels Python package](http://statsmodels.sourceforge.net/). #### 5. Predict labels for unknown data Once the model is trained, the main task of supervised machine learning is to evaluate it based on what it says about new data that was not part of the training set. In Scikit-Learn, this can be done using the ``predict()`` method. For the sake of this example, our "new data" will be a grid of `x` values, and we will ask what `y` values the model predicts: ``` xfit = np.linspace(0, 2.5) xfit = pd.DataFrame(xfit) xfit.shape ``` We have coerced these *x* values into a ``[n_samples, n_features]`` features matrix, after which we can feed it to the model: ``` yfit = model.predict(xfit) ``` Finally, let's visualize the results by plotting first the raw data, and then this model fit: ``` plt.scatter(X, y) plt.plot(xfit, yfit, c='gray') plt.xlabel('petal_width') plt.ylabel('petal_length'); ``` Typically the efficacy of the model is evaluated by comparing its results to some known baseline, as we will see in the next example ### Supervised learning example: Iris classification Let's take a look at another example of this process, using the Iris dataset we discussed earlier. Our question will be this: given a model trained on a portion of the Iris data, how well can we predict the remaining labels? For this task, we will use an extremely simple generative model known as Gaussian naive Bayes, which proceeds by assuming each class is drawn from an axis-aligned Gaussian distribution. Because it is so fast and has no hyperparameters to choose, Gaussian naive Bayes is often a good model to use as a baseline classification, before exploring whether improvements can be found through more sophisticated models. We would like to evaluate the model on data it has not seen before, and so we will split the data into a *training set* and a *testing set*. This could be done by hand, but it is more convenient to use the ``train_test_split`` utility function: ``` from sklearn.model_selection import train_test_split Xtrain, Xtest, ytrain, ytest = train_test_split(X_iris, y_iris, test_size=0.3, random_state=833) ``` With the data arranged, we can follow our recipe to predict the labels: ``` from sklearn.naive_bayes import GaussianNB # 1. choose model class model = GaussianNB() # 2. 
instantiate model model.fit(Xtrain, ytrain) # 3. fit model to data y_model = model.predict(Xtest) # 4. predict on new data ``` Finally, we can use the ``accuracy_score`` utility to see the fraction of predicted labels that match their true value: ``` from sklearn.metrics import accuracy_score accuracy_score(ytest, y_model) ``` With an accuracy topping 93%, we see that even this very naive classification algorithm is effective for this particular dataset! To learn more about Gaussian naive Bayes check out [this YouTube video](https://www.youtube.com/watch?v=r1in0YNetG8). # Your Turn For this exercise we are going to make some predictions using Telco customer churn data. Our goal is to make a simple model that can predict whether a customer will churn or not based on the historical data. We will follow the steps above. But first, let's load the data: ``` df = pd.read_csv('https://raw.githubusercontent.com/soltaniehha/Business-Analytics/master/data/Telco-Customer-Churn.csv') df.head(3) ``` Column `TotalCharges` has 11 rows with an empty string (" "). Replace these values by `0` as they represent new customers that haven't received a bill yet. Once you replaced the values, convert that column to a `float32` data type. ``` # Your answer goes here # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-TotalCharges') ``` If we check the `df.info()` now we should see that `TotalCharges` is now a `float32`: ``` df.info() ``` Before we get to splitting our dataset into train/test we would have to make sure all of our values are numerical as most of ML algorithms work with numerical values only. ### How to Convert Categorical Data to Numerical Data? In order to do this we have to convert all of the categorical variables to numerical values. This can be done with a process called one-hot encoding. Take the "Churn" column as an example. We have two unique values: "Yes"/"No". One-hot encoding will create two variables, one called `Churn_Yes` and the other one `Churn_No`. We will go from | Churn | |--| |Yes| |No| |Yes| |...| to |Churn_Yes | Churn_No | |--|--| |1|0| |0|1| |1|0| |...|...| As you can see, having two variables is redundant since they mirror each other. So, in any one-hot encoding scenario, we need `n-1` variables for a categorical variable that had `n` categories. Below, we will use a *pandas* function called `get_dummies()`. If we want *pandas* to automcatically drop one of the extra variables for us we use the `drop_first=True` argument. ``` churn_df = df.drop('customerID', axis=1) # dropping customerID as it doesn't have any predictive power df_dummified = pd.get_dummies(churn_df, drop_first=True) # One-hot encoding df_dummified.rename(columns={'Churn_Yes': 'Churn'}, inplace=True) # renaming Churn_Yes to Churn df_dummified.head() ``` Using `df_dummified` dataframe, create two dataframes for features matrix and target vector and call them `X_df` and `y_df` respectively: ``` # Your answer goes here # X_df # Your answer goes here # y_df # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-X_df-y_df') ``` Check their shape: ``` # Your answer goes here # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-shape') ``` From `X_df` and `y_df` create the train/test splits. * Set 30% of the data to test and the remainder to train. * Use `random_state=833`, so you get the same result as in the notebook. * Name the resulting objects: `Xtrain`, `Xtest`, `ytrain`, `ytest`. 
``` # Your answer goes here # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-split') ``` From `sklearn.naive_bayes` import `GaussianNB`: ``` # Your answer goes here # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-import') ``` Instantiate a `GaussianNB` model and call it `model`: ``` # Your answer goes here # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-model') ``` Fit model to data: ``` # Your answer goes here # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-fit') ``` Now let's make some predictions. Use the `Xtest` feature dataframe to predict whether these customers are churning or not. Call the outcome (predictions) `y_model`: ``` # Your answer goes here # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-predict') ``` Accuracy is not the most reliable metric when it comes to evaluating classification algorithms, but it's one of the most simple ones. Let's calculate it below: ``` # Your answer goes here # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-accuracy') ``` ### Is this model any good? If we had said no one will churn, what the accuracy would have been? We would have identified all the ones who didn't churn correctly, but missed all the ones who did actually churn. Let's calculate the accuracy for this simplistic model below: hint: all you need to work with is `ytest` ``` # Your answer goes here # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-accuracy-base') ``` So, after all our ML model is not that great. But the scope of this exercise is not to fine-tune this model, but understand the pipeline and how to read the outcome. Let's continue with some simple questions. How many people did we have in the test dataset? Save it to a variable called `n_test`: ``` # Your answer goes here # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-test-size') ``` How many of these customers churned? Save it to a variable and call it `P`: ``` # Your answer goes here # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-churned') ``` How many true positives did the model generate? (True positive = correctly identified in the "positive" class. Take "Yes" as the positive class). Save it to a variable and call it `TP`: ``` # Your answer goes here # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-TP') ``` What is the true positive rate (or sensitivity)? **Sensitivity** (also called the **true positive rate (TPR)**, the recall, or probability of detection in some fields) measures the proportion of actual positives that are correctly identified as such (e.g., the percentage of sick people who are correctly identified as having the condition). ~Wikipedia $TPR=\frac{TP}{P}$, where $TP$ is ture positives and $P$ is count of all positives. ``` # Your answer goes here # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-sensitivity') ``` How many of the customers in the test set didn't churn? Save it to `N`: ``` # Your answer goes here # SOLUTION: Uncomment and execute the cell below to get help #disp(hint + '09-02-N') ``` How many true negatives did the model generate? (True negative = correctly rejected). 
Save it to a variable and call it `TN`:

```
# Your answer goes here

# SOLUTION: Uncomment and execute the cell below to get help
#disp(hint + '09-02-TN')
```

What is the true negative rate (or specificity)?

**Specificity** (also called the **true negative rate**) measures the proportion of actual negatives that are correctly identified as such (e.g., the percentage of healthy people who are correctly identified as not having the condition). ~Wikipedia

$TNR=\frac{TN}{N}$, where $TN$ is the number of true negatives and $N$ is the count of all negatives.

```
# Your answer goes here

# SOLUTION: Uncomment and execute the cell below to get help
#disp(hint + '09-02-specificity')
```

You may check out [this Wikipedia page](https://en.wikipedia.org/wiki/Sensitivity_and_specificity) for more info on sensitivity and specificity.

<img src="https://github.com/soltaniehha/Business-Analytics/blob/master/figs/09-02-sensitivity.png?raw=true" width="400" align="center"/>
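For reference, sensitivity and specificity can both be read off a confusion matrix in one go. The sketch below is not part of the graded solution; it assumes the `ytest` and `y_model` objects created in the exercise above, with churn ("Yes") encoded as `1` after one-hot encoding:

```
# A compact way to get TPR and TNR from the test labels and predictions.
from sklearn.metrics import confusion_matrix

tn, fp, fn, tp = confusion_matrix(ytest, y_model, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)   # TPR = TP / P
specificity = tn / (tn + fp)   # TNR = TN / N
print(f"Sensitivity (TPR): {sensitivity:.3f}")
print(f"Specificity (TNR): {specificity:.3f}")
```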
```
# Import Dependencies
import numpy as np
import pandas as pd
from pathlib import Path

# Processing Libraries
from sklearn.preprocessing import StandardScaler, LabelEncoder

# Models
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
```

### Predictions

I believe the Logistic Regression model will perform better than the Random Forest Classifier as the data is not randomized. How about after scaling the data? I expect the Logistic Regression to perform better on the scaled data. In this particular case, the Logistic Regression is the appropriate regression analysis to conduct as the dependent variable is binary.

### Conclusion

As expected, the Logistic Regression performed better than the Random Forest Classifier.

```
# Retrieving the datasets
train_df = pd.read_csv("Resources/2019loans.csv")
test_df = pd.read_csv("Resources/2020Q1loans.csv")

# Processing the Data
X_train_dummies = pd.get_dummies(train_df)
print(X_train_dummies.columns)
X_train_dummies

train_df

test_df

# Convert categorical data to numeric and separate target feature for training data
X_train = train_df.drop(["loan_status"], axis=1)
X_train = pd.get_dummies(X_train)
y_train = LabelEncoder().fit_transform(train_df["loan_status"])
print(X_train.columns)
y_train

# Convert categorical data to numeric and separate target feature for testing data
X_test = test_df.drop(["loan_status"], axis=1)
X_test = pd.get_dummies(X_test)
y_test = LabelEncoder().fit_transform(test_df["loan_status"])
print(X_test.columns)
y_test

# add missing dummy variables to testing set
X_test["debt_settlement_flag_Y"] = X_test.apply(lambda row: round(abs(row["debt_settlement_flag_N"] - 1), 0), axis=1)
X_test = X_test.convert_dtypes()
X_test

# Train the Logistic Regression model on the unscaled data and print the model score
classifier = LogisticRegression(max_iter=10000)
classifier.fit(X_train, y_train)
train_score = classifier.score(X_train, y_train)
test_score = classifier.score(X_test, y_test)
print(f"Train Score: {train_score:.3f}")
print(f"Test Score: {test_score:.3f}")

# Train a Random Forest Classifier model and print the model score
rfc = RandomForestClassifier(random_state=42)
rfc.fit(X_train, y_train)
train_score = rfc.score(X_train, y_train)
test_score = rfc.score(X_test, y_test)
print(f"Train Score: {train_score:.3f}")
print(f"Test Score: {test_score:.3f}")

# Scale the data
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Train the Logistic Regression model on the scaled data and print the model score
classifier = LogisticRegression(max_iter=10000)
classifier.fit(X_train_scaled, y_train)
train_score = classifier.score(X_train_scaled, y_train)
test_score = classifier.score(X_test_scaled, y_test)
print(f"Train Score: {train_score:.3f}")
print(f"Test Score: {test_score:.3f}")

# Train a Random Forest Classifier model on the scaled data and print the model score
rfc = RandomForestClassifier(random_state=42)
rfc.fit(X_train_scaled, y_train)
train_score = rfc.score(X_train_scaled, y_train)
test_score = rfc.score(X_test_scaled, y_test)
print(f"Train Score: {train_score:.3f}")
print(f"Test Score: {test_score:.3f}")
```
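The `debt_settlement_flag_Y` fix above works because only one dummy column happens to be missing from the test set. A more general way to line up the test set's dummy columns with the training set's is to reindex on the training columns. This is a sketch, not the notebook's original approach, and it assumes the dummified `X_train` and `X_test` frames created above:

```
# Align test dummies to the training columns: missing columns are added as 0,
# extra columns are dropped, and the column order matches X_train exactly.
X_test_aligned = X_test.reindex(columns=X_train.columns, fill_value=0)
print(X_test_aligned.shape, X_train.shape)
```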
``` given = """ Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 2 Grey LEFT LEFT 2 BLACK RIGHT RIGHT 2 Grey LEFT LEFT 2 BLACK RIGHT RIGHT 2 Grey EMPTY EMPTY 4 Grey LEFT LEFT 3 BLACK TOP TOP 1 BLACK EMPTY EMPTY 5 Grey TOP TOP 3 Grey RIGHT RIGHT 5 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK EMPTY EMPTY 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 Grey TOP TOP 5 BLACK RIGHT RIGHT 3 Grey LEFT LEFT 3 BLACK LEFT LEFT 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 Grey LEFT LEFT 3 Grey BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 Grey LEFT LEFT 3 BLACK BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 Grey LEFT LEFT 3 Grey LEFT LEFT 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 Grey LEFT LEFT 3 BLACK LEFT LEFT 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 5 Grey LEFT LEFT 5 Grey LEFT LEFT 2 BLACK RIGHT RIGHT 5 Grey RIGHT RIGHT 5 BLACK RIGHT RIGHT 5 Grey RIGHT RIGHT 5 Grey LEFT LEFT 4 BLACK TOP TOP 2 BLACK RIGHT RIGHT 5 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey LEFT LEFT 3 Grey TOP TOP 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey LEFT LEFT 3 Grey EMPTY EMPTY 1 Grey EMPTY EMPTY 2 MAP CIRCLE Grey LEFT LEFT 3 Grey EMPTY EMPTY 5 BLACK EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey LEFT LEFT 3 BLACK EMPTY EMPTY 1 BLACK EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 Grey LEFT LEFT 3 Grey RIGHT RIGHT 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey LEFT LEFT 3 BLACK RIGHT RIGHT 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey LEFT LEFT 3 Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 2 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey BOTTOM BOTTOM 2 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey EMPTY EMPTY 4 Grey LEFT LEFT 3 BLACK TOP TOP 1 BLACK EMPTY EMPTY 5 Grey TOP TOP 3 Grey BOTTOM BOTTOM 2 Grey TOP TOP 5 Grey LEFT LEFT 5 BLACK TOP TOP 4 Grey LEFT LEFT 2 BLACK LEFT LEFT 2 BLACK BOTTOM BOTTOM 5 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 Grey TOP TOP 4 Grey TOP TOP 2 BLACK TOP TOP 5 BLACK TOP TOP 4 Grey LEFT LEFT 2 Grey BOTTOM BOTTOM 3 Grey RIGHT RIGHT 2 Grey TOP TOP 5 Grey BOTTOM BOTTOM 2 BLACK RIGHT RIGHT 4 Grey LEFT LEFT 2 Grey LEFT LEFT 3 BLACK LEFT LEFT 2 BLACK BOTTOM BOTTOM 5 BLACK RIGHT RIGHT 4 Grey LEFT LEFT 2 Grey LEFT LEFT 3 BLACK LEFT LEFT 2 BLACK BOTTOM BOTTOM 5 BLACK LEFT LEFT 2 BLACK BOTTOM BOTTOM 5 Grey TOP TOP 5 Grey LEFT LEFT 2 Grey RIGHT RIGHT 5 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey LEFT LEFT 2 Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 2 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey EMPTY EMPTY 4 Grey LEFT LEFT 3 MAP CIRCLE BLACK EMPTY EMPTY 5 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK EMPTY EMPTY 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK RIGHT RIGHT 3 Grey RIGHT RIGHT 3 BLACK LEFT LEFT 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM 
BOTTOM 2 Grey BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 Grey EMPTY EMPTY 3 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 Grey LEFT LEFT 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK LEFT LEFT 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey TOP TOP 4 Grey TOP TOP 2 BLACK TOP TOP 5 Grey LEFT LEFT 2 Grey TOP TOP 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 BLACK LEFT LEFT 5 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 Grey EMPTY EMPTY 5 BLACK EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 Grey EMPTY EMPTY 1 Grey EMPTY EMPTY 2 MAP CIRCLE BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 Grey EMPTY EMPTY 3 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 BLACK EMPTY EMPTY 1 BLACK EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 Grey EMPTY EMPTY 3 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 Grey EMPTY EMPTY 3 BLACK TOP TOP 4 Grey EMPTY EMPTY 3 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 BLACK RIGHT RIGHT 1 Grey EMPTY EMPTY 2 BLACK TOP TOP 3 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 4 Grey EMPTY EMPTY 3 BLACK LEFT LEFT 4 Grey BOTTOM BOTTOM 2 BLACK RIGHT RIGHT 1 Grey EMPTY EMPTY 2 BLACK TOP TOP 3 Grey LEFT LEFT 2 BLACK LEFT LEFT 4 Grey LEFT LEFT 2 BLACK BOTTOM BOTTOM 4 Grey RIGHT RIGHT 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 Grey EMPTY EMPTY 3 BLACK LEFT LEFT 4 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 Grey LEFT LEFT 2 BLACK BOTTOM BOTTOM 4 Grey LEFT LEFT 2 BLACK LEFT LEFT 4 Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 2 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey EMPTY EMPTY 4 Grey LEFT LEFT 3 BLACK TOP TOP 1 BLACK EMPTY EMPTY 5 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey LEFT LEFT 2 BLACK EMPTY EMPTY 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK RIGHT RIGHT 3 Grey RIGHT RIGHT 3 BLACK LEFT LEFT 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 Grey EMPTY EMPTY 3 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 Grey LEFT LEFT 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK TOP TOP 4 Grey EMPTY EMPTY 3 Grey LEFT LEFT 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 Grey LEFT LEFT 5 Grey LEFT LEFT 2 Grey RIGHT RIGHT 2 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK LEFT LEFT 5 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 Grey TOP TOP 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 Grey EMPTY EMPTY 1 Grey EMPTY EMPTY 2 BLACK TOP TOP 3 Grey EMPTY EMPTY 3 Grey EMPTY EMPTY 5 BLACK EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 BLACK EMPTY EMPTY 1 BLACK EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 BLACK LEFT LEFT 4 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 BLACK EMPTY EMPTY 1 BLACK EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey LEFT LEFT 2 BLACK LEFT LEFT 4 Grey LEFT LEFT 2 BLACK BOTTOM 
BOTTOM 4 Grey EMPTY EMPTY 5 BLACK EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 BLACK LEFT LEFT 4 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey EMPTY EMPTY 5 BLACK EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey LEFT LEFT 2 BLACK LEFT LEFT 4 Grey LEFT LEFT 2 BLACK BOTTOM BOTTOM 4 Grey EMPTY EMPTY 5 BLACK EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 3 Grey EMPTY EMPTY 5 BLACK EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey TOP TOP 5 Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 MAP CIRCLE BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey EMPTY EMPTY 4 Grey LEFT LEFT 3 BLACK TOP TOP 1 BLACK EMPTY EMPTY 5 Grey RIGHT RIGHT 4 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK EMPTY EMPTY 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 Grey EMPTY EMPTY 3 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 BLACK RIGHT RIGHT 3 Grey RIGHT RIGHT 3 BLACK LEFT LEFT 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 Grey EMPTY EMPTY 3 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 BLACK BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 Grey EMPTY EMPTY 3 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 Grey BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 Grey EMPTY EMPTY 3 BLACK TOP TOP 4 Grey BOTTOM BOTTOM 2 Grey LEFT LEFT 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 4 Grey EMPTY EMPTY 3 BLACK LEFT LEFT 4 Grey BOTTOM BOTTOM 2 BLACK BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 BLACK LEFT LEFT 4 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey TOP TOP 4 Grey TOP TOP 2 BLACK TOP TOP 5 Grey RIGHT RIGHT 2 Grey TOP TOP 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 BLACK LEFT LEFT 5 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 Grey EMPTY EMPTY 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 Grey EMPTY EMPTY 1 Grey EMPTY EMPTY 2 BLACK TOP TOP 3 Grey LEFT LEFT 2 BLACK BOTTOM BOTTOM 4 Grey LEFT LEFT 2 BLACK LEFT LEFT 4 Grey EMPTY EMPTY 1 Grey EMPTY EMPTY 2 Grey EMPTY EMPTY 1 Grey EMPTY EMPTY 2 Grey EMPTY EMPTY 1 Grey EMPTY EMPTY 2 Grey EMPTY EMPTY 1 Grey EMPTY EMPTY 2 Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 2 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey EMPTY EMPTY 4 Grey LEFT LEFT 3 BLACK TOP TOP 1 BLACK EMPTY EMPTY 5 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK EMPTY EMPTY 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK RIGHT RIGHT 3 Grey RIGHT RIGHT 3 BLACK LEFT LEFT 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 Grey LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey LEFT LEFT 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK LEFT LEFT 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey TOP TOP 4 Grey TOP TOP 2 BLACK TOP TOP 5 Grey LEFT LEFT 2 Grey TOP TOP 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 BLACK LEFT 
LEFT 5 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 BLACK LEFT LEFT 4 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 BLACK LEFT LEFT 5 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 BLACK RIGHT RIGHT 4 Grey LEFT LEFT 2 Grey LEFT LEFT 5 BLACK TOP TOP 4 Grey LEFT LEFT 2 BLACK LEFT LEFT 2 Grey TOP TOP 3 BLACK RIGHT RIGHT 4 Grey LEFT LEFT 2 Grey TOP TOP 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 BLACK LEFT LEFT 4 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 MAP CIRCLE BLACK TOP TOP 2 BLACK LEFT LEFT 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 Grey TOP TOP 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey LEFT LEFT 2 BLACK LEFT LEFT 4 Grey LEFT LEFT 2 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 2 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey EMPTY EMPTY 4 Grey LEFT LEFT 3 BLACK TOP TOP 1 BLACK EMPTY EMPTY 5 Grey RIGHT RIGHT 4 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 BLACK EMPTY EMPTY 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 BLACK RIGHT RIGHT 3 Grey RIGHT RIGHT 3 BLACK LEFT LEFT 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 Grey LEFT LEFT 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 Grey EMPTY EMPTY 3 BLACK LEFT LEFT 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 Grey EMPTY EMPTY 3 Grey LEFT LEFT 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 Grey LEFT LEFT 2 BLACK BOTTOM BOTTOM 4 BLACK LEFT LEFT 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 2 Grey RIGHT RIGHT 4 Grey LEFT LEFT 2 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 2 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey EMPTY EMPTY 4 Grey LEFT LEFT 3 BLACK TOP TOP 1 BLACK EMPTY EMPTY 5 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK EMPTY EMPTY 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK RIGHT RIGHT 3 Grey RIGHT RIGHT 3 BLACK LEFT LEFT 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK EMPTY EMPTY 4 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 MAP CIRCLE BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey LEFT LEFT 2 BLACK BOTTOM BOTTOM 4 BLACK BOTTOM BOTTOM 1 Grey EMPTY EMPTY 2 BLACK BOTTOM BOTTOM 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey LEFT LEFT 2 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 2 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey EMPTY EMPTY 5 Grey LEFT LEFT 3 BLACK TOP TOP 1 BLACK EMPTY EMPTY 5 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 BLACK EMPTY EMPTY 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 BLACK RIGHT RIGHT 3 Grey RIGHT 
RIGHT 3 BLACK LEFT LEFT 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 BLACK BOTTOM BOTTOM 4 BLACK BOTTOM BOTTOM 2 BLACK RIGHT RIGHT 3 Grey RIGHT RIGHT 3 BLACK LEFT LEFT 3 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey LEFT LEFT 2 BLACK BOTTOM BOTTOM 4 Grey EMPTY EMPTY 4 Grey LEFT LEFT 3 BLACK TOP TOP 1 BLACK EMPTY EMPTY 5 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 2 BLACK LEFT LEFT 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey EMPTY EMPTY 4 Grey LEFT LEFT 3 BLACK TOP TOP 1 BLACK EMPTY EMPTY 5 Grey TOP TOP 3 Grey LEFT LEFT 2 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 2 BLACK BOTTOM BOTTOM 2 Grey TOP TOP 3 Grey EMPTY EMPTY 3 Grey BOTTOM BOTTOM 5 Grey BOTTOM BOTTOM 2 Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 2 BLACK BOTTOM BOTTOM 2 Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 2 BLACK BOTTOM BOTTOM 2 Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 Grey RIGHT RIGHT 2 Grey RIGHT RIGHT 2 Grey BOTTOM BOTTOM 4 Grey BOTTOM BOTTOM 2 """ transcript = """ I AM SAM. I AM SAM. SAM I AM. THAT SAM-I-AM! THAT SAM-I-AM! I DO NOT LIKE THAT SAM-I-AM! DO WOULD YOU LIKE GREEN EGGS AND HAM? I DO NOT LIKE THEM,SAM-I-AM. I DO NOT LIKE GREEN EGGS AND HAM. WOULD YOU LIKE THEM HERE OR THERE? I WOULD NOT LIKE THEM HERE OR THERE. I WOULD NOT LIKE THEM ANYWHERE. I DO NOT LIKE GREEN EGGS AND HAM. I DO NOT LIKE THEM, SAM-I-AM. WOULD YOU LIKE THEM IN A HOUSE? WOULD YOU LIKE THEN WITH A MOUSE? I DO NOT LIKE THEM IN A HOUSE. I DO NOT LIKE THEM WITH A MOUSE. I DO NOT LIKE THEM HERE OR THERE. I DO NOT LIKE THEM ANYWHERE. I DO NOT LIKE GREEN EGGS AND HAM. I DO NOT LIKE THEM, SAM-I-AM. WOULD YOU EAT THEM IN A BOX? WOULD YOU EAT THEM WITH A FOX? NOT IN A BOX. NOT WITH A FOX. NOT IN A HOUSE. NOT WITH A MOUSE. I WOULD NOT EAT THEM HERE OR THERE. I WOULD NOT EAT THEM ANYWHERE. I WOULD NOT EAT GREEN EGGS AND HAM. I DO NOT LIKE THEM, SAM-I-AM. WOULD YOU? COULD YOU? IN A CAR? EAT THEM! EAT THEM! HERE THEY ARE. I WOULD NOT, COULD NOT, IN A CAR. YOU MAY LIKE THEM. YOU WILL SEE. YOU MAY LIKE THEM IN A TREE! I WOULD NOT, COULD NOT IN A TREE. NOT IN A CAR! YOU LET ME BE. I DO NOT LIKE THEM IN A BOX. I DO NOT LIKE THEM WITH A FOX. I DO NOT LIKE THEM IN A HOUSE. I DO NOT LIKE THEM WITH A MOUSE. I DO NOT LIKE THEM HERE OR THERE. I DO NOT LIKE THEM ANYWHERE. I DO NOT LIKE GREEN EGGS AND HAM. I DO NOT LIKE THEM, SAM-I-AM. A TRAIN! A TRAIN! A TRAIN! A TRAIN! COULD YOU, WOULD YOU ON A TRAIN? NOT ON TRAIN! NOT IN A TREE! NOT IN A CAR! SAM! LET ME BE! I WOULD NOT, COULD NOT, IN A BOX. I WOULD NOT, COULD NOT, WITH A FOX. I WILL NOT EAT THEM IN A HOUSE. I WILL NOT EAT THEM HERE OR THERE. I WILL NOT EAT THEM ANYWHERE. I DO NOT EAT GREEM EGGS AND HAM. I DO NOT LIKE THEM, SAM-I-AM. SAY! IN THE DARK? HERE IN THE DARK! WOULD YOU, COULD YOU, IN THE DARK? I WOULD NOT, COULD NOT, IN THE DARK. WOULD YOU COULD YOU IN THE RAIN? I WOULD NOT, COULD NOT IN THE RAIN. NOT IN THE DARK. NOT ON A TRAIN. NOT IN A CAR. NOT IN A TREE. I DO NOT LIKE THEM, SAM, YOU SEE. NOT IN A HOUSE. NOT IN A BOX. NOT WITH A MOUSE. NOT WITH A FOX. I WILL NOT EAT THEM HERE OR THERE. I DO NOT LIKE THEM ANYWHERE! YOU DO NOT LIKE GREEN EGGS AND HAM? I DO NOT LIKE THEM, SAM-I-AM. COULD YOU, WOULD YOU, WITH A GOAT? I WOULD NOT, COULD NOT WITH A GOAT! WOULD YOU, COULD YOU, ON A BOAT? I COULD NOT, WOULD NOT, ON A BOAT. I WILL NOT, WILL NOT, WITH A GOAT. I WILL NOT EAT THEM IN THE RAIN. NOT IN THE DARK! 
NOT IN A TREE! NOT IN A CAR! YOU LET ME BE! I DO NOT LIKE THEM IN A BOX. I DO NOT LIKE THEM WITH A FOX. I WILL NOT EAT THEM IN A HOUSE. I DO NOT LIKE THEM WITH A MOUSE. I DO NOT LIKE THEM HERE OR THERE. I DO NOT LIKE THEM ANYWHERE! I DO NOT LIKE GREEN EGGS AND HAM! I DO NOT LIKE THEM, SAM-I-AM. YOU DO NOT LIKE THEM. SO YOU SAY. TRY THEM! TRY THEM! AND YOU MAY. TRY THEM AND YOU MAY, I SAY. sAM! IF YOU LET ME BE, I WILL TRY THEM. YOU WILL SEE. (... and he tries them ...) SAY! I LIKE GREEN EGGS AND HAM! I DO! I LIKE THEM, SAM-I-AM! AND I WOULD EAT THEM IN A BOAT. AND I WOULD EAT THEM WITH A GOAT... AND I WILL EAT THEM, IN THE RAIN. AND IN THE DARK. AND ON A TRAIN. AND IN A CAR. AND IN A TREE. THEY ARE SO GOOD, SO GOOD, YOU SEE! SO I WILL EAT THEM IN A BOX. AND I WILL EAT THEM WITH A FOX. AND I WILL EAT THEM IN A HOUSE. AND I WILL EAT THEM WITH A MOUSE. AND I WILL EAT THEM HERE AND THERE. SAY! I WILL EAT THEM ANYWHERE! I DO SO LIKE GREEN EGGS AND HAM! THANK YOU! THANK YOU, SAM I AM. """ VALUES = { 'GREY': 0, 'BLACK': 0, 'EMPTY': 0, 'TOP': 1, 'RIGHT': 2, 'BOTTOM': 3, 'LEFT': 4, '1': 0, '2': 1, '3': 2, '4': 3, '5': 4, } def parse(txt): chunk = [] result = [chunk] for line in txt.strip('\n').split('\n'): if 'MAP CIRCLE' in line: print('chunk length', len(chunk)) chunk = [] result.append(chunk) continue color, _, position, fill = line.upper().split('\t') chunk.append((color, position, fill)) #chunk.append((VALUES[color], VALUES[position], VALUES[fill])) return result import collections parsed = parse(given) ngram_qs = [ collections.deque([], maxlen=i) for i in range(1, 10) ] counter = collections.Counter() for chunk in parsed: for ngram_q in ngram_qs: ngram_q.clear() for piece in chunk: for ngram_q in ngram_qs: ngram_q.append(''.join(map(str, piece))) if len(ngram_q) == ngram_q.maxlen: # print(ngram_q.maxlen, '\n', ngram_q) counter[','.join(ngram_q)] += 1 for s, freq in sorted(counter.items(), key=lambda x: x[1], reverse=True): print(str(freq).rjust(8), s) if freq == 1: break N = 8 # ⇦ ⇨ ⇧ ⇩ # ⬅ ➡ ⬆ ⬇ VIZ = { 'BLACKTOP': '\t⬆', 'BLACKRIGHT': '\t➡', 'BLACKBOTTOM': '\t⬇', 'BLACKLEFT': '\t⬅', 'BLACKEMPTY': '\t⬤', 'GREYTOP': '\t⇧', 'GREYRIGHT': '\t⇨', 'GREYBOTTOM': '\t⇩', 'GREYLEFT': '\t⇦', 'GREYEMPTY': '\t◯', # '1': '▕', # '2': '▕▎', # '3': '▕▍', # '4': '▕▋', # '5': '▕▉', } def print_lines(n_start, lines): for offset, line in enumerate(lines): n = n_start + offset if n % N == 0: nth = '*' else: nth = ' ' #if len(line) > 2: # for k, v in VIZ.items(): # line = line.replace(k, v) print(str(n).ljust(3), nth, line) for chunk in parsed: buffer.clear() print('-' * 20) for i, piece in enumerate(chunk): if len(buffer) == buffer.maxlen: word = ','.join(buffer) if word in SEQUENCES: print_lines(i - buffer.maxlen, ['%s = %s' % (word, SEQUENCES[word])] * 5) buffer.clear() buffer.append(''.join(map(str, piece))) # Don't lose this piece. 
continue print_lines(i - buffer.maxlen, [buffer[0]]) buffer.append(''.join(map(str, piece))) """ 1BOTTOMBLACK 1BOTTOMGrey 1EMPTYBLACK 1EMPTYGrey 1LEFTBLACK 1LEFTGrey 1RIGHTBLACK 1RIGHTGrey 1TOPBLACK EGGS 1TOPGrey 2BOTTOMBLACK THAT 2BOTTOMGrey I 2EMPTYBLACK 2EMPTYGrey 2LEFTBLACK 2LEFTGrey YOU 2RIGHTBLACK THANK 2RIGHTGrey SAM 2TOPBLACK 2TOPGrey 3BOTTOMBLACK 3BOTTOMGrey 3EMPTYBLACK 3EMPTYGrey 3LEFTBLACK 3LEFTGrey AND 3RIGHTBLACK 3RIGHTGrey 3TOPBLACK 3TOPGrey 4BOTTOMBLACK 4BOTTOMGrey AM 4EMPTYBLACK 4EMPTYGrey HAM 4LEFTBLACK 4LEFTGrey 4RIGHTBLACK 4RIGHTGrey 4TOPBLACK 4TOPGrey 5BOTTOMBLACK 5BOTTOMGrey 5EMPTYBLACK 5EMPTYGrey 5LEFTBLACK 5LEFTGrey 5RIGHTBLACK 5RIGHTGrey 5TOPBLACK 5TOPGrey """ WORDS = { 'BLACKTOP1': 'EGGS', 'GREYLEFT3': 'AND', 'GREYEMPTY4': 'HAM', 'GREYEMPTY2': 'A', 'GREYEMPTY1': 'TRAIN', 'BLACKLEFT4': 'COULD', 'GREYLEFT2': 'YOU', 'BLACKBOTTOM4': 'WOULD', 'BLACKTOP3': 'ON', 'GREYEMPTY3': 'NOT', 'BLACKBOTTOM3': 'IN', 'BLACKLEFT5': 'TREE', 'GREYTOP1': 'CAR', 'GREYRIGHT2': 'SAM', 'BLACKTOP5': 'LET', 'GREYTOP2': 'ME', 'GREYTOP4': 'BE', 'GREYBOTTOM2': 'I', 'BLACKBOTTOM1': 'BOX', 'BLACKEMPTY4': 'IN', 'GREYLEFT1': 'FOX', 'BLACKTOP4': 'WILL', 'GREYRIGHT4': 'EAT', 'BLACKLEFT2': 'THEM', 'GREYBOTTOM1': 'HOUSE', 'BLACKLEFT3': 'HERE', 'GREYRIGHT3': 'OR', 'BLACKRIGHT3': 'THERE', 'BLACKEMPTY3': '' } for chunk in reversed(parsed): for i, piece in enumerate(reversed(chunk)): word = ''.join(map(str, piece)) if word in WORDS: print(WORDS[word]) else: print(word) ```
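One caveat for anyone re-running this notebook: the visualization loop above references `buffer` and `SEQUENCES` without defining them in the cells shown, so it will raise a `NameError` as written. Presumably they were created in an earlier cell that is missing here. A hypothetical placeholder, clearly labeled as a guess, might look like the following; the true window length and sequence-to-word mapping are not recoverable from this notebook:

```
import collections

# Hypothetical reconstruction -- both names are used above but never defined here.
buffer = collections.deque([], maxlen=N)  # rolling window of recent pieces; N = 8 as set above
SEQUENCES = {}  # placeholder: {comma-joined piece sequence: decoded word/phrase}, contents not shown
```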
# Demonstration of integrating POI Points to OSM road network

1. Use any way you like to get the sample [POI data](https://assets.onemap.sg/shp/supermarkets.zip) consisting of supermarkets from [OneMap SG](https://www.onemap.sg/).
2. Use [OSMnx](https://osmnx.readthedocs.io/en/stable/index.html) to download the pedestrian network from [OpenStreetMap](https://openstreetmap.org); we use a bounding box of Toa Payoh for the demo.
3. Save the network as `.shp` and read it in as two `GeoDataFrame`s: junctions as `nodes` and road segments as `edges`.
4. Integrate the POIs into the network using the `connect_poi` function.

```
import os
import wget
import osmnx as ox
import geopandas as gpd

from toolbox import connect_poi
```

## 1. Prepare POIs

```
# get POI data
url = "https://assets.onemap.sg/shp/supermarkets.zip"
PATH = 'data/supermarkets.zip'
if os.path.exists(PATH):
    print('File existed.')
else:
    PATH = wget.download(url, PATH)
    print('File downloaded.')

# load and subset the POI based on a bounding box
bbox = (103.8427, 1.3308, 103.8601, 1.3416)  # set bbox of Toa Payoh
pois = gpd.read_file('supermarkets', vfs='zip://{}'.format(PATH), crs='epsg:3857')
pois = pois.to_crs(epsg=4326)
pois['lon'] = pois['geometry'].apply(lambda p: p.x)
pois['lat'] = pois['geometry'].apply(lambda p: p.y)
pois = pois[(pois['lon'] >= bbox[0]) & (pois['lon'] <= bbox[2]) &
            (pois['lat'] >= bbox[1]) & (pois['lat'] <= bbox[3])]
pois['key'] = pois.index  # set a primary key column
pois.head(3)
```

[NOTE] For use in pandana, you may want to ensure the key column for the input is numeric-only to avoid processing errors. Preferably use unique integers (int or str) only, and be aware not to intersect with the node key ('osmid' if you use OSM data) in the nodes gdf.

## 2. Prepare network

```
# get road network and save as .shp
G = ox.graph_from_bbox(bbox[3], bbox[1], bbox[2], bbox[0], network_type='walk')
ox.save_graph_shapefile(G, filepath='data/sample/', encoding='utf-8')

# load as GeoDataFrame
nodes = gpd.read_file('data/sample/nodes.shp')
edges = gpd.read_file('data/sample/edges.shp')
```

## 3. Integrate POIs and network

```
connect_poi?

# it's a one-liner, but is still in beta at the moment
new_nodes, new_edges = connect_poi(pois, nodes, edges, key_col='key', path=None)
```

## 4. Check output

1. First is an example of how an edge will be broken into segments when there is a POI to be linked onto it. This process accommodates multiple POIs: e.g., for 2 POIs projecting onto the same edge (but not overlapping, nor lying on either vertex of the edge), the edge will be replaced with 3 segments.
2. Then a figure illustrating what the new network looks like after the update.

[NOTE] Note that the aggregated length of the segments will not exactly equal the length of the original edge, for reasons that are not handled at the moment.
```
# original edge
edges[edges['from'] == 3370311549][['from', 'to', 'length']]

# new edges replacing the original (953, 954) and connecting the poi (964)
new_edges[(new_edges['from'] == 3370311549) |
          (new_edges['from'] == 9990000005) |
          (new_edges['to'] == 9990000005)][['from', 'to', 'length']]

# output
poi_links = new_edges[new_edges['highway'] == 'projected_footway']
ax = edges.plot(linewidth=0.8, figsize=(18,10), label='Original Road Edges')
poi_links.plot(color='indianred', linewidth=2, ax=ax, label='New Connection Edges')
pois.plot(color='indianred', marker='.', markersize=200, ax=ax, label='POI')
ax.legend(loc=2, fontsize=18)
ax.set_title('The integrated network of supermarkets and road network at Toa Payoh', fontsize=22);
```
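A quick way to see the length caveat from the note above in numbers is to compare the total edge length before and after the integration, excluding the new POI connectors. This is a small sketch assuming the `edges`/`new_edges` GeoDataFrames created above; the exact figures will depend on the network downloaded at run time:

```
# Total length of the original edges vs. the re-segmented edges after integration
# (POI connector edges are excluded); the small difference reflects the note above.
orig_total = edges['length'].sum()
new_total = new_edges.loc[new_edges['highway'] != 'projected_footway', 'length'].sum()
print('original total length:     ', round(orig_total, 2))
print('re-segmented total length: ', round(new_total, 2))
print('difference:                ', round(new_total - orig_total, 2))
```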
``` import numpy as np import pandas as pd import scipy import scipy.stats import verdict import matplotlib.pyplot as plt ``` # Parameters ``` sample_number = 0 ``` # Load Mock Data ``` # Load using my preferred wrapper for hdf5 files. data = verdict.Dict.from_hdf5( './data/synthetic_data/sample{}/observers_file.h5'.format( sample_number ) ) ``` # Assign Errors Our goal is to create semi-realistic errors. We won't put too much effort into this, so we'll roughly fit a gamma function to the distribution of errors from COS-Halos data, and sample from the distribution. We do provide a bound on 1.5 the maximum observed error, however, to prevent getting anything too crazy. ## Derive Errors from Data ``` df = pd.read_fwf( './data/cos_halos_data.txt', skiprows = 31, names = [ 'ID', 'z', 'Ion', 'Vel', 'e_Vel', 'b', 'e_b', 'logN', 'elogN', 'l_logNA', ',logNA', 'e_logNA' ], ) # Find ions used ions = list( set( df['Ion'] ) ) ions.remove( np.nan ) ions.append( 'H I' ) # Get out errors from data, mildly processed elogNs = {} for ion in ions: if ion != 'H I': # Locate ion_df = df[df['Ion'] == ion] elogN = [] for e in ion_df['elogN'].values: try: elogN.append( float( e ) ) except ValueError: continue else: # Manual H I input from COS-Halos hydrogen paper elogN = [ 0.03, 0.05, 0.05, 0.03, 0.09, 0.03, 0.15, 0.15, 0.08, 0.06, 0.03, 0.06, 0.03, 0.04, 0.03, 0.13, 0.06, 0.04, 0.05, 0.06, 0.02, 0.04, 0.06, 0.04, 0.04, 0.03, 0.07, 0.06, 0.10, 0.12, 0.06, 0.02, 0.03, 0.09, 0.04, 0.03, 0.04, ] elogNs[ion] = np.array( elogN ) # Stats of the errors errs = {} for ion in ions: elogN = elogNs[ion] # Extract errs[ion] = { 'mean': np.mean( elogN ), 'std': np.std( elogN ), } # Create a gamma fn, *very* roughly fitted beta = errs[ion]['mean'] / errs[ion]['std'] alpha = errs[ion]['mean'] * beta dist = scipy.stats.gamma( a=alpha, scale=1/beta ) # Plot and store fig = plt.figure() ax = plt.gca() ax.hist( elogN, bins = np.linspace( 0, elogN.max(), 32 ) ) ax.plot( np.linspace( 0, elogN.max(), 128 ), dist.pdf( np.linspace( 0, elogN.max(), 128 ) ), ) ax.annotate( s = ion, xy = ( 1, 1 ), xytext = ( -5, -5 ), va = 'top', ha = 'right', xycoords = 'axes fraction', textcoords = 'offset points', fontsize = 20, ) errs[ion]['dist'] = dist ``` ## Apply to Data ``` data_out = {} for ion in data.keys(): dist = errs[ion]['dist'] ion_errs = [] modified_columns = [] for i, column in enumerate( data[ion] ): ion_err = np.inf while ( ion_err > elogNs[ion].max()*1.5 ) or ( ion_err < elogNs[ion].min() / 1.5 ): ion_err = dist.rvs() # Assume error is conservative, apply modified_column = column * 10.**np.random.uniform( -ion_err, ion_err ) # Round and store ion_errs.append( np.round( ion_err, decimals=3 ) ) modified_columns.append( np.round( np.log10( modified_column ), decimals=3 ) ) data_out[ion] = { 'logN': np.array( modified_columns ), 'elogN': np.array( ion_errs ), } data_out = verdict.Dict( data_out ) ``` # Save Output ``` data_out.to_hdf5( './data/sample{}.h5'.format( sample_number ) ) ``` # Reload for Checking ``` data_out = verdict.Dict.from_hdf5( './data/synthetic_data_samples/sample{}.h5'.format( sample_number ) ) ```
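As a final sanity check (a sketch, assuming the `data_out` and `elogNs` objects from this notebook are still in memory), the assigned errors can be compared against the 1.5x bounds enforced during sampling:

```
# Each ion's assigned errors should stay within [min/1.5, 1.5*max] of the observed COS-Halos errors.
for ion in data_out.keys():
    e = data_out[ion]['elogN']
    lo = elogNs[ion].min() / 1.5
    hi = elogNs[ion].max() * 1.5
    print('{}: assigned {:.3f}-{:.3f}, allowed {:.3f}-{:.3f}'.format(ion, e.min(), e.max(), lo, hi))
```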
``` from google.colab import drive drive.mount('/content/drive') # from google.colab import drive # drive.mount('/content/drive') !pwd path = '/content/drive/MyDrive/Research/AAAI/cifar_new/k_001/sixth_run1_' import torch.nn as nn import torch.nn.functional as F import pandas as pd import numpy as np import matplotlib.pyplot as plt import torch import torchvision import torchvision.transforms as transforms from torch.utils.data import Dataset, DataLoader from torchvision import transforms, utils from matplotlib import pyplot as plt import copy # Ignore warnings import warnings warnings.filterwarnings("ignore") n_seed = 0 k = 0.001 torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark= False transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=10, shuffle=False) testloader = torch.utils.data.DataLoader(testset, batch_size=10, shuffle=False) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') foreground_classes = {'plane', 'car', 'bird'} #foreground_classes = {'bird', 'cat', 'deer'} background_classes = {'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'} #background_classes = {'plane', 'car', 'dog', 'frog', 'horse','ship', 'truck'} fg1,fg2,fg3 = 0,1,2 dataiter = iter(trainloader) background_data=[] background_label=[] foreground_data=[] foreground_label=[] batch_size=10 for i in range(5000): images, labels = dataiter.next() for j in range(batch_size): if(classes[labels[j]] in background_classes): img = images[j].tolist() background_data.append(img) background_label.append(labels[j]) else: img = images[j].tolist() foreground_data.append(img) foreground_label.append(labels[j]) foreground_data = torch.tensor(foreground_data) foreground_label = torch.tensor(foreground_label) background_data = torch.tensor(background_data) background_label = torch.tensor(background_label) def create_mosaic_img(bg_idx,fg_idx,fg): """ bg_idx : list of indexes of background_data[] to be used as background images in mosaic fg_idx : index of image to be used as foreground image from foreground data fg : at what position/index foreground image has to be stored out of 0-8 """ image_list=[] j=0 for i in range(9): if i != fg: image_list.append(background_data[bg_idx[j]])#.type("torch.DoubleTensor")) j+=1 else: image_list.append(foreground_data[fg_idx])#.type("torch.DoubleTensor")) label = foreground_label[fg_idx]- fg1 # minus fg1 because our fore ground classes are fg1,fg2,fg3 but we have to store it as 0,1,2 #image_list = np.concatenate(image_list ,axis=0) image_list = torch.stack(image_list) return image_list,label desired_num = 30000 mosaic_list_of_images =[] # list of mosaic images, each mosaic image is saved as list of 9 images fore_idx =[] # list of indexes at which foreground image is present in a mosaic image i.e from 0 to 9 mosaic_label=[] # label of mosaic image = foreground class present in that mosaic for i in range(desired_num): np.random.seed(i) bg_idx = np.random.randint(0,35000,8) fg_idx = np.random.randint(0,15000) fg = np.random.randint(0,9) fore_idx.append(fg) image_list,label = create_mosaic_img(bg_idx,fg_idx,fg) mosaic_list_of_images.append(image_list) mosaic_label.append(label) 
plt.imshow(torch.transpose(mosaic_list_of_images[0][1],dim0= 0,dim1 = 2)) class MosaicDataset(Dataset): """MosaicDataset dataset.""" def __init__(self, mosaic_list_of_images, mosaic_label, fore_idx): """ Args: csv_file (string): Path to the csv file with annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ self.mosaic = mosaic_list_of_images self.label = mosaic_label self.fore_idx = fore_idx def __len__(self): return len(self.label) def __getitem__(self, idx): return self.mosaic[idx] , self.label[idx], self.fore_idx[idx] batch = 250 msd = MosaicDataset(mosaic_list_of_images, mosaic_label , fore_idx) train_loader = DataLoader( msd,batch_size= batch ,shuffle=True) class Focus(nn.Module): def __init__(self): super(Focus, self).__init__() self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=0,bias=False) self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=0,bias=False) self.conv3 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=0,bias=False) self.conv4 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=0,bias=False) self.conv5 = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, padding=0,bias=False) self.conv6 = nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=1,bias=False) self.pool = nn.MaxPool2d(kernel_size=2, stride=2) self.batch_norm1 = nn.BatchNorm2d(32,track_running_stats=False) self.batch_norm2 = nn.BatchNorm2d(64,track_running_stats=False) self.batch_norm3 = nn.BatchNorm2d(256,track_running_stats=False) self.dropout1 = nn.Dropout2d(p=0.05) self.dropout2 = nn.Dropout2d(p=0.1) self.fc1 = nn.Linear(256,64,bias=False) self.fc2 = nn.Linear(64, 32,bias=False) self.fc3 = nn.Linear(32, 10,bias=False) self.fc4 = nn.Linear(10, 1,bias=False) torch.nn.init.xavier_normal_(self.conv1.weight) torch.nn.init.xavier_normal_(self.conv2.weight) torch.nn.init.xavier_normal_(self.conv3.weight) torch.nn.init.xavier_normal_(self.conv4.weight) torch.nn.init.xavier_normal_(self.conv5.weight) torch.nn.init.xavier_normal_(self.conv6.weight) torch.nn.init.xavier_normal_(self.fc1.weight) torch.nn.init.xavier_normal_(self.fc2.weight) torch.nn.init.xavier_normal_(self.fc3.weight) torch.nn.init.xavier_normal_(self.fc4.weight) def forward(self,z): #y is avg image #z batch of list of 9 images y = torch.zeros([batch,256, 3,3], dtype=torch.float64) x = torch.zeros([batch,9],dtype=torch.float64) ftr = torch.zeros([batch,9,256,3,3]) y = y.to("cuda") x = x.to("cuda") ftr = ftr.to("cuda") for i in range(9): out,ftrs = self.helper(z[:,i]) #print(out.shape) x[:,i] = out ftr[:,i] = ftrs log_x = F.log_softmax(x,dim=1) # log_alpha x = F.softmax(x,dim=1) for i in range(9): x1 = x[:,i] y = y + torch.mul(x1[:,None,None,None],ftr[:,i]) return x, y, log_x, #alpha, log_alpha, avg_data def helper(self, x): #x1 = x #x1 =x x = self.conv1(x) x = F.relu(self.batch_norm1(x)) x = (F.relu(self.conv2(x))) x = self.pool(x) x = self.conv3(x) x = F.relu(self.batch_norm2(x)) x = (F.relu(self.conv4(x))) x = self.pool(x) x = self.dropout1(x) x = self.conv5(x) x = F.relu(self.batch_norm3(x)) x = self.conv6(x) x1 = F.tanh(x) x = F.relu(x) x = self.pool(x) x = x.view(x.size(0), -1) x = self.dropout2(x) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.dropout2(x) x = F.relu(self.fc3(x)) x = self.fc4(x) x = x[:,0] # print(x.shape) return x,x1 torch.manual_seed(n_seed) focus_net = Focus().double() focus_net = focus_net.to("cuda") class 
Classification(nn.Module): def __init__(self): super(Classification, self).__init__() self.conv1 = nn.Conv2d(in_channels=256, out_channels=128, kernel_size=3, padding=1) self.conv2 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=1) self.conv3 = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, padding=1) self.conv4 = nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=1) self.conv5 = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, padding=1) self.conv6 = nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, padding=1) self.pool = nn.MaxPool2d(kernel_size=2, stride=2,padding=1) self.batch_norm1 = nn.BatchNorm2d(128,track_running_stats=False) self.batch_norm2 = nn.BatchNorm2d(256,track_running_stats=False) self.batch_norm3 = nn.BatchNorm2d(512,track_running_stats=False) self.dropout1 = nn.Dropout2d(p=0.05) self.dropout2 = nn.Dropout2d(p=0.1) self.global_average_pooling = nn.AvgPool2d(kernel_size=2) self.fc1 = nn.Linear(512,128) # self.fc2 = nn.Linear(128, 64) # self.fc3 = nn.Linear(64, 10) self.fc2 = nn.Linear(128, 3) torch.nn.init.xavier_normal_(self.conv1.weight) torch.nn.init.xavier_normal_(self.conv2.weight) torch.nn.init.xavier_normal_(self.conv3.weight) torch.nn.init.xavier_normal_(self.conv4.weight) torch.nn.init.xavier_normal_(self.conv5.weight) torch.nn.init.xavier_normal_(self.conv6.weight) torch.nn.init.zeros_(self.conv1.bias) torch.nn.init.zeros_(self.conv2.bias) torch.nn.init.zeros_(self.conv3.bias) torch.nn.init.zeros_(self.conv4.bias) torch.nn.init.zeros_(self.conv5.bias) torch.nn.init.zeros_(self.conv6.bias) torch.nn.init.xavier_normal_(self.fc1.weight) torch.nn.init.xavier_normal_(self.fc2.weight) torch.nn.init.zeros_(self.fc1.bias) torch.nn.init.zeros_(self.fc2.bias) def forward(self, x): x = self.conv1(x) x = F.relu(self.batch_norm1(x)) x = (F.relu(self.conv2(x))) x = self.pool(x) x = self.conv3(x) x = F.relu(self.batch_norm2(x)) x = (F.relu(self.conv4(x))) x = self.pool(x) x = self.dropout1(x) x = self.conv5(x) x = F.relu(self.batch_norm3(x)) x = (F.relu(self.conv6(x))) x = self.pool(x) #print(x.shape) x = self.global_average_pooling(x) x = x.squeeze() #x = x.view(x.size(0), -1) #print(x.shape) x = self.dropout2(x) x = F.relu(self.fc1(x)) #x = F.relu(self.fc2(x)) #x = self.dropout2(x) #x = F.relu(self.fc3(x)) x = self.fc2(x) return x torch.manual_seed(n_seed) classify = Classification().double() classify = classify.to("cuda") test_images =[] #list of mosaic images, each mosaic image is saved as laist of 9 images fore_idx_test =[] #list of indexes at which foreground image is present in a mosaic image test_label=[] # label of mosaic image = foreground class present in that mosaic for i in range(10000): np.random.seed(i+30000) bg_idx = np.random.randint(0,35000,8) fg_idx = np.random.randint(0,15000) fg = np.random.randint(0,9) fore_idx_test.append(fg) image_list,label = create_mosaic_img(bg_idx,fg_idx,fg) test_images.append(image_list) test_label.append(label) test_data = MosaicDataset(test_images,test_label,fore_idx_test) test_loader = DataLoader( test_data,batch_size= batch ,shuffle=False) criterion = nn.CrossEntropyLoss() def my_cross_entropy(x, y,alpha,log_alpha,k): # log_prob = -1.0 * F.log_softmax(x, 1) # loss = log_prob.gather(1, y.unsqueeze(1)) # loss = loss.mean() loss = criterion(x,y) #alpha = torch.clamp(alpha,min=1e-10) b = -1.0* alpha * log_alpha b = torch.mean(torch.sum(b,dim=1)) closs = loss entropy = b loss = (1-k)*loss + ((k)*b) return loss,closs,entropy import torch.optim as optim # 
criterion_classify = nn.CrossEntropyLoss() optimizer_focus = optim.Adam(focus_net.parameters(), lr=0.001)#, momentum=0.9) optimizer_classify = optim.Adam(classify.parameters(), lr=0.001)#, momentum=0.9) col1=[] col2=[] col3=[] col4=[] col5=[] col6=[] col7=[] col8=[] col9=[] col10=[] col11=[] col12=[] col13=[] col14 = [] # train average sparsity col15 = [] # test average sparsity correct = 0 total = 0 count = 0 flag = 1 focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 sparse_val = 0 focus_net.eval() classify.eval() with torch.no_grad(): for data in train_loader: inputs, labels , fore_idx = data inputs = inputs.double() inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda") alphas, avg_images,_ = focus_net(inputs) # print(inputs.shape, alphas.shape, avg_images.shape) outputs = classify(avg_images) _, predicted = torch.max(outputs.data, 1) sparse_val += torch.sum(torch.sum(alphas>0.01,dim=1)).item() for j in range(labels.size(0)): count += 1 focus = torch.argmax(alphas[j]) if alphas[j][focus] >= 0.5 : argmax_more_than_half += 1 else: argmax_less_than_half += 1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true += 1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false += 1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false += 1 total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 30000 train images: %f %%' % ( 100 * correct / total)) print("total correct", correct) print("total train set images", total) print("focus_true_pred_true %d =============> FTPT : %f %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) ) print("focus_false_pred_true %d =============> FFPT : %f %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) ) print("focus_true_pred_false %d =============> FTPF : %f %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) ) print("focus_false_pred_false %d =============> FFPF : %f %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) ) print("Sparsity_Value %d =============> AVG Sparsity : %f " % (sparse_val,(sparse_val)/total)) print("argmax_more_than_half ==================> ",argmax_more_than_half) print("argmax_less_than_half ==================> ",argmax_less_than_half) print(count) print("="*100) col1.append(0) col2.append(argmax_more_than_half) col3.append(argmax_less_than_half) col4.append(focus_true_pred_true) col5.append(focus_false_pred_true) col6.append(focus_true_pred_false) col7.append(focus_false_pred_false) col14.append(sparse_val) correct = 0 total = 0 count = 0 flag = 1 focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 sparse_val = 0 focus_net.eval() classify.eval() with torch.no_grad(): for data in test_loader: inputs, labels , fore_idx = data inputs = inputs.double() inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda") alphas, avg_images,_ = focus_net(inputs) outputs = classify(avg_images) _, predicted = torch.max(outputs.data, 1) sparse_val += torch.sum(torch.sum(alphas>0.01,dim=1)).item() for j in range(labels.size(0)): focus = torch.argmax(alphas[j]) if alphas[j][focus] >= 0.5 : argmax_more_than_half += 1 else: 
argmax_less_than_half += 1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true += 1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false += 1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false += 1 total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %f %%' % (100 * correct / total)) print("total correct", correct) print("total test set images", total) print("focus_true_pred_true %d =============> FTPT : %f %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) ) print("focus_false_pred_true %d =============> FFPT : %f %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) ) print("focus_true_pred_false %d =============> FTPF : %f %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) ) print("focus_false_pred_false %d =============> FFPF : %f %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) ) print("Sparsity_Value %d =============> AVG Sparsity : %f " % (sparse_val,(sparse_val)/total)) print("argmax_more_than_half ==================> ",argmax_more_than_half) print("argmax_less_than_half ==================> ",argmax_less_than_half) col8.append(argmax_more_than_half) col9.append(argmax_less_than_half) col10.append(focus_true_pred_true) col11.append(focus_false_pred_true) col12.append(focus_true_pred_false) col13.append(focus_false_pred_false) col15.append(sparse_val) nos_epochs = 100 focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 focus_net.train() classify.train() tr_loss = [] for epoch in range(nos_epochs): # loop over the dataset multiple times focus_net.train() classify.train() focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 sparse_val = 0 running_loss = 0.0 epoch_loss = [] cnt=0 iteration = desired_num // batch #training data set for i, data in enumerate(train_loader): inputs , labels , fore_idx = data inputs = inputs.double() inputs, labels = inputs.to("cuda"), labels.to("cuda") # zero the parameter gradients optimizer_focus.zero_grad() optimizer_classify.zero_grad() alphas, avg_images,log_alphas = focus_net(inputs) outputs = classify(avg_images) # outputs, alphas, avg_images = classify(inputs) _, predicted = torch.max(outputs.data, 1) # print(outputs) # print(outputs.shape,labels.shape , torch.argmax(outputs, dim=1)) loss,_,_ = my_cross_entropy(outputs, labels,alphas,log_alphas,k) loss.backward() optimizer_focus.step() optimizer_classify.step() running_loss += loss.item() mini = 60 if cnt % mini == mini-1: # print every 40 mini-batches print('[%d, %5d] loss: %.3f' %(epoch + 1, cnt + 1, running_loss / mini)) epoch_loss.append(running_loss/mini) running_loss = 0.0 cnt=cnt+1 if epoch % 1 == 0: sparse_val += torch.sum(torch.sum(alphas>0.01,dim=1)).item() for j in range (batch): focus = torch.argmax(alphas[j]) if(alphas[j][focus] >= 0.5): argmax_more_than_half +=1 else: argmax_less_than_half +=1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true +=1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false +=1 elif(focus != fore_idx[j] and 
predicted[j] != labels[j]): focus_false_pred_false +=1 tr_loss.append(np.mean(epoch_loss)) if epoch % 1 == 0: col1.append(epoch+1) col2.append(argmax_more_than_half) col3.append(argmax_less_than_half) col4.append(focus_true_pred_true) col5.append(focus_false_pred_true) col6.append(focus_true_pred_false) col7.append(focus_false_pred_false) col14.append(sparse_val) #************************************************************************ #testing data set focus_net.eval() classify.eval() with torch.no_grad(): focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 sparse_val = 0 for data in test_loader: inputs, labels , fore_idx = data inputs = inputs.double() inputs, labels = inputs.to("cuda"), labels.to("cuda") alphas, avg_images,log_alphas = focus_net(inputs) outputs = classify(avg_images) #outputs, alphas, avg_images = classify(inputs) _, predicted = torch.max(outputs.data, 1) sparse_val += torch.sum(torch.sum(alphas>0.01,dim=1)).item() for j in range (batch): focus = torch.argmax(alphas[j]) if(alphas[j][focus] >= 0.5): argmax_more_than_half +=1 else: argmax_less_than_half +=1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true +=1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false +=1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false +=1 col8.append(argmax_more_than_half) col9.append(argmax_less_than_half) col10.append(focus_true_pred_true) col11.append(focus_false_pred_true) col12.append(focus_true_pred_false) col13.append(focus_false_pred_false) col15.append(sparse_val) if(np.mean(epoch_loss) <= 0.05): break; print('Finished Training') torch.save(focus_net.state_dict(),path+"weights_focus_0.pt") torch.save(classify.state_dict(),path+"weights_classify_0.pt") columns = ["epochs", "argmax > 0.5" ,"argmax < 0.5", "focus_true_pred_true", "focus_false_pred_true", "focus_true_pred_false", "focus_false_pred_false" ,"sparse_val"] df_train = pd.DataFrame() df_test = pd.DataFrame() len(col1),col9 plt.plot(np.arange(1,epoch+2),tr_loss) plt.xlabel("epochs", fontsize=14, fontweight = 'bold') plt.ylabel("Loss", fontsize=14, fontweight = 'bold') plt.title("Train Loss") plt.grid() plt.show() np.save("train_loss.npy",{"training_loss":tr_loss}) df_train[columns[0]] = col1 df_train[columns[1]] = col2 df_train[columns[2]] = col3 df_train[columns[3]] = col4 df_train[columns[4]] = col5 df_train[columns[5]] = col6 df_train[columns[6]] = col7 df_train[columns[7]] = col14 df_test[columns[0]] = col1 df_test[columns[1]] = col8 df_test[columns[2]] = col9 df_test[columns[3]] = col10 df_test[columns[4]] = col11 df_test[columns[5]] = col12 df_test[columns[6]] = col13 df_test[columns[7]] = col15 df_train df_train.to_csv(path+"_train.csv",index=False) # plt.figure(12,12) plt.plot(col1,col2, label='argmax > 0.5') plt.plot(col1,col3, label='argmax < 0.5') plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("training data") plt.title("On Training set") plt.show() plt.figure(figsize=(6,5)) plt.plot(col1,np.array(col4)/300, label ="FTPT ") plt.plot(col1,np.array(col5)/300, label ="FFPT ") plt.plot(col1,np.array(col6)/300, label ="FTPF ") plt.plot(col1,np.array(col7)/300, label ="FFPF ") plt.title("On Training set") #plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs", fontsize=14, fontweight = 'bold') 
plt.ylabel("percentage train data", fontsize=14, fontweight = 'bold') # plt.xlabel("epochs") # plt.ylabel("training data") plt.legend() plt.savefig(path + "_train.png",bbox_inches="tight") plt.savefig(path + "_train.pdf",bbox_inches="tight") plt.grid() plt.show() plt.figure(figsize=(6,5)) plt.plot(col1,np.array(col14)/30000, label ="sparsity_val") plt.title("On Training set") #plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs", fontsize=14, fontweight = 'bold') plt.ylabel("average sparsity value", fontsize=14, fontweight = 'bold') # plt.xlabel("epochs") # plt.ylabel("sparsity_value") plt.savefig(path + "sparsity_train.png",bbox_inches="tight") plt.savefig(path + "sparsity_train.pdf",bbox_inches="tight") plt.grid() plt.show() df_test df_test.to_csv(path+"_test.csv") # plt.figure(12,12) plt.plot(col1,col8, label='argmax > 0.5') plt.plot(col1,col9, label='argmax < 0.5') plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") plt.ylabel("Testing data") plt.title("On Testing set") plt.show() plt.figure(figsize=(6,5)) plt.plot(col1,np.array(col10)/100, label ="FTPT ") plt.plot(col1,np.array(col11)/100, label ="FFPT ") plt.plot(col1,np.array(col12)/100, label ="FTPF ") plt.plot(col1,np.array(col13)/100, label ="FFPF ") plt.title("On Testing set") #plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs", fontsize=14, fontweight = 'bold') plt.ylabel("percentage test data", fontsize=14, fontweight = 'bold') # plt.xlabel("epochs") # plt.ylabel("training data") plt.legend() #plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs") # plt.ylabel("Testing data") plt.savefig(path + "_test.png",bbox_inches="tight") plt.savefig(path + "_test.pdf",bbox_inches="tight") plt.grid() plt.show() plt.figure(figsize=(6,5)) plt.plot(col1,np.array(col15)/10000, label ="sparsity_val") plt.title("On Testing set") #plt.legend(loc='center left', bbox_to_anchor=(1, 0.5)) plt.xlabel("epochs", fontsize=14, fontweight = 'bold') plt.ylabel("average sparsity value", fontsize=14, fontweight = 'bold') plt.grid() plt.savefig(path + "sparsity_test.png",bbox_inches="tight") plt.savefig(path + "sparsity_test.pdf",bbox_inches="tight") plt.show() correct = 0 total = 0 count = 0 flag = 1 focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 sparse_val = 0 focus_net.eval() classify.eval() with torch.no_grad(): for data in train_loader: inputs, labels , fore_idx = data inputs = inputs.double() inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda") alphas, avg_images,_ = focus_net(inputs) outputs = classify(avg_images) _, predicted = torch.max(outputs.data, 1) sparse_val += torch.sum(torch.sum(alphas>0.01,dim=1)).item() for j in range(labels.size(0)): count += 1 focus = torch.argmax(alphas[j]) if alphas[j][focus] >= 0.5 : argmax_more_than_half += 1 else: argmax_less_than_half += 1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true += 1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false += 1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false += 1 total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 30000 train images: %f %%' % ( 100 * correct / total)) print("total correct", correct) print("total train set images", 
total) print("focus_true_pred_true %d =============> FTPT : %f %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) ) print("focus_false_pred_true %d =============> FFPT : %f %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) ) print("focus_true_pred_false %d =============> FTPF : %f %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) ) print("focus_false_pred_false %d =============> FFPF : %f %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) ) print("Sparsity_Value %d =============> AVG Sparsity : %f " % (sparse_val,(sparse_val)/total)) print("argmax_more_than_half ==================> ",argmax_more_than_half) print("argmax_less_than_half ==================> ",argmax_less_than_half) print(count) print("="*100) correct = 0 total = 0 count = 0 flag = 1 focus_true_pred_true =0 focus_false_pred_true =0 focus_true_pred_false =0 focus_false_pred_false =0 argmax_more_than_half = 0 argmax_less_than_half =0 sparse_val = 0 focus_net.eval() classify.eval() with torch.no_grad(): for data in test_loader: inputs, labels , fore_idx = data inputs = inputs.double() inputs, labels , fore_idx = inputs.to("cuda"),labels.to("cuda"), fore_idx.to("cuda") alphas, avg_images , _ = focus_net(inputs) outputs = classify(avg_images) _, predicted = torch.max(outputs.data, 1) sparse_val += torch.sum(torch.sum(alphas>0.01,dim=1)).item() for j in range(labels.size(0)): focus = torch.argmax(alphas[j]) if alphas[j][focus] >= 0.5 : argmax_more_than_half += 1 else: argmax_less_than_half += 1 if(focus == fore_idx[j] and predicted[j] == labels[j]): focus_true_pred_true += 1 elif(focus != fore_idx[j] and predicted[j] == labels[j]): focus_false_pred_true += 1 elif(focus == fore_idx[j] and predicted[j] != labels[j]): focus_true_pred_false += 1 elif(focus != fore_idx[j] and predicted[j] != labels[j]): focus_false_pred_false += 1 total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %f %%' % ( 100 * correct / total)) print("total correct", correct) print("total test set images", total) print("focus_true_pred_true %d =============> FTPT : %f %%" % (focus_true_pred_true , (100 * focus_true_pred_true / total) ) ) print("focus_false_pred_true %d =============> FFPT : %f %%" % (focus_false_pred_true, (100 * focus_false_pred_true / total) ) ) print("focus_true_pred_false %d =============> FTPF : %f %%" %( focus_true_pred_false , ( 100 * focus_true_pred_false / total) ) ) print("focus_false_pred_false %d =============> FFPF : %f %%" % (focus_false_pred_false, ( 100 * focus_false_pred_false / total) ) ) print("Sparsity_Value %d =============> AVG Sparsity : %f " % (sparse_val,(sparse_val)/total)) print("argmax_more_than_half ==================> ",argmax_more_than_half) print("argmax_less_than_half ==================> ",argmax_less_than_half) correct = 0 total = 0 focus_net.eval() classify.eval() with torch.no_grad(): for data in train_loader: inputs, labels , fore_idx = data inputs = inputs.double() inputs, labels = inputs.to("cuda"), labels.to("cuda") alphas, avg_images,_ = focus_net(inputs) outputs = classify(avg_images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 30000 train images: %f %%' % ( 100 * correct / total)) print("total correct", correct) print("total train set images", total) correct = 0 total = 0 focus_net.eval() classify.eval() with torch.no_grad(): for data in 
test_loader: inputs, labels , fore_idx = data inputs = inputs.double() inputs, labels = inputs.to("cuda"), labels.to("cuda") alphas, avg_images,_ = focus_net(inputs) outputs = classify(avg_images) _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the 10000 test images: %f %%' % ( 100 * correct / total)) print("total correct", correct) print("total test set images", total) ```
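The four evaluation loops above repeat almost identical bookkeeping. As a purely optional refactoring sketch (not part of the original notebook; it assumes the `focus_net`, `classify`, `train_loader` and `test_loader` objects defined earlier, and that every batch yields `(inputs, labels, fore_idx)`), the counting logic could be collected into one helper:

```
import torch


def evaluate(loader, focus_net, classify, device="cuda"):
    """Run one pass over `loader` and collect the FTPT/FFPT/FTPF/FFPF statistics.

    Returns a dict with the same quantities printed by the loops above.
    """
    stats = {"correct": 0, "total": 0, "sparse_val": 0,
             "argmax_more_than_half": 0, "argmax_less_than_half": 0,
             "FTPT": 0, "FFPT": 0, "FTPF": 0, "FFPF": 0}
    focus_net.eval()
    classify.eval()
    with torch.no_grad():
        for inputs, labels, fore_idx in loader:
            inputs = inputs.double().to(device)
            labels, fore_idx = labels.to(device), fore_idx.to(device)
            alphas, avg_images, _ = focus_net(inputs)
            outputs = classify(avg_images)
            _, predicted = torch.max(outputs.data, 1)
            stats["sparse_val"] += torch.sum(alphas > 0.01).item()
            # Winning (focused) patch per example and its attention value.
            focus = torch.argmax(alphas, dim=1)
            max_alpha = alphas[torch.arange(labels.size(0), device=alphas.device), focus]
            stats["argmax_more_than_half"] += (max_alpha >= 0.5).sum().item()
            stats["argmax_less_than_half"] += (max_alpha < 0.5).sum().item()
            focus_ok = focus == fore_idx
            pred_ok = predicted == labels
            stats["FTPT"] += (focus_ok & pred_ok).sum().item()
            stats["FFPT"] += (~focus_ok & pred_ok).sum().item()
            stats["FTPF"] += (focus_ok & ~pred_ok).sum().item()
            stats["FFPF"] += (~focus_ok & ~pred_ok).sum().item()
            stats["total"] += labels.size(0)
            stats["correct"] += pred_ok.sum().item()
    return stats


train_stats = evaluate(train_loader, focus_net, classify)
test_stats = evaluate(test_loader, focus_net, classify)
```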
``` # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. from __future__ import print_function import numpy as np import tensorflow as tf from six.moves import cPickle as pickle from six.moves import range # The folder when dumped big 3D array has been stored from previous excercise data_root = 'D:\\1_Workspaces\\UNDER_VCS\\github\\1_ML_NN\\python_with_math\\data' #a big 3D array to a big file. pickle_file = 'notMNIST.pickle' with open(data_root + '\\' + pickle_file, 'rb') as f: save = pickle.load(f) train_dataset = save['train_dataset'] train_labels = save['train_labels'] valid_dataset = save['valid_dataset'] valid_labels = save['valid_labels'] test_dataset = save['test_dataset'] test_labels = save['test_labels'] del save # hint to help gc free up memory print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) image_size = 28 num_labels = 10 def reformat(dataset, labels): dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32) # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...] labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) return dataset, labels train_dataset, train_labels = reformat(train_dataset, train_labels) valid_dataset, valid_labels = reformat(valid_dataset, valid_labels) test_dataset, test_labels = reformat(test_dataset, test_labels) print('Training set', train_dataset.shape, train_labels.shape) print('Validation set', valid_dataset.shape, valid_labels.shape) print('Test set', test_dataset.shape, test_labels.shape) # It loads all the data into TensorFlow and build the computation graph corresponding to our training: # With gradient descent training, even this much data is prohibitive. # Subset the training data for faster turnaround. train_subset = 10000 graph = tf.Graph() with graph.as_default(): # Input data. # Load the training, validation and test data into constants that are # attached to the graph. tf_train_dataset = tf.constant(train_dataset[:train_subset, :]) tf_train_labels = tf.constant(train_labels[:train_subset]) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables. # These are the parameters that we are going to be training. The weight # matrix will be initialized using random values following a (truncated) # normal distribution. The biases get initialized to zero. weights = tf.Variable( tf.truncated_normal([image_size * image_size, num_labels])) biases = tf.Variable(tf.zeros([num_labels])) # Training computation. # We multiply the inputs with the weight matrix, and add biases. We compute # the softmax and cross-entropy (it's one operation in TensorFlow, because # it's very common, and it can be optimized). We take the average of this # cross-entropy across all training examples: that's our loss. logits = tf.matmul(tf_train_dataset, weights) + biases loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits)) # Optimizer. # We are going to find the minimum of this loss using gradient descent. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. # These are not part of training, but merely here so that we can report # accuracy figures as we train. 
train_prediction = tf.nn.softmax(logits) valid_prediction = tf.nn.softmax( tf.matmul(tf_valid_dataset, weights) + biases) test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases) num_steps = 10000 #why 801? def accuracy(predictions, labels): return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) / predictions.shape[0]) #it performs the train with tf.Session(graph=graph) as session: # This is a one-time operation which ensures the parameters get initialized as # we described in the graph: random weights for the matrix, zeros for the # biases. tf.global_variables_initializer().run() print('Tensorflow graph initialized') for step in range(num_steps): # Run the computations. We tell .run() that we want to run the optimizer, # and get the loss value and the training predictions returned as numpy # arrays. _, l, predictions = session.run([optimizer, loss, train_prediction]) if (step % 100 == 0): print('Loss at step %d: %f' % (step, l)) print('Training accuracy: %.1f%%' % accuracy( predictions, train_labels[:train_subset, :])) # Calling .eval() on valid_prediction is basically like calling run(), but # just to get that one numpy array. Note that it recomputes all its graph # dependencies. print('Validation accuracy: %.1f%%' % accuracy( valid_prediction.eval(), valid_labels)) print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels)) #TODO plot graph from accuracy data # Let's now switch to stochastic gradient descent training instead, which is much faster. batch_size = 128 graph = tf.Graph() with graph.as_default(): # Input data. For the training data, we use a placeholder that will be fed # at run time with a training minibatch. tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size)) tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels)) tf_valid_dataset = tf.constant(valid_dataset) tf_test_dataset = tf.constant(test_dataset) # Variables. weights = tf.Variable( tf.truncated_normal([image_size * image_size, num_labels])) biases = tf.Variable(tf.zeros([num_labels])) # Training computation. logits = tf.matmul(tf_train_dataset, weights) + biases loss = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits)) # Optimizer. optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss) # Predictions for the training, validation, and test data. train_prediction = tf.nn.softmax(logits) valid_prediction = tf.nn.softmax( tf.matmul(tf_valid_dataset, weights) + biases) test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases) num_steps = 10000 with tf.Session(graph=graph) as session: tf.global_variables_initializer().run() print("Initialized") for step in range(num_steps): # Pick an offset within the training data, which has been randomized. # Note: we could use better randomization across epochs. offset = (step * batch_size) % (train_labels.shape[0] - batch_size) # Generate a minibatch. batch_data = train_dataset[offset:(offset + batch_size), :] batch_labels = train_labels[offset:(offset + batch_size), :] # Prepare a dictionary telling the session where to feed the minibatch. # The key of the dictionary is the placeholder node of the graph to be fed, # and the value is the numpy array to feed to it. 
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels} _, l, predictions = session.run( [optimizer, loss, train_prediction], feed_dict=feed_dict) if (step % 500 == 0): print("Minibatch loss at step %d: %f" % (step, l)) print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels)) print("Validation accuracy: %.1f%%" % accuracy( valid_prediction.eval(), valid_labels)) print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels)) #TODO measure time # Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units nn.relu() # and 1024 hidden nodes. This model should improve your validation / test accuracy. #Do TODOs ```
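The final TODO above asks for a 1-hidden-layer network with ReLU units and 1024 hidden nodes. The block below is only a sketch of how that graph might be assembled, not the official solution; it reuses `batch_size`, `image_size`, `num_labels` and the datasets already defined in this notebook, and the hidden-layer size comes straight from the exercise statement. The minibatch training loop from the previous cell can then be reused unchanged.

```
num_hidden = 1024  # from the exercise statement

graph = tf.Graph()
with graph.as_default():
    # Input data: placeholders for the minibatch, constants for validation/test.
    tf_train_dataset = tf.placeholder(tf.float32,
                                      shape=(batch_size, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Two weight matrices and two bias vectors: input -> hidden and hidden -> output.
    weights_1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden]))
    biases_1 = tf.Variable(tf.zeros([num_hidden]))
    weights_2 = tf.Variable(tf.truncated_normal([num_hidden, num_labels]))
    biases_2 = tf.Variable(tf.zeros([num_labels]))

    # Training computation with a ReLU non-linearity on the hidden layer.
    hidden = tf.nn.relu(tf.matmul(tf_train_dataset, weights_1) + biases_1)
    logits = tf.matmul(hidden, weights_2) + biases_2
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))

    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # Predictions: apply the same two-layer computation to every dataset.
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(
        tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights_1) + biases_1),
                  weights_2) + biases_2)
    test_prediction = tf.nn.softmax(
        tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights_1) + biases_1),
                  weights_2) + biases_2)
```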
## Fitting a diagonal covariance Gaussian mixture model to text data In a previous assignment, we explored k-means clustering for a high-dimensional Wikipedia dataset. We can also model this data with a mixture of Gaussians, though with increasing dimension we run into two important issues associated with using a full covariance matrix for each component. * Computational cost becomes prohibitive in high dimensions: score calculations have complexity cubic in the number of dimensions M if the Gaussian has a full covariance matrix. * A model with many parameters require more data: observe that a full covariance matrix for an M-dimensional Gaussian will have M(M+1)/2 parameters to fit. With the number of parameters growing roughly as the square of the dimension, it may quickly become impossible to find a sufficient amount of data to make good inferences. Both of these issues are avoided if we require the covariance matrix of each component to be diagonal, as then it has only M parameters to fit and the score computation decomposes into M univariate score calculations. Recall from the lecture that the M-step for the full covariance is: \begin{align*} \hat{\Sigma}_k &= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_i-\hat{\mu}_k)(x_i - \hat{\mu}_k)^T \end{align*} Note that this is a square matrix with M rows and M columns, and the above equation implies that the (v, w) element is computed by \begin{align*} \hat{\Sigma}_{k, v, w} &= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_{iv}-\hat{\mu}_{kv})(x_{iw} - \hat{\mu}_{kw}) \end{align*} When we assume that this is a diagonal matrix, then non-diagonal elements are assumed to be zero and we only need to compute each of the M elements along the diagonal independently using the following equation. \begin{align*} \hat{\sigma}^2_{k, v} &= \hat{\Sigma}_{k, v, v} \\ &= \frac{1}{N_k^{soft}} \sum_{i=1}^N r_{ik} (x_{iv}-\hat{\mu}_{kv})^2 \end{align*} In this section, we will use an EM implementation to fit a Gaussian mixture model with **diagonal** covariances to a subset of the Wikipedia dataset. The implementation uses the above equation to compute each variance term. We'll begin by importing the dataset and coming up with a useful representation for each article. After running our algorithm on the data, we will explore the output to see whether we can give a meaningful interpretation to the fitted parameters in our model. **Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook. ## Import necessary packages ``` from __future__ import print_function # to conform python 2.x print to python 3.x import turicreate ``` We also have a Python file containing implementations for several functions that will be used during the course of this assignment. ``` from em_utilities import * ``` ## Load Wikipedia data and extract TF-IDF features Load Wikipedia data and transform each of the first 5000 document into a TF-IDF representation. ``` wiki = turicreate.SFrame('people_wiki.sframe/').head(5000) wiki['tf_idf'] = turicreate.text_analytics.tf_idf(wiki['text']) ``` Using a utility we provide, we will create a sparse matrix representation of the documents. This is the same utility function you used during the previous assignment on k-means with text data. 
``` wiki = wiki.add_row_number() tf_idf, map_word_to_index = sframe_to_scipy(wiki, 'tf_idf') map_index_to_word = dict([[map_word_to_index[i], i] for i in map_word_to_index.keys()]) ``` As in the previous assignment, we will normalize each document's TF-IDF vector to be a unit vector. ``` %%time tf_idf = normalize(tf_idf) ``` We can check that the length (Euclidean norm) of each row is now 1.0, as expected. ``` for i in range(5): doc = tf_idf[i] print(np.linalg.norm(doc.todense())) ``` ## EM in high dimensions EM for high-dimensional data requires some special treatment: * E step and M step must be vectorized as much as possible, as explicit loops are dreadfully slow in Python. * All operations must be cast in terms of sparse matrix operations, to take advantage of computational savings enabled by sparsity of data. * Initially, some words may be entirely absent from a cluster, causing the M step to produce zero mean and variance for those words. This means any data point with one of those words will have 0 probability of being assigned to that cluster since the cluster allows for no variability (0 variance) around that count being 0 (0 mean). Since there is a small chance for those words to later appear in the cluster, we instead assign a small positive variance (~1e-10). Doing so also prevents numerical overflow. We provide the complete implementation for you in the file `em_utilities.py`. For those who are interested, you can read through the code to see how the sparse matrix implementation differs from the previous assignment. You are expected to answer some quiz questions using the results of clustering. **Initializing mean parameters using k-means** Recall from the lectures that EM for Gaussian mixtures is very sensitive to the choice of initial means. With a bad initial set of means, EM may produce clusters that span a large area and are mostly overlapping. To eliminate such bad outcomes, we first produce a suitable set of initial means by using the cluster centers from running k-means. That is, we first run k-means and then take the final set of means from the converged solution as the initial means in our EM algorithm. ``` %%time from sklearn.cluster import KMeans np.random.seed(5) num_clusters = 25 # Use scikit-learn's k-means to simplify workflow #kmeans_model = KMeans(n_clusters=num_clusters, n_init=5, max_iter=400, random_state=1, n_jobs=-1) # uncomment to use parallelism -- may break on your installation kmeans_model = KMeans(n_clusters=num_clusters, n_init=5, max_iter=400, random_state=1, n_jobs=1) kmeans_model.fit(tf_idf) centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_ means = [centroid for centroid in centroids] ``` **Initializing cluster weights** We will initialize each cluster weight to be the proportion of documents assigned to that cluster by k-means above. ``` %%time num_docs = tf_idf.shape[0] weights = [] for i in range(num_clusters): # Compute the number of data points assigned to cluster i: num_assigned = np.sum(cluster_assignment == i) # YOUR CODE HERE w = float(num_assigned) / num_docs weights.append(w) np.sum(cluster_assignment == 1) ``` **Initializing covariances** To initialize our covariance parameters, we compute $\hat{\sigma}_{k, j}^2 = \sum_{i=1}^{N}(x_{i,j} - \hat{\mu}_{k, j})^2$ for each feature $j$. For features with really tiny variances, we assign 1e-8 instead to prevent numerical instability. We do this computation in a vectorized fashion in the following code block. 
``` covs = [] for i in range(num_clusters): member_rows = tf_idf[cluster_assignment==i] cov = (member_rows.multiply(member_rows) - 2*member_rows.dot(diag(means[i]))).sum(axis=0).A1 / member_rows.shape[0] \ + means[i]**2 cov[cov < 1e-8] = 1e-8 covs.append(cov) ``` **Running EM** Now that we have initialized all of our parameters, run EM. ``` out = EM_for_high_dimension(tf_idf, means, covs, weights, cov_smoothing=1e-10) out['loglik'] ``` ## Interpret clustering results In contrast to k-means, EM is able to explicitly model clusters of varying sizes and proportions. The relative magnitude of variances in the word dimensions tell us much about the nature of the clusters. Write yourself a cluster visualizer as follows. Examining each cluster's mean vector, list the 5 words with the largest mean values (5 most common words in the cluster). For each word, also include the associated variance parameter (diagonal element of the covariance matrix). A sample output may be: ``` ========================================================== Cluster 0: Largest mean parameters in cluster Word Mean Variance football 1.08e-01 8.64e-03 season 5.80e-02 2.93e-03 club 4.48e-02 1.99e-03 league 3.94e-02 1.08e-03 played 3.83e-02 8.45e-04 ... ``` ``` # Fill in the blanks def visualize_EM_clusters(tf_idf, means, covs, map_index_to_word): print('') print('==========================================================') num_clusters = len(means) for c in range(num_clusters): print('Cluster {0:d}: Largest mean parameters in cluster '.format(c)) print('\n{0: <12}{1: <12}{2: <12}'.format('Word', 'Mean', 'Variance')) # The k'th element of sorted_word_ids should be the index of the word # that has the k'th-largest value in the cluster mean. Hint: Use np.argsort(). sorted_word_ids = np.argsort(means[c])[::-1] # YOUR CODE HERE for i in sorted_word_ids[:5]: print('{0: <12}{1:<10.2e}{2:10.2e}'.format(map_index_to_word[i], means[c][i], covs[c][i])) print('\n==========================================================') '''By EM''' visualize_EM_clusters(tf_idf, out['means'], out['covs'], map_index_to_word) ``` **Quiz Question**. Select all the topics that have a cluster in the model created above. [multiple choice] - Baseball - Basketball - Soccer/Football - Music - Politics - Law - Finance ## Comparing to random initialization Create variables for randomly initializing the EM algorithm. Complete the following code block. ``` np.random.seed(5) # See the note below to see why we set seed=5. num_clusters = len(means) num_docs, num_words = tf_idf.shape random_means = [] random_covs = [] random_weights = [] for k in range(num_clusters): # Create a numpy array of length num_words with random normally distributed values. # Use the standard univariate normal distribution (mean 0, variance 1). # YOUR CODE HERE mean = np.random.normal(0, 1, size=num_words) # Create a numpy array of length num_words with random values uniformly distributed between 1 and 5. # YOUR CODE HERE cov = np.random.uniform(1, 5, size=(num_words)) # Initially give each cluster equal weight. # YOUR CODE HERE weight = 1 random_means.append(mean) random_covs.append(cov) random_weights.append(weight) ``` **Quiz Question**: Try fitting EM with the random initial parameters you created above. (Use `cov_smoothing=1e-5`.) Store the result to `out_random_init`. What is the final loglikelihood that the algorithm converges to? 
``` out_random_init = EM_for_high_dimension(tf_idf, random_means, random_covs, random_weights, cov_smoothing=1e-5) print("{:e}".format(out_random_init['loglik'][-1])) ``` **Quiz Question:** Is the final loglikelihood larger or smaller than the final loglikelihood we obtained above when initializing EM with the results from running k-means? ``` out['loglik'] ``` **Quiz Question**: For the above model, `out_random_init`, use the `visualize_EM_clusters` method you created above. Are the clusters more or less interpretable than the ones found after initializing using k-means? ``` # YOUR CODE HERE. Use visualize_EM_clusters, which will require you to pass in tf_idf and map_index_to_word. visualize_EM_clusters(tf_idf, out_random_init['means'], out_random_init['covs'], map_index_to_word) ``` **Note**: Random initialization may sometimes produce a superior fit than k-means initialization. We do not claim that random initialization is always worse. However, this section does illustrate that random initialization often produces much worse clustering than k-means counterpart. This is the reason why we provide the particular random seed (`np.random.seed(5)`). ## Takeaway In this assignment we were able to apply the EM algorithm to a mixture of Gaussians model of text data. This was made possible by modifying the model to assume a diagonal covariance for each cluster, and by modifying the implementation to use a sparse matrix representation. In the second part you explored the role of k-means initialization on the convergence of the model as well as the interpretability of the clusters.
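For readers who want to see the diagonal M-step from the introduction written out as plain code, here is a small dense-NumPy sketch. It is only an illustration of the formula $\hat{\sigma}^2_{k, v} = \frac{1}{N_k^{soft}} \sum_i r_{ik} (x_{iv}-\hat{\mu}_{kv})^2$; the provided `em_utilities.py` implementation (which this notebook actually uses) works on sparse matrices instead. The names `X`, `resp` and `means` are illustrative placeholders, not objects defined above.

```
import numpy as np


def m_step_diagonal_variances(X, resp, means, min_variance=1e-8):
    """Diagonal M-step: sigma^2_{k,v} = sum_i r_ik (x_iv - mu_kv)^2 / N_k^soft.

    X     : (N, M) dense data array           (illustrative placeholder)
    resp  : (N, K) responsibility matrix r_ik (illustrative placeholder)
    means : (K, M) current cluster means      (illustrative placeholder)
    """
    n_clusters = resp.shape[1]
    variances = np.empty((n_clusters, X.shape[1]))
    for k in range(n_clusters):
        weights = resp[:, k]                # soft assignments r_ik
        n_k = weights.sum()                 # N_k^soft
        diff_sq = (X - means[k]) ** 2       # (x_iv - mu_kv)^2
        variances[k] = weights @ diff_sq / n_k
    # Guard against numerically tiny variances, as discussed above.
    return np.maximum(variances, min_variance)
```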
# Example notebook for the functions contained in cry_file_readwrite.py ### Crystal_input class ``` from crystal_functions.file_readwrite import Crystal_input ``` #### Create a crystal input object from blocks ``` geom_block = ['MGO BULK - GEOMETRY TEST\n', 'CRYSTAL\n', '0 0 0\n', '225\n', '4.217\n', '2\n', '12 0. 0. 0.\n', '8 0.5 0.5 0.5\n', 'END\n'] bs_block = ['BASISSET\n','POB-DZVP\n'] func_block = ['DFT\n', 'B3LYP\n', 'XXLGRID\n', 'ENDDFT\n'] scf_block = [['TOLINTEG\n', '7 7 7 7 14\n'], ['SHRINK\n', '12 24\n'], ['MAXCYCLE\n', '200\n'], ['FMIXING\n', '70\n'], 'DIIS\n', 'ENDSCF\n'] mgo_input = Crystal_input().from_blocks(geom_block,bs_block,func_block,scf_block) mgo_input.geom_block ``` #### Create a crystal input object from an existing input file ``` mgo_input = Crystal_input().from_file('data/mgo.d12') mgo_input.geom_block ``` ### Crystal_output class ``` from crystal_functions.file_readwrite import Crystal_output ``` #### 3D system ``` mgo_output = Crystal_output().read_cry_output('data/mgo_optgeom.out') mgo_output ``` #### Functions and properties ``` #Final energy print("Final energy = %s eV \n" % mgo_output.get_final_energy()) #Fermi energy print("Fermi energy = %s eV \n" % mgo_output.get_fermi_energy()) #Primitive lattice print("Primitive lattice \n %s \n" % mgo_output.get_primitive_lattice()) #Reciprocal lattice print("Reciprocal lattice \n %s \n" % mgo_output.get_reciprocal_lattice()) #Band gap print("Band gap = %s eV \n" % mgo_output.get_band_gap()) #Last geometry print("Last geometry = \n %s \n" % mgo_output.get_last_geom()) #Symmetry operators print("Symmetry operators = \n %s \n" % mgo_output.get_symm_ops()) #Forces print("Forces on cell = \n %s \n Forces on atoms = \n %s \n Gradient = \n %s \n" % (mgo_output.get_forces(grad=True)[0], mgo_output.get_forces(grad=True)[1], mgo_output.grad)) #Scf convergence print("Total energy = \n %s \n Delta energy = \n %s \n" % (mgo_output.get_scf_convergence()[0], mgo_output.get_scf_convergence()[1])) ``` #### 0D system ``` co_output = Crystal_output().read_cry_output('data/co.out') #Final energy print("Final energy = %s eV \n" % co_output.get_final_energy()) #Fermi energy print("Fermi energy = %s eV \n" % co_output.get_fermi_energy()) #Primitive lattice print("Primitive lattice \n %s \n" % co_output.get_primitive_lattice()) #Reciprocal lattice print("Reciprocal lattice \n %s \n" % co_output.get_reciprocal_lattice()) #Band gap print("Band gap = %s eV \n" % co_output.get_band_gap()) #Last geometry print("Last geometry = \n %s \n" % co_output.get_last_geom()) #Symmetry operators print("Symmetry operators = \n %s \n" % co_output.get_symm_ops()) #Forces print("Forces on cell = \n %s \n Forces on atoms = \n %s \n Gradient = \n %s \n" % (co_output.get_forces(grad=True)[0], co_output.get_forces(grad=True)[1], co_output.grad)) #Scf convergence print("Total energy = \n %s \n Delta energy = \n %s \n" % (co_output.get_scf_convergence()[0], co_output.get_scf_convergence()[1])) ``` ### Properties_output class ``` from crystal_functions.file_readwrite import Properties_output ``` #### Bands ``` mgo_bands = Properties_output() mgo_bands.read_cry_bands('../examples/data/mgo_BAND_dat.BAND') mgo_bands ``` #### Doss ``` mgo_doss = Properties_output() mgo_doss = mgo_doss.read_cry_doss('data/mgo_DOSS_dat.DOSS') mgo_doss ``` ### write_cry_input function ``` from crystal_functions.file_readwrite import Crystal_input from crystal_functions.file_readwrite import write_crystal_input ``` #### Use existing input file ``` #Read the original input mgo_original_input 
= Crystal_input().from_file('data/mgo.d12') #Write the input new_input_name = 'data/mgo_from_file.d12' write_crystal_input(new_input_name,crystal_input=mgo_original_input) ``` #### Build from blocks ``` #Define the blocks geom_block = ['MGO BULK - GEOMETRY TEST\n', 'CRYSTAL\n', '0 0 0\n', '225\n', '4.217\n', '2\n', '12 0. 0. 0.\n', '8 0.5 0.5 0.5\n', 'END\n'] bs_block = ['12 4\n', '0 0 8 2.0 1.0\n', ' 68370.0 0.0002226\n', ' 9661.0 0.001901\n', ' 2041.0 0.011042\n', ' 529.6 0.05005\n', ' 159.17 0.1690\n', ' 54.71 0.36695\n', ' 21.236 0.4008\n', ' 8.791 0.1487\n', '0 1 5 8.0 1.0\n', ' 143.7 -0.00671 0.00807\n', ' 31.27 -0.07927 0.06401\n', ' 9.661 -0.08088 0.2092\n', ' 3.726 0.2947 0.3460\n', ' 1.598 0.5714 0.3731\n', '0 1 1 2.0 1.0\n', ' 0.688 1.0 1.0\n', '0 1 1 0.0 1.0\n', ' 0.28 1.0 1.0\n', '8 4\n', '0 0 8 2.0 1.0\n', ' 8020.0 0.00108\n', ' 1338.0 0.00804\n', ' 255.4 0.05324\n', ' 69.22 0.1681\n', ' 23.90 0.3581\n', ' 9.264 0.3855\n', ' 3.851 0.1468\n', ' 1.212 0.0728\n', '0 1 4 6.0 1.0\n', ' 49.43 -0.00883 0.00958\n', ' 10.47 -0.0915 0.0696\n', ' 3.235 -0.0402 0.2065\n', ' 1.217 0.379 0.347\n', '0 1 1 0.0 1.0\n', ' 0.4764 1.0 1.0\n', '0 1 1 0.0 1.0\n', ' 0.1802 1.0 1.0\n', '99 0\n', 'ENDBS\n'] func_block = ['DFT\n', 'B3LYP\n', 'XXLGRID\n', 'ENDDFT\n'] scf_block = [['TOLINTEG\n', '7 7 7 7 14\n'], ['SHRINK\n', '12 24\n'], ['MAXCYCLE\n', '200\n'], ['FMIXING\n', '70\n'], 'DIIS\n', 'ENDSCF\n'] #Write the input object mgo_from_blocks = Crystal_input().from_blocks(geom_block,bs_block,func_block,scf_block) new_input_name = 'data/mgo_from_blocks.d12' write_crystal_input(new_input_name,mgo_from_blocks) #Write input for external object (ASE or pymatgen) #Start from original inout mgo_original_input = Crystal_input().from_file('data/mgo.d12') #Generate the external object from pymatgen.core import Structure, Lattice from pymatgen.symmetry.analyzer import SpacegroupAnalyzer substrate = Structure.from_spacegroup("Fm-3m", Lattice.cubic(4.217), ["Mg","O"], [[0, 0, 0],[0.5,0.5,0.5]]) substrate_primitive = SpacegroupAnalyzer(substrate).get_primitive_standard_structure() #Write the input new_input_name = 'data/mgo_external_obj.d12' write_crystal_input(new_input_name,crystal_input=mgo_original_input,external_obj=substrate_primitive) ``` ### write_cry_properties function #### bands from k points coordinates ``` from crystal_functions.file_readwrite import Properties_input from crystal_functions.file_readwrite import write_properties_input #Create the bands input object bands_input = Properties_input() #Add the newk block to the input object bands_input.make_newk_block(12,24) #Prepare the band_block k_path = [[0,0,0],[0.5,0,0],[0.5,0.5,0.5],[0.25,0,0.5]] n_kpoints = 200 first_band = 1 last_band = 26 bands_input.make_bands_block(k_path,n_kpoints,first_band,last_band) #Write the input write_properties_input('data/bands_input_1.d3',bands_input) bands_input.property_block ``` #### bands from pymatgen HighSymmKpath object ``` from pymatgen.symmetry.bandstructure import HighSymmKpath from pymatgen.symmetry.analyzer import SpacegroupAnalyzer from crystal_functions.file_readwrite import write_properties_input from crystal_functions.file_readwrite import Crystal_output from crystal_functions.file_readwrite import Properties_input from crystal_functions.convert import cry_out2pmg #Create the bands input object bands_input = Properties_input() #Add the newk block to the input object bands_input.make_newk_block(12,24) #Read the structure mgo = Crystal_output().read_cry_output('data/mgo.out') mgo = cry_out2pmg(mgo) 
mgo_prim = SpacegroupAnalyzer(mgo).get_primitive_standard_structure(international_monoclinic=False) #Obtain the k path object k_path = HighSymmKpath(mgo_prim) n_kpoints = 200 first_band = 1 last_band = 26 bands_input.make_bands_block(k_path,n_kpoints,first_band,last_band) #Write the input write_properties_input('data/bands_input_2.d3',bands_input) bands_input.property_block ``` #### doss ``` from crystal_functions.file_readwrite import write_properties_input from crystal_functions.file_readwrite import Properties_input #Create the doss input object doss_input = Properties_input() #Add the newk block to the input object doss_input.make_newk_block(12,24) #Prepare the doss_block doss_input.make_doss_block(n_points=200,e_range=[-5,15],plotting_option=2,poly=12,print_option=1) #Write the input write_properties_input('data/doss_input.d3',doss_input) !cat data/doss_input.d3 ``` #### pdoss (atoms) ``` from crystal_functions.file_readwrite import write_properties_input from crystal_functions.file_readwrite import Properties_input #Create the doss input object pdoss_input = Properties_input() #Add the newk block to the input object pdoss_input.make_newk_block(12,24) #Prepare the pdoss_block projections = [[1],[2]] pdoss_input.make_pdoss_block(projections,proj_type='atom',n_points=200,band_range=[1,26], plotting_option=2,poly=12,print_option=1) #Write the input write_properties_input('data/pdoss_input.d3',pdoss_input) !cat data/pdoss_input.d3 ``` #### pdoss (ao) ``` from crystal_functions.file_readwrite import write_properties_input from crystal_functions.file_readwrite import Properties_input #Create the doss input object pdoss_input = Properties_input() #Add the newk block to the input object pdoss_input.make_newk_block(12,24) #Prepare the pdoss_block projections = [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]] pdoss_input.make_pdoss_block(projections,proj_type='ao',n_points=200,band_range=[1,26], plotting_option=2,poly=12,print_option=1) #Write the input write_properties_input('data/pdoss_input_ao.d3',pdoss_input) !cat data/pdoss_input_ao.d3 ``` ### write_cry_gui function #### from pymatgen structure #### bulk (3D) ``` from crystal_functions.file_readwrite import Crystal_gui, write_crystal_gui from crystal_functions.convert import cry_pmg2gui from pymatgen.core import Structure, Lattice #Generate bulk structure substrate = Structure.from_spacegroup("Fm-3m", Lattice.cubic(4.217), ["Mg",'O'], [[0, 0, 0],[0.5,0.5,0.5]]) #The dimensionality parameter is optional. The function will recognise it is a pymatgen bulk object #Create the gui object by converting the pymatgen structure mgo_gui = cry_pmg2gui(substrate,dimensionality=2) write_crystal_gui('data/mgo_write_gui.gui',mgo_gui) ! cat data/mgo_write_gui.gui ``` #### slab (2D) ``` import sys sys.path.insert(1,'../crystal_functions/') from file_readwrite import * from convert import * #from crystal_functions.file_readwrite import write_crystal_gui from pymatgen.core import Structure, Lattice from pymatgen.core.surface import SlabGenerator #Generate bulk structure substrate = Structure.from_spacegroup("Fm-3m", Lattice.cubic(4.217), ["Mg",'O'], [[0, 0, 0],[0.5,0.5,0.5]]) #Generate slab substrate = SlabGenerator(substrate, (1,0,0), 5., 10., center_slab=True).get_slab() #The dimensionality parameter is optional. mgo_slab_gui = cry_pmg2gui(substrate, dimensionality=2) write_crystal_gui('data/mgo_100_write_gui.gui',mgo_slab_gui) ! cat data/mgo_100_write_gui.gui ```
# Text Data Explanation Benchmarking: Emotion Multiclass Classification This notebook demonstrates how to use the benchmark utility to benchmark the performance of an explainer for text data. In this demo, we showcase explanation performance for partition explainer on an Emotion Multiclass Classification model. The metrics used to evaluate are "keep positive" and "keep negative". The masker used is Text Masker. The new benchmark utility uses the new API with MaskedModel as wrapper around user-imported model and evaluates masked values of inputs. ``` import copy import pandas as pd import numpy as np import matplotlib.pyplot as plt from transformers import AutoTokenizer, AutoModelForSequenceClassification import shap.benchmark as benchmark import shap import scipy as sp import nlp import torch pd.set_option('display.max_columns', None) pd.set_option('display.max_rows', None) pd.set_option('max_colwidth', None) ``` ### Load Data and Model ``` train, test = nlp.load_dataset("emotion", split = ["train", "test"]) data={'text':train['text'], 'emotion':train['label']} data = pd.DataFrame(data) tokenizer = AutoTokenizer.from_pretrained("nateraw/bert-base-uncased-emotion",use_fast=True) model = AutoModelForSequenceClassification.from_pretrained("nateraw/bert-base-uncased-emotion") ``` ### Class Label Mapping ``` # set mapping between label and id id2label = model.config.id2label label2id = model.config.label2id labels = sorted(label2id, key=label2id.get) ``` ### Define Score Function ``` def f(x): tv = torch.tensor([tokenizer.encode(v, padding='max_length', max_length=128,truncation=True) for v in x]) attention_mask = (tv!=0).type(torch.int64) outputs = model(tv,attention_mask=attention_mask)[0].detach().numpy() scores = (np.exp(outputs).T / np.exp(outputs).sum(-1)).T val = sp.special.logit(scores) return val ``` ### Create Explainer Object ``` explainer = shap.Explainer(f,tokenizer,output_names=labels) ``` ### Run SHAP Explanation ``` shap_values = explainer(data['text'][0:20]) ``` ### Define Metrics (Sort Order & Perturbation Method) ``` sort_order = 'positive' perturbation = 'keep' ``` ### Benchmark Explainer ``` sequential_perturbation = benchmark.perturbation.SequentialPerturbation(explainer.model, explainer.masker, sort_order, perturbation) xs, ys, auc = sequential_perturbation.model_score(shap_values, data['text'][0:20]) sequential_perturbation.plot(xs, ys, auc) sort_order = 'negative' perturbation = 'keep' sequential_perturbation = benchmark.perturbation.SequentialPerturbation(explainer.model, explainer.masker, sort_order, perturbation) xs, ys, auc = sequential_perturbation.model_score(shap_values, data['text'][0:20]) sequential_perturbation.plot(xs, ys, auc) ```
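Beyond the aggregate keep-positive / keep-negative curves, it is often helpful to eyeball a few individual explanations. As an optional extra (assuming the installed `shap` version provides the text plotting helper), the token-level attributions for the first benchmarked sentence can be rendered with:

```
# Show how each token pushes the logit of every emotion class for one example.
shap.plots.text(shap_values[0])
```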
# Self-Organizing Maps (SOM)

<img src="../img/som_1.jpg" width="500">

We take a multidimensional dataset and display it as a two-dimensional map. The goal of the **SOM** is to reduce the columns and keep only those that carry the most information, or in other words, to reduce the dimensionality of the input data as much as possible.

## Example

<img src="../img/som_2.jpg" height="700">

The central **SOM** map shows the prosperity or poverty of countries grouped by colour, using several indicators about them as inputs.

## How SOMs learn:

Below we can see 3 columns of input data ($X_{1},X_{2},X_{3}$) turned into a 2-D **SOM**.

<img src="../img/som_3.jpg" width="400">

Below we have the same neural network, but with the output nodes placed in a single column. Each node receives 3 synapse-links from the input nodes, and each node has a weight associated with each link, $W_{i,j}$. In artificial neural networks the weights were used to multiply the input arriving at a node, which was then summed or aggregated in the final neuron before applying an activation function. In self-organizing maps (**SOM**), by contrast, there is no activation function: the weights are a characteristic of the node itself, so they are not placed on the synapses but on the node (each node now has a set of coordinates given by its weights).

<img src="../img/som_4.jpg" width="700">

The goal is to find the node closest to each input observation; to do so we compute the distance (for example the Euclidean distance). In the case above we can see that node 3 is the closest.

<img src="../img/som_6.jpg" width="500">
<img src="../img/som_5.jpg" width="500">

If we place the nodes on a two-dimensional plane and imagine that the green node (figure **1**) is the node closest to the input variables $X_j$, what happens next is that the **SOM** updates the weights so that the nodes spread themselves over the observations. Looking at the picture below, visually the closest point "pulls" the map towards itself so that it gets even closer in the following iterations. In short, for each node the SOM tries to change that node's coordinates (weights) so that they look more and more like the observations. Next, a radius is taken around the point in question (the green node in **1**): the nodes inside this yellow zone also have their coordinates-weights updated, with the nodes closest to the green one updated the most, and vice versa. The same is then done, for example, with the blue node that best matches the input variables (see **2**), with its own radius of action. When a node falls under both the blue and the green radius, it is naturally pulled more strongly by the closer node (see figure **3**). Looking at figure **4**, a node halfway between blue and green would take an intermediate colour, turquoise, while nodes close to the green one would be almost entirely green with very little blue. In the end we obtain a colour map.

<img src="../img/som_7.jpg" width="400">
<img src="../img/som_8.jpg" width="400">

If we have more best-matching units for the input variables, the same thing happens and we again obtain a colour map.

On each iteration (epoch) the yellow radii of action get smaller and smaller, so the process becomes more and more precise.

## Important to remember:

- SOMs retain the topology of the input set.
- SOMs reveal correlations that are not easily identified.
- SOMs classify data without any supervision.
- There is no target variable vector -> there is no backpropagation.
- There are no lateral connections between the nodes.

[SOM example](http://ai-junkie.com/ann/som/som1.html) with a downloadable *.exe* program

## How to read an advanced self-organizing map (SOM):

Example of a SOM of voting patterns of 535 US senators

<img src="../img/som_9.jpg" width="600">

# Steps for training a SOM (a minimal code sketch follows after the list):

- <span style='color:#288c17'> <b>STEP 1:</b></span> Start with a dataset composed of *n_features* independent variables.
- <span style='color:#288c17'> <b>STEP 2:</b></span> Set up a grid of nodes, each with a weight vector of *n_features* elements.
- <span style='color:#288c17'> <b>STEP 3:</b></span> Randomly initialize the weight vectors to small values close to $0$ (but not $0$).
- <span style='color:#288c17'> <b>STEP 4:</b></span> Select a random observation from the dataset.
- <span style='color:#288c17'> <b>STEP 5:</b></span> Compute the Euclidean distance from that point to each neuron of the network.
- <span style='color:#288c17'> <b>STEP 6:</b></span> Select the neuron with the smallest distance to the point. That neuron is the winning node.
- <span style='color:#288c17'> <b>STEP 7:</b></span> Update the weights of the winning node to move it closer to the point.
- <span style='color:#288c17'> <b>STEP 8:</b></span> Apply a Gaussian function to the neighbourhood of the winning node and update the weights of the neighbours to move them closer to the point. The radius of affected neighbours is the standard deviation of the Gaussian.
- <span style='color:#288c17'> <b>STEP 9:</b></span> Repeat steps <span style='color:#288c17'> <b>1</b></span> to <span style='color:#288c17'> <b>5</b></span> and update the weights after each observation (*Reinforcement Learning*) or after a batch of observations (*Batch Learning*), until the network converges to a point where the neighbourhoods no longer change.
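To make the nine steps above concrete, here is a minimal from-scratch sketch in NumPy. It is only an illustration of the procedure described in the list, not a production implementation: the grid size, learning-rate and radius schedules are arbitrary choices, and `data` is an illustrative placeholder for any `(n_samples, n_features)` array.

```
import numpy as np


def train_som(data, grid_rows=10, grid_cols=10, n_iterations=5000,
              learning_rate=0.5, seed=0):
    """Minimal online SOM trainer following steps 1-9 above."""
    rng = np.random.default_rng(seed)
    n_features = data.shape[1]

    # Steps 2-3: a grid of nodes, each with a small random weight vector.
    weights = rng.uniform(-0.05, 0.05, size=(grid_rows, grid_cols, n_features))
    grid_y, grid_x = np.meshgrid(np.arange(grid_rows), np.arange(grid_cols),
                                 indexing="ij")
    initial_radius = max(grid_rows, grid_cols) / 2.0

    for t in range(n_iterations):
        # Step 4: pick a random observation.
        x = data[rng.integers(len(data))]

        # Steps 5-6: Euclidean distance to every node; the closest one wins (BMU).
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)

        # The yellow neighbourhood (and the learning rate) shrink over time.
        frac = t / n_iterations
        radius = initial_radius * np.exp(-3 * frac)
        lr = learning_rate * np.exp(-3 * frac)

        # Steps 7-8: Gaussian neighbourhood around the BMU; closer nodes move more.
        grid_dist_sq = (grid_y - bmu[0]) ** 2 + (grid_x - bmu[1]) ** 2
        influence = np.exp(-grid_dist_sq / (2 * radius ** 2))
        weights += lr * influence[..., None] * (x - weights)

    return weights


# Example usage on random toy data (e.g. colours in RGB space).
som_weights = train_som(np.random.default_rng(1).random((500, 3)))
```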
# Unsupervised outliers detection (event detection) ``` import drama as drm import numpy as np import matplotlib.pylab as plt from matplotlib import gridspec from drama.outlier_finder import grid_run_drama from keras.datasets import mnist %matplotlib inline n_try = 5 # MNIST dataset (x_train, y_train), (x_test, y_test) = mnist.load_data() image_size = x_train.shape[1] x_train = x_train.astype('float32') / 255 x_test = x_test.astype('float32') / 255 print(x_train.shape, y_train.shape, x_test.shape, y_test.shape) inlier_labels = [2,3,4,6,8] outlier_labels = [5,0] n_inliers = 500 n_outliers = 10 X = [] y = [] for i in inlier_labels: filt = y_train==i ns = np.sum(filt) X.extend(x_train[filt][:n_inliers]) y.extend(n_inliers*[0]) for i in outlier_labels: filt = y_train==i ns = np.sum(filt) X.extend(x_train[filt][:n_outliers]) y.extend(n_outliers*[1]) X = np.array(X) y = np.array(y) X.shape,y.shape X = np.reshape(X, [-1, image_size*image_size]) lof_all = np.zeros((n_try,3)) ifr_all = np.zeros((n_try,3)) df = drm.sk_check(X.reshape(-1,784),X.reshape(-1,784),y,[1]) for i in range(n_try): for j,scr in enumerate(['AUC','MCC','RWS']): lof_all[i,j] = df[scr][0] ifr_all[i,j] = df[scr][1] df ``` # Outlier detection ``` metrics = ['cityblock', 'L2', 'L4', 'braycurtis', 'canberra', 'chebyshev', 'correlation', 'mahalanobis', 'wL2', 'wL4'] drt_list = ['DAE1D', 'DVAE1D'] result = [] for i in range(n_try): # auc,mcc,rws,conf = grid_run_drama(X,y) auc,mcc,rws,conf = grid_run_drama(X,y, drt_list=drt_list, metrics=metrics, n_split=2) arr = np.stack([auc,mcc,rws],axis=-1) result.append(arr) result = np.array(result) drts = np.unique(conf[:,1]) metrs = np.unique(conf[:,2]) res = result.reshape(n_try,len(drt_list),len(metrics),-1) drm.plot_table(np.mean(res,axis=0),drts,metrs) auc = np.sum((res[:, :, :, 0].T>lof_all[:, 0]) & (res[:, :, :, 0].T>ifr_all[:, 0]),axis=-1).T mcc = np.sum((res[:, :, :, 1].T>lof_all[:, 1]) & (res[:, :, :, 1].T>ifr_all[:, 1]),axis=-1).T rws = np.sum((res[:, :, :, 2].T>lof_all[:, 2]) & (res[:, :, :, 2].T>ifr_all[:, 2]),axis=-1).T fig = plt.figure(figsize=(20,10)) plt.clf() ax = fig.add_subplot(111) ax.set_aspect('auto') ax.imshow(auc, cmap=plt.cm.jet,interpolation='nearest') width, height = auc.shape for x in range(width): for y in range(height): ax.annotate('AUC: {:d}\n MCC: {:d}\n RWS: {:d}'.format(auc[x][y],mcc[x][y],rws[x][y]), xy=(y, x), horizontalalignment='center', verticalalignment='center',fontsize=18); plt.xticks(range(len(metrs)),metrs,fontsize=15) plt.yticks(range(len(drts)), drts,fontsize=15) plt.title('Number of successes (LOF and i-forest) out of 20 data set',fontsize=25) plt.annotate('** Colors depend on AUC.', (0,0), (0, -30), xycoords='axes fraction', textcoords='offset points', va='top',fontsize=15) # plt.savefig('AND_success.jpg',dpi=150,bbox_inches='tight') inlier_labels = [1,2,3,4] outlier_labels = [5] n_inliers = 1000 n_outliers = 30 X = [] y = [] for i in inlier_labels: filt = y_train==i ns = np.sum(filt) X.extend(x_train[filt][:n_inliers]) y.extend(n_inliers*[0]) for i in outlier_labels: filt = y_train==i ns = np.sum(filt) X.extend(x_train[filt][:n_outliers]) y.extend(n_outliers*[1]) X = np.array(X) y = np.array(y) X = np.reshape(X, [-1, image_size*image_size, 1]) metrics = ['cityblock', 'L2', 'L4', 'braycurtis', 'canberra', 'chebyshev', 'correlation', 'mahalanobis', 'wL2', 'wL4'] drt_list = ['CAE1D', 'CVAE1D'] result = [] for i in range(n_try): # auc,mcc,rws,conf = grid_run_drama(X,y) auc,mcc,rws,conf = grid_run_drama(X,y, drt_list=drt_list, metrics=metrics, 
n_split=2) arr = np.stack([auc,mcc,rws],axis=-1) result.append(arr) result = np.array(result) drts = np.unique(conf[:,1]) metrs = np.unique(conf[:,2]) res = result.reshape(n_try,len(drt_list),len(metrics),-1) drm.plot_table(np.mean(res,axis=0),drts,metrs) auc = np.sum((res[:, :, :, 0].T>lof_all[:, 0]) & (res[:, :, :, 0].T>ifr_all[:, 0]),axis=-1).T mcc = np.sum((res[:, :, :, 1].T>lof_all[:, 1]) & (res[:, :, :, 1].T>ifr_all[:, 1]),axis=-1).T rws = np.sum((res[:, :, :, 2].T>lof_all[:, 2]) & (res[:, :, :, 2].T>ifr_all[:, 2]),axis=-1).T fig = plt.figure(figsize=(20,10)) plt.clf() ax = fig.add_subplot(111) ax.set_aspect('auto') ax.imshow(auc, cmap=plt.cm.jet,interpolation='nearest') width, height = auc.shape for x in range(width): for y in range(height): ax.annotate('AUC: {:d}\n MCC: {:d}\n RWS: {:d}'.format(auc[x][y],mcc[x][y],rws[x][y]), xy=(y, x), horizontalalignment='center', verticalalignment='center',fontsize=18); plt.xticks(range(len(metrs)),metrs,fontsize=15) plt.yticks(range(len(drts)), drts,fontsize=15) plt.title('Number of successes (LOF and i-forest) out of 20 data set',fontsize=25) plt.annotate('** Colors depend on AUC.', (0,0), (0, -30), xycoords='axes fraction', textcoords='offset points', va='top',fontsize=15) # plt.savefig('AND_success.jpg',dpi=150,bbox_inches='tight') inlier_labels = [1,2,3,4] outlier_labels = [5] n_inliers = 1000 n_outliers = 30 X = [] y = [] for i in inlier_labels: filt = y_train==i ns = np.sum(filt) X.extend(x_train[filt][:n_inliers]) y.extend(n_inliers*[0]) for i in outlier_labels: filt = y_train==i ns = np.sum(filt) X.extend(x_train[filt][:n_outliers]) y.extend(n_outliers*[1]) X = np.array(X) y = np.array(y) X = np.reshape(X, [-1, image_size, image_size, 1]) metrics = ['cityblock', 'L2', 'L4', 'braycurtis', 'canberra', 'chebyshev', 'correlation', 'mahalanobis', 'wL2', 'wL4'] drt_list = ['CAE2D', 'CVAE2D'] result = [] for i in range(n_try): # auc,mcc,rws,conf = grid_run_drama(X,y) auc,mcc,rws,conf = grid_run_drama(X,y, drt_list=drt_list, metrics=metrics, n_split=2) arr = np.stack([auc,mcc,rws],axis=-1) result.append(arr) result = np.array(result) drts = np.unique(conf[:,1]) metrs = np.unique(conf[:,2]) res = result.reshape(n_try,len(drt_list),len(metrics),-1) drm.plot_table(np.mean(res,axis=0),drts,metrs) auc = np.sum((res[:, :, :, 0].T>lof_all[:, 0]) & (res[:, :, :, 0].T>ifr_all[:, 0]),axis=-1).T mcc = np.sum((res[:, :, :, 1].T>lof_all[:, 1]) & (res[:, :, :, 1].T>ifr_all[:, 1]),axis=-1).T rws = np.sum((res[:, :, :, 2].T>lof_all[:, 2]) & (res[:, :, :, 2].T>ifr_all[:, 2]),axis=-1).T fig = plt.figure(figsize=(20,10)) plt.clf() ax = fig.add_subplot(111) ax.set_aspect('auto') ax.imshow(auc, cmap=plt.cm.jet,interpolation='nearest') width, height = auc.shape for x in range(width): for y in range(height): ax.annotate('AUC: {:d}\n MCC: {:d}\n RWS: {:d}'.format(auc[x][y],mcc[x][y],rws[x][y]), xy=(y, x), horizontalalignment='center', verticalalignment='center',fontsize=18); plt.xticks(range(len(metrs)),metrs,fontsize=15) plt.yticks(range(len(drts)), drts,fontsize=15) plt.title('Number of successes (LOF and i-forest) out of 20 data set',fontsize=25) plt.annotate('** Colors depend on AUC.', (0,0), (0, -30), xycoords='axes fraction', textcoords='offset points', va='top',fontsize=15) # plt.savefig('AND_success.jpg',dpi=150,bbox_inches='tight') ```
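As an optional convenience, the `res` array collected above can also be summarised numerically rather than only visually. The snippet below is just a sketch reusing the `res`, `drts` and `metrs` variables from the last experiment to report the combination with the best mean AUC:

```
mean_scores = res.mean(axis=0)   # (n_drt, n_metric, 3), averaged over the n_try runs
mean_auc = mean_scores[:, :, 0]
best_drt, best_metric = np.unravel_index(np.argmax(mean_auc), mean_auc.shape)
print("Best combination by mean AUC: {} + {} (AUC={:.3f}, MCC={:.3f}, RWS={:.3f})".format(
    drts[best_drt], metrs[best_metric],
    mean_auc[best_drt, best_metric],
    mean_scores[best_drt, best_metric, 1],
    mean_scores[best_drt, best_metric, 2]))
```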
# Further information

## What are the differences between recursion and iteration?

When giving instructions to a computer it is possible to use recursion to directly implement a common mathematical definition. For example, consider the following sequence:

$$
\left\{\begin{array}{l}
a_1 = 1\\
a_{n + 1} = 3a_n, \quad n \geq 1
\end{array}\right.
$$

We can define this in Python as follows:

```
def generate_sequence(n):
    """
    Generate the sequence defined by:

    a_1 = 1
    a_n = 3 a_{n - 1}

    This is done using recursion.
    """
    if n == 1:
        return 1
    return 3 * generate_sequence(n - 1)
```

The first 6 terms:

```
[generate_sequence(n) for n in range(1, 7)]
```

We note that in this case this corresponds to powers of $3$, and indeed we can prove that $a_n = 3 ^ {n - 1}$. We will not carry out the proof here, but one approach would be proof by induction, which is closely related to recursive functions.

We can write a different Python function that uses this formula directly. This is called **iteration**:

```
def calculate_sequence(n):
    """
    Calculate the nth term of the sequence defined by:

    a_1 = 1
    a_n = 3 a_{n - 1}

    This is done using iteration via the direct formula:

    a_n = 3 ^ (n - 1)
    """
    return 3 ** (n - 1)


[calculate_sequence(n) for n in range(1, 7)]
```

We can in fact use a Jupyter [magic command](https://ipython.readthedocs.io/en/stable/interactive/magics.html) to time the run time of a command. It is clear that recursion is slower.

```
%timeit [generate_sequence(n) for n in range(1, 25)]
%timeit [calculate_sequence(n) for n in range(1, 25)]
```

In practice:

- Using recursion is powerful as it can be used to directly implement recursive definitions.
- Using iteration is more computationally efficient, but it is not always straightforward to obtain an iterative formula.

(what_is_caching)=
## What is caching

One of the reasons that recursion is computationally inefficient is that it always has to recalculate previously calculated values. For example:

$$
\left\{\begin{array}{l}
a_1 = 1\\
a_{n + 1} = 3a_n, \quad n \geq 1
\end{array}\right.
$$

One way to overcome this is to use caching, which means that when a function is called with an argument it has already seen, it remembers the previously computed value instead of recomputing it. Python has a caching tool available in the functools library:

```
import functools


def generate_sequence(n):
    """
    Generate the sequence defined by:

    a_1 = 1
    a_n = 3 a_{n - 1}

    This is done using recursion.
    """
    if n == 1:
        return 1
    return 3 * generate_sequence(n - 1)


@functools.lru_cache()
def cached_generate_sequence(n):
    """
    Generate the sequence defined by:

    a_1 = 1
    a_n = 3 a_{n - 1}

    This is done using recursion but also includes a cache.
    """
    if n == 1:
        return 1
    return 3 * cached_generate_sequence(n - 1)
```

Timing both these approaches confirms a substantial decrease in run time for the cached version.

```
%timeit [generate_sequence(n) for n in range(1, 25)]
%timeit [cached_generate_sequence(n) for n in range(1, 25)]
```
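If you want to see the cache working rather than only timing it, functions decorated with `functools.lru_cache` also expose hit/miss counters. A small optional check using the `cached_generate_sequence` function defined above:

```
cached_generate_sequence.cache_clear()               # start from an empty cache
[cached_generate_sequence(n) for n in range(1, 25)]  # first pass fills the cache
[cached_generate_sequence(n) for n in range(1, 25)]  # second pass is all cache hits
print(cached_generate_sequence.cache_info())
```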