Dataset columns: markdown (strings, 0 – 1.02M chars), code (strings, 0 – 832k chars), output (strings, 0 – 1.02M chars), license (strings, 3 – 36 chars), path (strings, 6 – 265 chars), repo_name (strings, 6 – 127 chars).
Exponentiation
2^2 # note the difference to Python's 2 ** 2
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Remainder
4 % 3
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Negation
!true # note the difference to NumPy's ~
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Equality
true == true
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Inequality
true != true
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Elementwise operation
[1 2; 3 3] .* [9 9; 9 9]  # elementwise; [1 2; 3 3] * [9 9; 9 9] is the matrix-matrix product
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Check for nan
isnan(9)
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Ternary operator. The syntax is cond ? do_true : do_false
1 != 1 ? println(3) : println(999)
999
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
And/or
true && true
false || true
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
Sources
- [ ] https://juliadocs.github.io/Julia-Cheat-Sheet/
- [ ] https://github.com/JuliaLang/julia
- [ ] https://arxiv.org/pdf/2003.10146.pdf
- [ ] https://github.com/h-Klok/StatsWithJuliaBook
- [ ] juliahub
- [ ] juliaacademy
- [ ] https://www.sas.upenn.edu/~jesusfv/Chapter_HPC_8_Julia.pdf
- [ ] https://www.packtpub.com/product/hands-on-design-patterns-and-best-practices-with-julia/9781838648817
- [ ] https://www.elsevier.com/books/introduction-to-quantitative-macroeconomics-using-julia/caraiani/978-0-12-812219-8
- [ ] https://colab.research.google.com/github/ageron/julia_notebooks/blob/master/Julia_for_Pythonistas.ipynb#scrollTo=EEzvvzCl1i0F
- [ ] https://cheatsheets.quantecon.org/
_____no_output_____
Apache-2.0
_notebooks/2020-10-20-julia.ipynb
ChristopherTh/statistics-blog
We delete previous downloads
!rm *pdf
rm: *pdf: No such file or directory
MIT
herramientas/procesamiento_informes_EPI/COVID_Descarga_y_Preprocesamiento_Informes_EPI_MINSAL.ipynb
DiazSalinas/COVID-19
We get the URLs of the reports (EPI and general)
import shlex
import subprocess

response = subprocess.check_output(shlex.split('curl --request GET https://www.gob.cl/coronavirus/cifrasoficiales/'))
url_reporte = []
url_informe_epi = []
for line in response.decode().splitlines():
    if "Reporte_Covid19.pdf" in line:
        url = line.strip().split('https://')[1].split("\"")[0]
        url_reporte.append(url)
    # The EPI report filename is sometimes in lowercase
    elif "INFORME_EPI" in line:
        test = line.strip()
        test = test.split('https://')[1].split("\"")[0]
        url_informe_epi.append(test)
url_informe_epi
_____no_output_____
MIT
herramientas/procesamiento_informes_EPI/COVID_Descarga_y_Preprocesamiento_Informes_EPI_MINSAL.ipynb
DiazSalinas/COVID-19
Double Check
url_reporte
url_informe_epi
_____no_output_____
MIT
herramientas/procesamiento_informes_EPI/COVID_Descarga_y_Preprocesamiento_Informes_EPI_MINSAL.ipynb
DiazSalinas/COVID-19
Download the reports
#for url in set(url_reporte):
#    subprocess.check_output(shlex.split("wget " + url))
for url in set(url_informe_epi):
    subprocess.check_output(shlex.split("wget " + url))
!ls
_____no_output_____
MIT
herramientas/procesamiento_informes_EPI/COVID_Descarga_y_Preprocesamiento_Informes_EPI_MINSAL.ipynb
DiazSalinas/COVID-19
Preprocessing. We use tabula-py, a wrapper around the Tabula app (written in Java): a library for extracting tables from PDF files. https://github.com/chezou/tabula-py
import tabula

dfs_files = {}
for url in url_informe_epi:
    pdf_file = url.split('/')[-1]
    df = tabula.read_pdf(pdf_file, pages='all', multiple_tables=True)
    fecha = pdf_file.split('_')[-1].split('.')[0]
    print(fecha)
    dfs_files['tablas_' + fecha] = df
_____no_output_____
MIT
herramientas/procesamiento_informes_EPI/COVID_Descarga_y_Preprocesamiento_Informes_EPI_MINSAL.ipynb
DiazSalinas/COVID-19
We check some of the tables
tablas_20200401 = dfs_files['tablas_20200401v2']
tablas_20200330 = dfs_files['tablas_20200330']

df_comunas_20200401 = {}
unnamed_primeraCol = {}
for idx, df in enumerate(tablas_20200401):
    if 'Comuna' in df.columns:
        key = 'tabla_' + str(idx + 1)
        print(key)
        df_comunas_20200401[key] = df
df_comunas_20200401['tabla_6'].head()
_____no_output_____
MIT
herramientas/procesamiento_informes_EPI/COVID_Descarga_y_Preprocesamiento_Informes_EPI_MINSAL.ipynb
DiazSalinas/COVID-19
This table starts with an *Unnamed: 0* column
df_comunas_20200401['tabla_22'].head()
_____no_output_____
MIT
herramientas/procesamiento_informes_EPI/COVID_Descarga_y_Preprocesamiento_Informes_EPI_MINSAL.ipynb
DiazSalinas/COVID-19
This table does **not** start with an *Unnamed: 0* column
df_comunas_20200330 = {}
unnamed_primeraCol = {}
for idx, df in enumerate(tablas_20200330):
    if 'Comuna' in df.columns:
        key = 'tabla_' + str(idx + 1)
        print(key)
        df_comunas_20200330[key] = df
df_comunas_20200330['tabla_7'].head()
_____no_output_____
MIT
herramientas/procesamiento_informes_EPI/COVID_Descarga_y_Preprocesamiento_Informes_EPI_MINSAL.ipynb
DiazSalinas/COVID-19
The same table starts with an *Unnamed: 0* column
df_comunas_20200330['tabla_23'].head()
_____no_output_____
MIT
herramientas/procesamiento_informes_EPI/COVID_Descarga_y_Preprocesamiento_Informes_EPI_MINSAL.ipynb
DiazSalinas/COVID-19
The same table does **not** start with an *Unnamed: 0* column. We separate these two categories:
df_comunas_20200401 = {}
unnamed_primeraCol_20200401 = {}
for idx, df in enumerate(tablas_20200401):
    if 'Comuna' in df.columns:
        key = 'tabla_' + str(idx + 1)
        df_comunas_20200401[key] = df
        if 'Unnamed' in df.columns[0]:
            print(key)
            unnamed_primeraCol_20200401[key] = df

df_comunas_20200330 = {}
unnamed_primeraCol_20200330 = {}
for idx, df in enumerate(tablas_20200330):
    if 'Comuna' in df.columns:
        key = 'tabla_' + str(idx + 1)
        df_comunas_20200330[key] = df
        if 'Unnamed' in df.columns[0]:
            print(key)
            unnamed_primeraCol_20200330[key] = df
tabla_7 tabla_13 tabla_18 tabla_19 tabla_22
MIT
herramientas/procesamiento_informes_EPI/COVID_Descarga_y_Preprocesamiento_Informes_EPI_MINSAL.ipynb
DiazSalinas/COVID-19
Summary
* The 20200330 report appears to have one extra table (in fact it does not; it seems a change in a chart broke the extraction).
* The table extraction seems to produce the same errors in the same tables.
%%capture
"""
for tup_1, tup_2 in zip(df_comunas.items(), df_comunas_2.items()):
    key_1, df_1 = tup_1
    key_2, df_2 = tup_2
    if (key_1 or key_2) in unnamed_primeraCol:
        if (df_1.columns == df_2.columns).all:
            print("LAS COLUMNAS DE LAS TABLAS *diferentes* coinciden!", key_1, key_2)
"""
_____no_output_____
MIT
herramientas/procesamiento_informes_EPI/COVID_Descarga_y_Preprocesamiento_Informes_EPI_MINSAL.ipynb
DiazSalinas/COVID-19
We standardize the tables
for key in df_comunas_20200401.keys():
    df = df_comunas_20200401[key]
    if key in unnamed_primeraCol_20200401.keys():
        df['Comuna'] = df['Unnamed: 0']
        df['N°'] = df['Unnamed: 1']
        df['Tasa'] = df['Unnamed: 2']
        df_comunas_20200401[key] = df.drop(labels='Unnamed: 0', axis=1).drop(labels='Unnamed: 1', axis=1).drop(labels='Unnamed: 2', axis=1)
    else:
        if key == 'tabla_22':
            continue
        df_comunas_20200401[key] = df.drop(labels='Unnamed: 0', axis=1).drop(labels='Unnamed: 1', axis=1)

for key in df_comunas_20200330.keys():
    df = df_comunas_20200330[key]
    if key in unnamed_primeraCol_20200330.keys():
        df['Comuna'] = df['Unnamed: 0']
        df['N°'] = df['Unnamed: 1']
        df['Tasa'] = df['Unnamed: 2']
        df_comunas_20200330[key] = df.drop(labels='Unnamed: 0', axis=1).drop(labels='Unnamed: 1', axis=1).drop(labels='Unnamed: 2', axis=1)
    else:
        if key == 'tabla_22':
            continue
        df_comunas_20200330[key] = df.drop(labels='Unnamed: 0', axis=1).drop(labels='Unnamed: 1', axis=1)

for key, region in df_comunas_20200401.items():
    print(key, region.columns)
for key, region in df_comunas_20200330.items():
    print(key, region.columns)
tabla_7 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_8 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_9 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_10 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_11 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_12 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_13 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_15 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_16 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_17 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_18 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_19 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_20 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_21 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_22 Index(['Comuna', 'N°', 'Población', 'Tasa'], dtype='object') tabla_23 Index(['Comuna', 'N°', 'Población', 'Tasa', 'Unnamed: 2'], dtype='object')
MIT
herramientas/procesamiento_informes_EPI/COVID_Descarga_y_Preprocesamiento_Informes_EPI_MINSAL.ipynb
DiazSalinas/COVID-19
The last table has an *Unnamed: 2* column
df_comunas_20200401['tabla_21']
df_comunas_20200330['tabla_23']
_____no_output_____
MIT
herramientas/procesamiento_informes_EPI/COVID_Descarga_y_Preprocesamiento_Informes_EPI_MINSAL.ipynb
DiazSalinas/COVID-19
Class with Multiple Objects
class Birds: def __init__(self,bird_name): self.bird_name = bird_name def flying_birds(self): print(f"{self.bird_name} flies above clouds") def non_flying_birds(self): print(f"{self.bird_name} is the national bird of the Philippines") vulture = Birds("Griffon Vulture") crane = Birds("Common Crane") emu = Birds ("Emu") vulture.flying_birds() crane.flying_birds() emu.non_flying_birds()
Griffon Vulture flies above clouds Common Crane flies above clouds Emu is the national bird of the Philippines
Apache-2.0
OOP_58001_OOP_Concepts_2.ipynb
AndreiBenavidez/OOP-58001
Encapsulation with Private Attributes
class foo: def __init__(self,a,b): self.a = a self.b = b def add(self): return self.a + self.b foo_object = foo(3,4) foo_object.add() foo_object.a = 6 foo_object.add() class foo: def __init__(self,a,b): self._a = a self._b = b def add(self): return self._a + self._b foo_object = foo(3,4) foo_object.add() foo_object.a = 6 foo_object.add()
_____no_output_____
Apache-2.0
OOP_58001_OOP_Concepts_2.ipynb
AndreiBenavidez/OOP-58001
Encapsulation by mangling with double underscores
class Counter: def __init__(self): self.current = 0 def increment(self): self.current += 1 #current = current+1 def value(self): return self.current def reset(self): self.current = 0 counter = Counter() counter.increment() counter.increment() counter.increment() print(counter.value())
3
Apache-2.0
OOP_58001_OOP_Concepts_2.ipynb
AndreiBenavidez/OOP-58001
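The `Counter` above keeps `current` as an ordinary public attribute, so it does not yet show the double-underscore mangling named in the heading. A minimal sketch of that idea (the `MangledCounter` class and its `__current` attribute are my own illustration, not part of the original notebook):

```python
class MangledCounter:
    def __init__(self):
        self.__current = 0          # stored as _MangledCounter__current (name mangling)

    def increment(self):
        self.__current += 1

    def value(self):
        return self.__current

c = MangledCounter()
c.increment()
print(c.value())                     # 1
# print(c.__current)                 # AttributeError: the plain name is mangled away
print(c._MangledCounter__current)    # the mangled name is still reachable if you insist
```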
Inheritance
class Person: def __init__(self,fname,sname): self.fname = fname self.sname = sname def printname(self): print(self.fname,self.sname) x = Person("Andrei","Benavidez") x.printname() class Teacher(Person): pass x = Teacher("Drei", "Benavidez") x.printname()
Andrei Benavidez Drei Benavidez
Apache-2.0
OOP_58001_OOP_Concepts_2.ipynb
AndreiBenavidez/OOP-58001
Polymorphism
class RegularPolygon: def __init__(self,side): self._side = side class Square(RegularPolygon): def area(self): return self._side * self._side class EquilateralTriangle(RegularPolygon): def area(self): return self._side * self._side * 0.433 obj1 = Square(4) obj2 = EquilateralTriangle(3) obj1.area() obj2.area()
_____no_output_____
Apache-2.0
OOP_58001_OOP_Concepts_2.ipynb
AndreiBenavidez/OOP-58001
* Create a Python program that displays the names of 3 students (Student 1, Student 2, Student 3) and their grades
* Create a class named "Person" with attributes - std1, std2, std3, pre, mid, fin
* Compute the average grade of each term using the Grade() method
* Information about students' grades must be hidden from others (a sketch of one way to do this follows the example below)
import random class Person: def __init__ (self, student, pre, mid, fin): self.student = student self.pre = pre *0.30 self.mid = mid *0.30 self.fin = fin *0.40 def Grade (self): print (self.student, "has an average grade of", self.pre, "in Prelims") print (self.student, "has an average grade of", self.mid, "in Midterms") print (self.student, "has an average grade of", self.fin, "in Finals") std1 = Person ("Andrei", random.randint(70,100), random.randint(70,100), random.randint(70,100)) std2 = Person ("Ady", random.randint(70,100), random.randint(70,100), random.randint(70,100)) std3 = Person ("Drei", random.randint(70,100), random.randint(70,100), random.randint(70,100)) std1.Grade() std2.Grade() std3.Grade()
Andrei has an average grade of 21.9 in Prelims Andrei has an average grade of 23.4 in Midterms Andrei has an average grade of 33.2 in Finals Ady has an average grade of 26.4 in Prelims Ady has an average grade of 27.599999999999998 in Midterms Ady has an average grade of 28.0 in Finals Drei has an average grade of 28.2 in Prelims Drei has an average grade of 29.099999999999998 in Midterms Drei has an average grade of 38.800000000000004 in Finals
Apache-2.0
OOP_58001_OOP_Concepts_2.ipynb
AndreiBenavidez/OOP-58001
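The solution above keeps the weighted term grades in plain public attributes, so the "hidden from others" requirement is only implied. A minimal sketch of one way to hide them, reusing the name mangling shown earlier (the double-underscore attribute names are my own variation on the notebook's code):

```python
import random

class Person:
    def __init__(self, student, pre, mid, fin):
        self.student = student
        # double-underscore attributes are name-mangled, so the weighted grades
        # are not reachable as ordinary attributes from outside the class
        self.__pre = pre * 0.30
        self.__mid = mid * 0.30
        self.__fin = fin * 0.40

    def Grade(self):
        print(self.student, "has an average grade of", self.__pre, "in Prelims")
        print(self.student, "has an average grade of", self.__mid, "in Midterms")
        print(self.student, "has an average grade of", self.__fin, "in Finals")

std1 = Person("Andrei", random.randint(70, 100), random.randint(70, 100), random.randint(70, 100))
std1.Grade()
# std1.__pre would raise AttributeError; the grade stays hidden behind the mangled name
```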
Part 1.1: Building a Chinese word segmentation tool based on enumeration (new)

Data needed for this project:
1. 综合类中文词库.xlsx: contains Chinese words, used as the dictionary
2. Part of the unigram probabilities are provided directly as the variable word_prob

An example: given the dictionary = [我们 学习 人工 智能 人工智能 未来 是] and the unigram probabilities p(我们)=0.25, p(学习)=0.15, p(人工)=0.05, p(智能)=0.1, p(人工智能)=0.2, p(未来)=0.1, p(是)=0.15

Step 1: For the given string "我们学习人工智能,人工智能是未来", find all possible segmentations (a small enumeration sketch is added after the code cell below)
- [我们,学习,人工智能,人工智能,是,未来]
- [我们,学习,人工,智能,人工智能,是,未来]
- [我们,学习,人工,智能,人工,智能,是,未来]
- [我们,学习,人工智能,人工,智能,是,未来]
.......

Step 2: For each segmented sentence we can compute a score, its negative log-probability, e.g.
- -log P(我们,学习,人工智能,人工智能,是,未来) = -log p(我们) - log p(学习) - log p(人工智能) - log p(人工智能) - log p(是) - log p(未来)
- -log P(我们,学习,人工,智能,人工智能,是,未来) = -log p(我们) - log p(学习) - log p(人工) - log p(智能) - log p(人工智能) - log p(是) - log p(未来)
- -log P(我们,学习,人工,智能,人工,智能,是,未来) = -log p(我们) - log p(学习) - log p(人工) - log p(智能) - log p(人工) - log p(智能) - log p(是) - log p(未来)
- -log P(我们,学习,人工智能,人工,智能,是,未来) = -log p(我们) - log p(学习) - log p(人工智能) - log p(人工) - log p(智能) - log p(是) - log p(未来)
.....

Step 3: Return the segmentation from Step 2 with the highest probability (i.e. the lowest negative log-probability)
import pandas as pd
import numpy as np

path = "./data/综合类中文词库.xlsx"
data_frame = pd.read_excel(path, header=None)
dic_word_list = data_frame[data_frame.columns[0]].tolist()

dic_words = dic_word_list  # words read from the dictionary file

# Unigram probability of each word. To keep the problem simple we only list a small subset;
# any word that is in the dictionary but not listed here gets probability 0.00001,
# e.g. p("学院") = p("概率") = ... = 0.00001
word_prob = {"北京":0.03, "的":0.08, "天":0.005, "气":0.005, "天气":0.06, "真":0.04, "好":0.05,
             "真好":0.04, "啊":0.01, "真好啊":0.02, "今":0.01, "今天":0.07, "课程":0.06,
             "内容":0.06, "有":0.05, "很":0.03, "很有":0.04, "意思":0.06, "有意思":0.005,
             "课":0.01, "程":0.005, "经常":0.08, "意见":0.08, "意":0.01, "见":0.005,
             "有意见":0.02, "分歧":0.04, "分":0.02, "歧":0.005}

for item in dic_words:
    word_prob.setdefault(item, 0.00001)

def split_word_with_dic_front_max(dic=[], input_str=""):
    '''Forward maximum matching'''
    input_str_tmp = input_str
    segments = []
    while input_str_tmp != "":
        for i in range(len(input_str_tmp), -1, -1):
            word = input_str_tmp[:i]
            if word in dic:
                segments.append(word)
                input_str_tmp = input_str_tmp[len(word):]
                break
    return segments

def split_word_with_dic_front_min(dic=[], input_str=""):
    '''Forward minimum matching'''
    input_str_tmp = input_str
    segments = []
    while input_str_tmp != "":
        for i in range(len(input_str_tmp)):
            # avoid an empty slice (and an endless loop) when only one character is left
            word = input_str_tmp[:i] if len(input_str_tmp) != 1 else input_str_tmp[0]
            if word in dic:
                segments.append(word)
                input_str_tmp = input_str_tmp[len(word):]
                break
    return segments

def split_word_with_dic_back_max(dic=[], input_str=""):
    '''Backward maximum matching'''
    input_str_tmp = input_str
    segments = []
    while input_str_tmp != "":
        for i in range(len(input_str_tmp)):
            word = input_str_tmp[i:]
            if word in dic:
                segments.append(word)
                input_str_tmp = input_str_tmp[:-len(word)]
                break
    return segments[::-1]

def split_word_with_dic_back_min(dic=[], input_str=""):
    '''Backward minimum matching'''
    input_str_tmp = input_str
    segments = []
    while input_str_tmp != "":
        for i in range(len(input_str_tmp), -1, -1):
            word = input_str_tmp[i:]
            if word in dic:
                segments.append(word)
                input_str_tmp = input_str_tmp[:-len(word)]
                break
    return segments[::-1]

def split_word(dic=[], input=""):
    tmp_result = []
    tmp_result.append(split_word_with_dic_front_max(dic, input))
    tmp_result.append(split_word_with_dic_back_max(dic, input))
    tmp_result.append(split_word_with_dic_front_min(dic, input))
    tmp_result.append(split_word_with_dic_back_min(dic, input))
    return tmp_result

def get_split_probability_use_segment(split_word_segment=[]):
    sum_result = 0
    for seg in split_word_segment:
        sum_result -= np.log(word_prob.get(seg))
    return sum_result

def get_split_probability(split_word_segments=[[], ]):
    '''From the given segmentations, compute and return the one with the highest probability'''
    index = 0
    max_index = 0
    sum = get_split_probability_use_segment(split_word_segments[0])
    for segment in split_word_segments:
        tmp = get_split_probability_use_segment(segment)
        if sum > tmp:
            sum = tmp
            max_index = index
        index += 1
    return split_word_segments[max_index], sum

# Score (10 points)
## TODO: write the word_segment_naive function to segment the input string
def word_segment_naive(input_str):
    """
    1. Segment the input string and return all feasible segmentations.
    2. For each returned result, compute the probability of the sentence.
    3. Return the one with the highest probability.
    input_str: input string, e.g. "今天天气好"
    best_segment: best segmentation, e.g. ["今天","天气","好"]
    """
    # TODO step 1: compute all possible segmentations; every word must exist in the dictionary.
    # There can be very many results. If the string cannot be fully segmented, return an empty list.
    # Format: segments = [["今天","天气","好"], ["今天","天","气","好"], ["今","天","天气","好"], ...]
    segments = split_word(list(word_prob.keys()), input_str)

    # TODO step 2: loop over all segmentations, compute the highest-probability one and return it
    best_segment, best_score = get_split_probability(segments)

    return best_segment

# Tests
print(word_segment_naive("北京的天气真好啊"))
print(word_segment_naive("今天的课程内容很有意思"))
print(word_segment_naive("经常有意见分歧"))
['北京', '的', '天气', '真好啊'] ['今天', '的', '课程', '内容', '很有', '意思'] ['经常', '有意见', '分歧']
Apache-2.0
enumerate.ipynb
chmoe/NLPLearning-CNWordSegmentation
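Note that `split_word` above only returns four heuristic (forward/backward, maximum/minimum matching) segmentations, not the full enumeration that Step 1 describes. A minimal sketch of the exhaustive enumeration on the toy dictionary from the description (the function name `enumerate_segments` and the `max_word_len` cutoff are my own additions):

```python
def enumerate_segments(text, dic, max_word_len=5):
    """Recursively enumerate every segmentation whose words all appear in the dictionary."""
    if text == "":
        return [[]]                       # one way to segment the empty string: no words
    results = []
    for i in range(1, min(max_word_len, len(text)) + 1):
        word = text[:i]
        if word in dic:
            for rest in enumerate_segments(text[i:], dic, max_word_len):
                results.append([word] + rest)
    return results

# Example with the toy dictionary from the description above
toy_dic = {"我们", "学习", "人工", "智能", "人工智能", "未来", "是"}
for seg in enumerate_segments("人工智能是未来", toy_dic):
    print(seg)   # ['人工', '智能', '是', '未来'] and ['人工智能', '是', '未来']
```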
1 Logistic Regression 1.1 Visualizing the data
import matplotlib.pyplot as plt import numpy as np from numpy import genfromtxt data = genfromtxt('data/ex2data1.txt', delimiter=',') # Print first five rows to see what it looks like print(data[:5, :]) X = data[:, 0:2] # scores on test1, test2 Y = data[:, 2] # admitted yes/no print(X[:5]) print(Y[:5]) plt.figure(figsize=(10, 7)) plt.scatter(X[Y==1, 0], X[Y==1, 1], c='g', marker='P') plt.scatter(X[Y==0, 0], X[Y==0, 1], c='r', marker='o') plt.xlabel('Exam 1 score') plt.ylabel('Exam 2 score') plt.legend(['Admitted','Not admitted']) plt.show()
_____no_output_____
MIT
ex2/2_1_Logistic_regression.ipynb
surajsimon/Andrew-ng-machine-learning-course-python-implementation
1.2 Implementation 1.2.1 Warmup exercise: sigmoid function
import math def sigmoid(z): g = 1. / (1. + math.exp(-z)) return g # Vectorize sigmoid function so it works on all elements of a numpy array sigmoid = np.vectorize(sigmoid) # Test sigmoid function test = np.array([[0]]) sigmoid(test) sigmoid(0) test = np.array([[-10,-1], [0,0], [1,10]]) sigmoid(test)
_____no_output_____
MIT
ex2/2_1_Logistic_regression.ipynb
surajsimon/Andrew-ng-machine-learning-course-python-implementation
1.2.2 Cost function and gradient
# Setup the data matrix appropriately, and add ones for the intercept term [m, n] = X.shape # Add intercept term to X X = np.column_stack((np.ones(m), X)) # Initialize fitting parameters initial_theta = np.zeros([n + 1, 1]) def costFunction(theta, X, y): # Cost J = 0 m = len(y) for i in range(m): z = np.dot(theta.T, X[i]) J += -y[i]*math.log(sigmoid(z)) - (1 - y[i])*math.log((1 - sigmoid(z))) J = J/m # Gradient grad = np.zeros(theta.shape) for j in range(X.shape[1]): for i in range(m): z = np.dot(theta.T, X[i]) grad[j] += (sigmoid(z) - y[i]) * X[i,j] grad[j] = grad[j]/m return J, grad # Compute and display initial cost and gradient cost, grad = costFunction(initial_theta, X, Y) print('Cost at initial theta (zeros):\n', cost) print('Expected cost (approx):\n 0.693\n') print('Gradient at initial theta (zeros):\n', grad) print('\nExpected gradients (approx):\n -0.1000\n -12.0092\n -11.2628\n') # Compute and display cost and gradient with non-zero theta test_theta = np.array([-24, 0.2, 0.2]) cost, grad = costFunction(test_theta, X, Y) print('Cost at test theta (zeros):\n', cost) print('Expected cost (approx):\n 0.218\n') print('Gradient at test theta (zeros):\n', grad) print('\nExpected gradients (approx):\n 0.043\t 2.566\t 2.647')
Cost at test theta (zeros): 0.218330193827 Expected cost (approx): 0.218 Gradient at test theta (zeros): [ 0.04290299 2.56623412 2.64679737] Expected gradients (approx): 0.043 2.566 2.647
MIT
ex2/2_1_Logistic_regression.ipynb
surajsimon/Andrew-ng-machine-learning-course-python-implementation
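The loop-based `costFunction` above follows the formulas directly but is slow on larger datasets; as a comparison, here is a vectorized sketch of the same cost and gradient (my own rewrite, not part of the original notebook, assuming the `X`, `Y` and `sigmoid` defined in the cells above):

```python
def costFunction_vec(theta, X, y):
    m = len(y)
    h = sigmoid(X @ theta)                                  # predicted probabilities, shape (m,)
    J = (-y @ np.log(h) - (1 - y) @ np.log(1 - h)) / m      # average cross-entropy cost
    grad = X.T @ (h - y) / m                                # gradient, shape (n+1,)
    return J, grad

# Should agree with the loop version up to floating-point noise
print(costFunction_vec(np.array([-24, 0.2, 0.2]), X, Y))
```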
1.2.3 Learning parameters using fminunc We're supposed to use Octave's ```fminunc``` function for this. I can't find a python implementation of this, so let's use ```scipy.optimize.minimize(method='TNC')``` instead.
from scipy.optimize import minimize res = minimize(fun=costFunction, x0=initial_theta, args=(X, Y), method='TNC', jac=True, options={'maxiter':400}) res theta = res.x print('Cost at theta found by fmin_tnc:\n', res.fun) print('Expected cost (approx):\n 0.203\n') print('Theta:\n', res.x) print('Expected theta (approx):\n -25.161\t 0.206\t 0.201') def plotDecisionBoundary(theta, X, Y): plt.figure(figsize=(10, 7)) plt.scatter(X[Y==1, 1], X[Y==1, 2], c='g', marker='P') plt.scatter(X[Y==0, 1], X[Y==0, 2], c='r', marker='o') plt.xlabel('Exam 1 score') plt.ylabel('Exam 2 score') plot_x = [min(X[:,1]-2), max(X[:,1])+2] plot_y = [(-1/theta[2])*(theta[1]*plot_x[0] + theta[0]), (-1/theta[2])*(theta[1]*plot_x[1] + theta[0])] plt.plot(plot_x, plot_y) plt.xlim(min(X[:,1]-2),max(X[:,1])+2) plt.ylim(min(X[:,2]-2),max(X[:,2])+2) plt.legend(['Decision boundary', 'Admitted', 'Not admitted']) plt.show() plotDecisionBoundary(theta, X, Y)
_____no_output_____
MIT
ex2/2_1_Logistic_regression.ipynb
surajsimon/Andrew-ng-machine-learning-course-python-implementation
1.2.4 Evaluating logistic regression
# Predict probability of admission for a student with score 45 on exam 1 and score 85 on exam 2 prob = sigmoid(np.dot([1, 45, 85], theta)) print('For a student with scores 45 and 85, we predict an admission probability of:\n', prob) print('Expected value:\n 0.775 +/- 0.002\n\n') # Compute accuracy on our training set def predict(theta, X): m = X.shape[0] # Number of training examples p = np.zeros(m) for i in range(m): prob = sigmoid(np.dot(X[i,:], theta)) if prob >= 0.5: p[i] = 1 # Predict "Admitted" if prob >= 0.5 return p p = predict(theta, X) accuracy = sum(p==Y) / m print('Training accuracy:\n', accuracy * 100, '%') print('Expected accuracy (approx):\n 89.0 %\n')
Training accuracy: 89.0 % Expected accuracy (approx): 89.0 %
MIT
ex2/2_1_Logistic_regression.ipynb
surajsimon/Andrew-ng-machine-learning-course-python-implementation
zip(df2, axes.flatten())
fig, axes = plt.subplots(8,2) x= groups['q-value'] fig, axes = plt.subplots(6,6, figsize=(12,12), sharex=True) #axr = axes.ravel() #zip(groups, axes.flatten()) for ax, x in zip(axes.flat, x): sb.distplot(x[1], ax=ax) ax.set_title(x[0]) ax.axvline(0.05, color='r', ls=':') #axes.flat[-1].set_visible(False) ax.set_xlim(0,1) plt.tight_layout() fig, ax = plt.subplots(1, 1) print(x[1]) sb.distplot(x[1]['q-value'], hist=False, ax=ax, ) ax.set_title(x[0]) ax.axvline(0.05, color='r', ls=':') plt.gca() plt.show() x= groups['p-value'] fig, axes = plt.subplots(6,6, figsize=(12,12), sharex=True) #axr = axes.ravel() #zip(groups, axes.flatten()) for ax, x in zip(axes.flat, x): sb.distplot(x[1], ax=ax) ax.set_title(x[0]) ax.axvline(0.05, color='r', ls=':') #axes.flat[-1].set_visible(False) ax.set_xlim(0,0.001) plt.tight_layout()
_____no_output_____
MIT
notebook/distribution_qvals_dmmpmm.ipynb
isabelleberger/isabelle-
Example notebook that works with the output files from XSPEC, namely:
* the .txt from wdata that saves the data/model,
* the *.fits from writefits that saves out the fit parameters.

IGH 14 Feb 2020 - Started
IGH 20 Feb 2020 - Better latex font, and fancier error label
from astropy.io import fits import numpy as np import matplotlib.pyplot as plt import matplotlib import warnings warnings.simplefilter('ignore') # Some useful parameters # norm = 1e-14/(4piD_A^2)*\int n_e n_p dV # The norm factor from the XSPEC APEC model is defined here: https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/node134.html kev2mk=0.0861733 emfact=3.5557e-42 # An example file produced from writefits using FPMA and FPMA and a model of const*apec hdumok = fits.open('mod_thf2prb.fits') # hdumok.info() mokprm=hdumok[1].data mokcol=hdumok[1].columns.names hdumok.close() print(mokcol) t1=mokprm['kt2'][0]/kev2mk t1_cr=mokprm['ekt2'][0]/kev2mk print('T1: ',t1,'MK, Err Rng: ',t1_cr) em1=np.double(mokprm['norm5'][0])/emfact em1_cr=np.double(mokprm['enorm5'][0])/emfact print('EM1: ',em1,'cm^-3, Err Rng: ',em1_cr) fac=mokprm['factor6'][0] fac_cr=mokprm['efactor6'][0] print('Fac: ',fac,' Fac Rng: ',fac_cr) # An example file produced from wdata from an iplot ldata ufspec delchi dd=[] with open('mod_thf2prb.txt', 'r') as f: lines = f.readlines() dd.append(lines) # Get's rid of the first 3 lines which are normally not the data dd=dd[0][3:] # Different plots separated by 'NO NO NO NO NO\n' so need to find where this occurs id_break=[i for i, value in enumerate(dd) if value == 'NO NO NO NO NO\n'] dd_ld=dd[:id_break[0]] dd_uf=dd[id_break[0]+1:id_break[1]] dd_dc=dd[id_break[1]+1:] # For this example just assign the ldata plot eng_ld=[] deng_ld=[] data_ld=[] edata_ld=[] mod_ld=[] for i in dd_ld: temp_ld=i.split() eng_ld.append(float(temp_ld[0])) deng_ld.append(float(temp_ld[1])) data_ld.append(float(temp_ld[2])) edata_ld.append(float(temp_ld[3])) mod_ld.append(float(temp_ld[4])) eng_ld=np.array(eng_ld) deng_ld=np.array(deng_ld) data_ld=np.array(data_ld) edata_ld=np.array(edata_ld) mod_ld=np.array(mod_ld) # # Setup the font used for plotting matplotlib.rcParams['font.sans-serif'] = "Arial" matplotlib.rcParams['font.family'] = "sans-serif" matplotlib.rcParams['font.size'] = 18 matplotlib.rcParams['mathtext.default']="regular" # A lot of this is just to get the plot exactly how I want it fig,axs=plt.subplots(2,1,figsize=(7,10),gridspec_kw=dict( height_ratios=[4,1],hspace=0.05)) # Plot the data and model fit on the top plot axs[0].semilogy(eng_ld,data_ld,'.',ms=0.5,color='k') for i in np.arange(len(data_ld)): axs[0].plot([eng_ld[i],eng_ld[i]],[data_ld[i]-edata_ld[i],data_ld[i]+edata_ld[i]],color='k') axs[0].plot([eng_ld[i]-deng_ld[i],eng_ld[i]+deng_ld[i]],[data_ld[i],data_ld[i]],color='k') axs[0].plot(eng_ld,mod_ld,color='firebrick',drawstyle='steps-mid') axs[0].set_ylabel('NuSTAR count s$^{-1}$ keV$^{-1}$') ylim=[2e-1,7e4] xlim=[2,9] axs[0].set_ylim(ylim) for aa in axs: aa.set_xlim(xlim) aa.label_outer() # Put the actual fit values on the top plot param_labelt="{0:5.3f} ({1:5.3f} - {2:5.3f}) MK ".format(t1,t1_cr[0],t1_cr[1]) param_labelem="{0:5.2e} ({1:5.2e} - {2:5.2e}) ".format(em1,em1_cr[0],em1_cr[1])+"$cm^{-3}$" axs[0].text(0.95,0.92,param_labelt,color='firebrick',ha='right',transform=axs[0].transAxes) axs[0].text(0.95,0.86,param_labelem,color='firebrick',ha='right',transform=axs[0].transAxes) axs[0].text(0.95,0.80,"{0:4.2f}".format(fac),color='k',ha='right',transform=axs[0].transAxes) # You need to specify this yourself fiter=[2.5,7.0] axs[0].plot([fiter[0],fiter[0]],[ylim[0],10**(0.8*np.log10(ylim[1]))],':',color='grey') axs[0].plot([fiter[1],fiter[1]],[ylim[0],10**(0.8*np.log10(ylim[1]))],':',color='grey') # Calculate and plot the residuals on the bottom plot resid=(data_ld-mod_ld)/edata_ld 
axs[1].plot(eng_ld,resid,'.',ms=0.5,color='k') axs[1].set_ylim([-4,4]) axs[1].set_xlabel('Energy [keV]') axs[1].plot(xlim,[0,0],'--',color='grey') for i in np.arange(len(data_ld)): axs[1].plot([eng_ld[i]-deng_ld[i],eng_ld[i]+deng_ld[i]],[resid[i],resid[i]],color='firebrick') axs[1].set_ylabel('Resid') plt.show() # Same as above but just a fancier way of plotting the error ranges fig,axs=plt.subplots(2,1,figsize=(7,10),gridspec_kw=dict( height_ratios=[4,1],hspace=0.05)) # Plot the data and model fit on the top plot axs[0].semilogy(eng_ld,data_ld,'.',ms=0.5,color='k') for i in np.arange(len(data_ld)): axs[0].plot([eng_ld[i],eng_ld[i]],[data_ld[i]-edata_ld[i],data_ld[i]+edata_ld[i]],color='k') axs[0].plot([eng_ld[i]-deng_ld[i],eng_ld[i]+deng_ld[i]],[data_ld[i],data_ld[i]],color='k') axs[0].plot(eng_ld,mod_ld,color='firebrick',drawstyle='steps-mid') axs[0].set_ylabel('NuSTAR count s$^{-1}$ keV$^{-1}$') ylim=[2e-1,7e4] xlim=[2,9] axs[0].set_ylim(ylim) for aa in axs: aa.set_xlim(xlim) aa.label_outer() # Put the actual fit values on the top plot labt="{0:5.3f}".format(t1) labtup="{0:5.3f}".format(t1_cr[1]-t1) labtlw="{0:5.3f}".format(t1-t1_cr[0]) powem1=np.floor(np.log10(em1)) labpowem1="{0:2d}".format(int(powem1)) labem="{0:5.2f}".format(em1/10**powem1) labemup="{0:5.2f}".format((em1_cr[1]-em1)/10**powem1) labemlw="{0:5.2f}".format((em1-em1_cr[0])/10**powem1) labf="{0:5.2f}".format(fac) labfup="{0:5.2f}".format(fac_cr[1]-fac) labflw="{0:5.2f}".format(fac-fac_cr[0]) axs[0].text(0.02,0.92,labt+" $^{+"+labtup+"}_{-"+labtlw+"}\;MK,$",color='firebrick',ha='left',transform=axs[0].transAxes) axs[0].text(0.34,0.92,labem+" $^{+"+labemup+"}_{-"+labemlw+"}\,×\,10^{"+labpowem1+"}\;cm^{-3},$",color='firebrick',ha='left',transform=axs[0].transAxes) axs[0].text(0.78,0.92,labf+" $^{+"+labfup+"}_{-"+labflw+"}$",color='k',ha='left',transform=axs[0].transAxes) # You need to specify this yourself fiter=[2.5,7.0] axs[0].plot([fiter[0],fiter[0]],[ylim[0],10**(0.8*np.log10(ylim[1]))],':',color='grey') axs[0].plot([fiter[1],fiter[1]],[ylim[0],10**(0.8*np.log10(ylim[1]))],':',color='grey') # Calculate and plot the residuals on the bottom plot resid=(data_ld-mod_ld)/edata_ld axs[1].plot(eng_ld,resid,'.',ms=0.5,color='k') axs[1].set_ylim([-4,4]) axs[1].set_xlabel('Energy [keV]') axs[1].plot(xlim,[0,0],'--',color='grey') for i in np.arange(len(data_ld)): axs[1].plot([eng_ld[i]-deng_ld[i],eng_ld[i]+deng_ld[i]],[resid[i],resid[i]],color='firebrick') axs[1].set_ylabel('Resid') plt.show()
_____no_output_____
MIT
xspec/example_xspec.ipynb
ianan/nsigh
Inverse Kinematics Optimization

The previous doc explained features and how they define objectives of a constrained optimization problem. Here we show how to use this to solve IK optimization problems. At the bottom there is more general text explaining the basic concepts.

Demo of features in Inverse Kinematics

Let's set up a standard configuration. (Lock the window with "Always on Top".)
import sys sys.path.append('../build') #rai/lib') import numpy as np import libry as ry C = ry.Config() C.addFile('../rai-robotModels/pr2/pr2.g') C.addFile('../rai-robotModels/objects/kitchen.g') C.view()
_____no_output_____
MIT
tutorials/3-IK-optimization.ipynb
Zwoelf12/rai-python
For simplicity, let's add a frame that represents goals
goal = C.addFrame("goal") goal.setShape(ry.ST.sphere, [.05]) goal.setColor([.5,1,1]) goal.setPosition([1,.5,1]) X0 = C.getFrameState() #store the initial configuration
_____no_output_____
MIT
tutorials/3-IK-optimization.ipynb
Zwoelf12/rai-python
We create an IK engine. The only objective is that the `positionDiff` (position difference in world coordinates) between `pr2L` (the yellow blob in the left hand) and `goal` is equal to zero:
IK = C.komo_IK(False) IK.addObjective(type=ry.OT.eq, times = [1,2], feature=ry.FS.positionDiff, frames=['pr2L', 'goal'])
_____no_output_____
MIT
tutorials/3-IK-optimization.ipynb
Zwoelf12/rai-python
We now call the optimizer (True means with random initialization/restart).
IK.optimize() IK.getReport()
** KOMO::run solver:dense collisions:0 x-dim:25 T:1 k:1 phases:1 stepsPerPhase:1 tau:1 #timeSlices:2 #totalDOFs:25 #frames:358 ** optimization time:0.00914103 (kin:0.000131 coll:0.000132 feat:0 newton: 0.00105) setJointStateCount:35 sos:0.0808073 ineq:0 eq:0.238354
MIT
tutorials/3-IK-optimization.ipynb
Zwoelf12/rai-python
The best way to retrieve the result is to copy the optimized IK configuration back into your working configuration C, which is now also displayed
#IK.getFrameState(1) C.setFrameState(IK.getFrameState(0))
_____no_output_____
MIT
tutorials/3-IK-optimization.ipynb
Zwoelf12/rai-python
We can redo the optimization, but for a different configuration, namely a configuration where the goal is in another location.For this we move goal in our working configuration C, then copy C back into the IK engine's configurations:
## (iterate executing this cell for different goal locations!) # move goal goal.setPosition([.8,.2,.5]) # copy C into the IK's internal configuration(s) IK.setConfigurations(C) # reoptimize IK.optimize(0.) # 0: no adding of noise for a random restart #print(IK.getReport()) print(np.shape(IK.getFrameState(0))) print(np.shape(IK.getFrameState(0)[1])) # grab result # C.setFrameState( IK.getConfiguration(1) ) C.setFrameState(IK.getFrameState(0))
** KOMO::run solver:dense collisions:0 x-dim:25 T:1 k:1 phases:1 stepsPerPhase:1 tau:1 #timeSlices:2 #totalDOFs:25 #frames:358 ** optimization time:0.000305789 (kin:0.000238 coll:0.000149 feat:0 newton: 0.001415) setJointStateCount:3 sos:0.000285026 ineq:0 eq:0.0270084 (179, 7) (7,)
MIT
tutorials/3-IK-optimization.ipynb
Zwoelf12/rai-python
Let's solve some other problems, always creating a novel IK engine:The relative position of `goal` in `pr2R` coordinates equals [0,0,-.2] (which is 20cm straight in front of the yellow blob)
C.setFrameState(X0) IK = C.komo_IK(False) IK.addObjective(type=ry.OT.eq,times=[1], feature=ry.FS.positionRel, frames=['goal','pr2R'], target=[0,0,-.2]) IK.optimize() C.setFrameState(IK.getFrameState(0))
** KOMO::run solver:dense collisions:0 x-dim:25 T:1 k:1 phases:1 stepsPerPhase:1 tau:1 #timeSlices:2 #totalDOFs:25 #frames:358 ** optimization time:0.00105824 (kin:5.2e-05 coll:1.1e-05 feat:0 newton: 0.000124) setJointStateCount:12 sos:0.00848536 ineq:0 eq:0.0341739
MIT
tutorials/3-IK-optimization.ipynb
Zwoelf12/rai-python
The distance between `pr2R` and `pr2L` is zero:
C.setFrameState(X0) IK = C.komo_IK(False) IK.addObjective(type=ry.OT.eq, times=[1], feature=ry.FS.distance, frames=['pr2L','pr2R']) IK.optimize() C.setFrameState(IK.getFrameState(0))
** KOMO::run solver:dense collisions:0 x-dim:25 T:1 k:1 phases:1 stepsPerPhase:1 tau:1 #timeSlices:2 #totalDOFs:25 #frames:358 ** optimization time:0.00069327 (kin:3.3e-05 coll:5e-06 feat:0 newton: 5.9e-05) setJointStateCount:6 sos:0.00209253 ineq:0 eq:0.0149894
MIT
tutorials/3-IK-optimization.ipynb
Zwoelf12/rai-python
The 3D difference between the z-vector of `pr2R` and the z-vector of `goal`:
C.setFrameState(X0) IK = C.komo_IK(False) IK.addObjective(type=ry.OT.eq, times=[1], feature=ry.FS.vectorZDiff, frames=['pr2R', 'goal']) IK.optimize() C.setFrameState(IK.getFrameState(0))
** KOMO::run solver:dense collisions:0 x-dim:25 T:1 k:1 phases:1 stepsPerPhase:1 tau:1 #timeSlices:2 #totalDOFs:25 #frames:358 ** optimization time:0.00144349 (kin:0.000111 coll:2.9e-05 feat:0 newton: 0.000115) setJointStateCount:12 sos:0.0163838 ineq:0 eq:0.0143332
MIT
tutorials/3-IK-optimization.ipynb
Zwoelf12/rai-python
The scalar product between the z-vector of `pr2R` and the z-vector of `goal` is zero:
C.setFrameState(X0) IK = C.komo_IK(False) IK.addObjective(type=ry.OT.eq, times=[1], feature=ry.FS.scalarProductZZ, frames=['pr2R', 'goal']) IK.optimize() C.setFrameState(IK.getFrameState(0))
** KOMO::run solver:dense collisions:0 x-dim:25 T:1 k:1 phases:1 stepsPerPhase:1 tau:1 #timeSlices:2 #totalDOFs:25 #frames:358 ** optimization time:0.000686185 (kin:7.1e-05 coll:3e-06 feat:0 newton: 4.2e-05) setJointStateCount:4 sos:0.000248896 ineq:0 eq:0.00308733
MIT
tutorials/3-IK-optimization.ipynb
Zwoelf12/rai-python
etc. etc.

More explanations

All methods to compute paths or configurations solve constrained optimization problems. To use them, you need to learn to define constrained optimization problems. Some definitions:

* An objective defines either a sum-of-squares cost term, or an equality constraint, or an inequality constraint in the optimization problem. An objective is defined by its type and its feature. The type can be `sos` (sum-of-squares), `eq`, or `ineq`, referring to the three cases.
* A feature is a (differentiable) mapping from the decision variable (the full path, or robot configuration) to a feature space. If the feature space is, e.g., 3-dimensional, this defines 3 sum-of-squares terms, or 3 inequality, or 3 equality constraints, one for each dimension. For instance, the feature can be the 3-dim robot hand position in the 15th time slice of a path optimization problem. If you put an equality constraint on this feature, then this adds 3 equality constraints to the overall path optimization problem.
* A feature is defined by the keyword for the feature map (e.g., `pos` for position), typically by a set of frame names that tell which objects we refer to (e.g., `pr2L` for the left hand of the pr2), optionally some modifiers (e.g., a scale or target, which linearly transform the feature map), and the set of configuration IDs or time slices the feature is to be computed from (e.g., `confs=[15]` if the feature is to be computed from the 15th time slice in a path optimization problem).
* In path optimization problems, we often want to add objectives for a whole time interval rather than for a single time slice or specific configuration. E.g., avoid collisions from start to end. When adding objectives to the optimization problem we can specify whole intervals, with `times=[1., 2.]`, so that the objective is added to each configuration in this time interval.
* Some features, especially velocity and acceleration, refer to a tuple of (consecutive) configurations. E.g., when you impose an acceleration feature, you need to specify `confs=[13, 14, 15]`. Or if you use `times=[1.,1.]`, the acceleration feature is computed from the configuration that corresponds to time=1 and the two configurations *before*.
* All kinematic feature maps (that depend on only one configuration) can be modified to become velocity or acceleration features. E.g., the position feature map can be modified to become a velocity or acceleration feature.
* The `sos`, `eq`, and `ineq` always refer to the feature map being *zero*, e.g., constraining all features to be equal to zero with `eq`. This is natural for many features, esp. when they refer to differences (e.g. `posDiff` or `posRel`, which compute the relative position between two frames). However, when one aims to constrain the feature to a non-zero constant value, one can modify the objective with a `target` specification (a small sketch follows the grasp example below).
* Finally, all features can be rescaled with a `scale` specification. Rescaling changes the costs that arise from `sos` objectives. Rescaling also has significant impact on the convergence behavior for `eq` and `ineq` constraints. As a guide: scale constraints so that if they *would* be treated as squared penalties (squaredPenalty optim mode, to be made accessible) convergence to reasonable approximate solutions is efficient. Then the AugLag will also converge efficiently to precise constraints.
# Designing a cylinder grasp D=0 C=0 import sys sys.path.append('../build') #rai/lib') import numpy as np import libry as ry C = ry.Config() C.addFile('../rai-robotModels/pr2/pr2.g') C.addFile('../rai-robotModels/objects/kitchen.g') C.view() C.setJointState([.7], ["l_gripper_l_finger_joint"]) C.setJointState( C.getJointState() ) goal = C.addFrame("goal") goal.setShape(ry.ST.cylinder, [0,0,.2, .03]) goal.setColor([.5,1,1]) goal.setPosition([1.81,.5,1]) X0 = C.getFrameState() C.setFrameState(X0) goal.setPosition([1.81,.5,1]) IK = C.komo_IK(False) IK.addObjective(type=ry.OT.eq, times=[1],feature=ry.FS.positionDiff, frames=['pr2L', 'goal'], scale=[[1,0,0],[0,1,0]]) IK.addObjective(type=ry.OT.ineq, times=[1], feature=ry.FS.positionDiff, frames=['pr2L', 'goal'], scale=[[0,0,1]], target=[0,0,.0005]) IK.addObjective(type=ry.OT.ineq, times=[1], feature=ry.FS.positionDiff, frames=['pr2L', 'goal'], scale=[[0,0,-1]], target=[0,0,-.0005]) IK.addObjective(type=ry.OT.sos, times=[1], feature=ry.FS.scalarProductZZ, frames=['pr2L', 'goal'], scale=[0.1]) IK.addObjective(type=ry.OT.eq, times=[1], feature=ry.FS.scalarProductXZ, frames=['pr2L', 'goal']) IK.optimize() C.setFrameState(IK.getFrameState(0)) IK.getReport() IK.view()
_____no_output_____
MIT
tutorials/3-IK-optimization.ipynb
Zwoelf12/rai-python
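As a small follow-up to the notes on `target` and `scale` above, here is a sketch that combines a non-zero target with a down-weighted sum-of-squares term, reusing only calls that already appear in this notebook (the particular frames and numbers are illustrative):

```python
# assumes C, X0, goal and the libry import from the cells above
C.setFrameState(X0)
IK = C.komo_IK(False)
# equality constraint against a non-zero target: goal sits 20 cm in front of pr2R
IK.addObjective(type=ry.OT.eq, times=[1], feature=ry.FS.positionRel,
                frames=['goal', 'pr2R'], target=[0, 0, -.2])
# rescaled sum-of-squares term: softly prefer aligned z-axes, with a low weight
IK.addObjective(type=ry.OT.sos, times=[1], feature=ry.FS.scalarProductZZ,
                frames=['pr2R', 'goal'], scale=[0.1])
IK.optimize()
C.setFrameState(IK.getFrameState(0))
```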
Welcome to Python Fundamentals

In this module, we are going to establish or review our skills in Python programming. In this notebook we are going to cover:
* Variables and Data Types
* Operations
* Input and Output Operations
* Logic Control
* Iterables
* Functions

Variables and Data Types
x = 1 a,b = 0, -1 type(x) y = 1,0 type(y) x = float(x) type(x) s,t,u ="0", "1", "one" type(s) s_int = int(s) s_int
_____no_output_____
Apache-2.0
Activity_1_Python_Fundamentals.ipynb
catherinedrio/Linear-Algebra_ChE_2nd-Sem-2021-2022
Operations Arithmetic
a,b,c,d = 2.0, -0.5, 0, -32 ### Addition S = a+b S ### Subtraction D = b-d D ### Multiplication P = a*d P ### Division Q = c/d Q ### Floor Division Fq = a//b Fq ### Exponentiation E = a**b E ### Modulo mod = d%a mod
_____no_output_____
Apache-2.0
Activity_1_Python_Fundamentals.ipynb
catherinedrio/Linear-Algebra_ChE_2nd-Sem-2021-2022
Assignment Operations
G, H, J, K = 0, 100, 2, 2 G += a G H -= d H J *= 2 J K **= 3 K
_____no_output_____
Apache-2.0
Activity_1_Python_Fundamentals.ipynb
catherinedrio/Linear-Algebra_ChE_2nd-Sem-2021-2022
Comparators
res_1, res_2, res_3 = 1, 2.0, "1" true_val = 1.0 ## Equality res_1 == true_val ## Non-equality res_2 != true_val ## Inequality t1 = res_1 > res_2 t2 = res_1 < res_2/2 t3 = res_1 >= res_2/2 t4 = res_1 <= res_2 t1
_____no_output_____
Apache-2.0
Activity_1_Python_Fundamentals.ipynb
catherinedrio/Linear-Algebra_ChE_2nd-Sem-2021-2022
Logical
res_1 == true_val res_1 is true_val res_1 is not true_val p, q = True, False conj = p and q conj p, q = True, False disj = p or q disj p, q = True, False nand = not(p and q) nand p, q = True, False xor = (not p and q) or (p and not q) xor
_____no_output_____
Apache-2.0
Activity_1_Python_Fundamentals.ipynb
catherinedrio/Linear-Algebra_ChE_2nd-Sem-2021-2022
I/O (Input and Output Operations)
print ("Hello World") cnt = 1 string = "Hello World" print(string, ", Current run count is:", cnt) cnt +=1 print(f"{string}, Current count is {cnt}") sem_grade = 82.24356457461234 name = "cath" print("Hello {}, your semestral grade is: {}".format(name, sem_grade)) w_pg, w_mg, w_fg = 0.3, 0.3, 0.4 print("The weights of your semestral grades are:\ \n\t{:.2%} for Prelims\ \n\t{:.2%} for Midterms, and\ \n\t{:.2%} for Finals, ".format(w_pg, w_mg, w_fg)) x = input("enter a number: ") x name = input("kimi no nawa: ") pg = input("Enter prelim grade: ") mg = input("Enter midterm grade: ") fg = input("Enter finals grade: ") sem_grade = None print("Hello {}, your semestral grade is: {}". format (name, sem_grade))
kimi no nawa: Cath Enter prelim grade: 1.00 Enter midterm grade: 1.00 Enter finals grade: 1.00 Hello Cath, your semestral grade is: None
Apache-2.0
Activity_1_Python_Fundamentals.ipynb
catherinedrio/Linear-Algebra_ChE_2nd-Sem-2021-2022
Looping Statements While
## while loops i, j = 0, 10 while(i<=j): print(f"{i}\t|\t{j}") i+=1
0 | 10 1 | 10 2 | 10 3 | 10 4 | 10 5 | 10 6 | 10 7 | 10 8 | 10 9 | 10 10 | 10
Apache-2.0
Activity_1_Python_Fundamentals.ipynb
catherinedrio/Linear-Algebra_ChE_2nd-Sem-2021-2022
For
# for(int i=0; i<10; i++){ # printf(i) # } i=0 for i in range(11): print(i) playlist = ["Crazier", "Bahay-Kubo", "Happier"] print('Now Playing:\n') for song in playlist: print(song)
Now Playing: Crazier Bahay-Kubo Happier
Apache-2.0
Activity_1_Python_Fundamentals.ipynb
catherinedrio/Linear-Algebra_ChE_2nd-Sem-2021-2022
Flow Control: Conditional Statements
numeral1, numeral2 = 12, 12 if(numeral1 == numeral2): print("Yey") elif(numeral1>numeral2): print("Hoho") else: print("AWW") print("Hip hip")
Yey Hip hip
Apache-2.0
Activity_1_Python_Fundamentals.ipynb
catherinedrio/Linear-Algebra_ChE_2nd-Sem-2021-2022
Functions
# void DeleteUser(int userid){
#     delete(userid);
# }
def delete_user(userid):
    print("Successfully deleted user: {}". format(userid))

def delete_all_users():
    print("Successfully deleted all users")

userid = 202011844
delete_user(202011844)
delete_all_users()

def add(addend1, addend2):
    print("I know how to add addend1 and addend2")
    return addend1 + addend2

def power_of_base2(exponent):
    return 2**exponent

addend1 = 5
addend2 = 10
exponent = 5

#add(addend1, addend2)
power_of_base2(exponent)
_____no_output_____
Apache-2.0
Activity_1_Python_Fundamentals.ipynb
catherinedrio/Linear-Algebra_ChE_2nd-Sem-2021-2022
Grade Calculator

Create a grade calculator that computes the semestral grade of a course. Students could type their names, the name of the course, then their prelim, midterm, and final grade. The program should print the semestral grade to 2 decimal points and should display the following emojis depending on the situation:
- happy - when the grade is greater than 70.00
- laughing - when the grade is exactly 70.00
- sad - when the grade is below 70.00

happy, lol, sad - "\U0001F600", "\U0001F606", "\U0001F62D"
w_pg, w_mg, w_fg = 0.3, 0.3, 0.4 name = input("Enter your name: ") course = input("Enter your course: ") pg = float(input("Enter prelim grade: ")) mg = float(input("Enter midterm grade: ")) fg = float(input("Enter final grade: ")) sem_grade = (pg*w_pg)+(mg*w_mg)+(fg*w_fg) print("Hello {} from {}, your semestral grade is: {}" .format(name, course, round(sem_grade,2))) if(sem_grade > 70.00): print("\U0001f600") elif(sem_grade == 70.00): print("\U0001F606") else: print("\U0001F620")
Enter your name: Catherine Enter your course: BS Chemical Engineering Enter prelim grade: 97 Enter midterm grade: 98 Enter final grade: 99 Hello Catherine from BS Chemical Engineering, your semestral grade is: 98.1 😀
Apache-2.0
Activity_1_Python_Fundamentals.ipynb
catherinedrio/Linear-Algebra_ChE_2nd-Sem-2021-2022
Nifti Read Example

The purpose of this notebook is to illustrate reading Nifti files and iterating over patches of the volumes loaded from them.
%matplotlib inline import os import sys from glob import glob import tempfile import numpy as np import matplotlib.pyplot as plt import nibabel as nib import torch from torch.utils.data import DataLoader import monai from monai.data import NiftiDataset, GridPatchDataset, create_test_image_3d from monai.transforms import Compose, AddChannel, Transpose, ScaleIntensity, ToTensor, RandSpatialCrop monai.config.print_config()
MONAI version: 0.1a1.dev8+6.gb3c5761.dirty Python version: 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) [GCC 7.3.0] Numpy version: 1.18.1 Pytorch version: 1.4.0 Ignite version: 0.3.0
Apache-2.0
examples/notebooks/nifti_read_example.ipynb
gml16/MONAI
Create a number of test Nifti files:
tempdir = tempfile.mkdtemp() for i in range(5): im, seg = create_test_image_3d(128, 128, 128) n = nib.Nifti1Image(im, np.eye(4)) nib.save(n, os.path.join(tempdir, 'im%i.nii.gz'%i)) n = nib.Nifti1Image(seg, np.eye(4)) nib.save(n, os.path.join(tempdir, 'seg%i.nii.gz'%i))
_____no_output_____
Apache-2.0
examples/notebooks/nifti_read_example.ipynb
gml16/MONAI
Create a data loader which yields uniform random patches from loaded Nifti files:
images = sorted(glob(os.path.join(tempdir, 'im*.nii.gz'))) segs = sorted(glob(os.path.join(tempdir, 'seg*.nii.gz'))) imtrans = Compose([ ScaleIntensity(), AddChannel(), RandSpatialCrop((64, 64, 64), random_size=False), ToTensor() ]) segtrans = Compose([ AddChannel(), RandSpatialCrop((64, 64, 64), random_size=False), ToTensor() ]) ds = NiftiDataset(images, segs, transform=imtrans, seg_transform=segtrans) loader = DataLoader(ds, batch_size=10, num_workers=2, pin_memory=torch.cuda.is_available()) im, seg = monai.utils.misc.first(loader) print(im.shape, seg.shape)
torch.Size([5, 1, 64, 64, 64]) torch.Size([5, 1, 64, 64, 64])
Apache-2.0
examples/notebooks/nifti_read_example.ipynb
gml16/MONAI
Alternatively create a data loader which yields patches in regular grid order from loaded images:
imtrans = Compose([ ScaleIntensity(), AddChannel(), ToTensor() ]) segtrans = Compose([ AddChannel(), ToTensor() ]) ds = NiftiDataset(images, segs, transform=imtrans, seg_transform=segtrans) ds = GridPatchDataset(ds, (64, 64, 64)) loader = DataLoader(ds, batch_size=10, num_workers=2, pin_memory=torch.cuda.is_available()) im, seg = monai.utils.misc.first(loader) print(im.shape, seg.shape) !rm -rf {tempdir}
_____no_output_____
Apache-2.0
examples/notebooks/nifti_read_example.ipynb
gml16/MONAI
Network Communities Detection

In this notebook, we will explore some methods to perform community detection using several algorithms. Before testing the algorithms, let us create a simple benchmark graph.
%matplotlib inline from matplotlib import pyplot as plt import numpy as np import pandas as pd import networkx as nx G = nx.barbell_graph(m1=10, m2=4)
_____no_output_____
MIT
Chapter05/02_community_detection_algorithms.ipynb
Wapiti08/Graph-Machine-Learning
Matrix Factorization

We start by using a matrix factorization technique to extract the embeddings, which are visualized and then clustered with traditional clustering algorithms.
from gem.embedding.hope import HOPE gf = HOPE(d=4, beta=0.01) gf.learn_embedding(G) embeddings = gf.get_embedding() from sklearn.manifold import TSNE tsne = TSNE(n_components=2) emb2d = tsne.fit_transform(embeddings) plt.plot(embeddings[:, 0], embeddings[:, 1], 'o', linewidth=0)
_____no_output_____
MIT
Chapter05/02_community_detection_algorithms.ipynb
Wapiti08/Graph-Machine-Learning
We start by using a GaussianMixture model to perform the clustering
from sklearn.mixture import GaussianMixture gm = GaussianMixture(n_components=3, random_state=0) #.(embeddings) labels = gm.fit_predict(embeddings) colors = ["blue", "green", "red"] nx.draw_spring(G, node_color=[colors[label] for label in labels])
_____no_output_____
MIT
Chapter05/02_community_detection_algorithms.ipynb
Wapiti08/Graph-Machine-Learning
Spectral Clustering We now perform a spectral clustering based on the adjacency matrix of the graph. It is worth noting that this clustering is not a mutually exclusive clustering and nodes may belong to more than one community
adj=np.array(nx.adjacency_matrix(G).todense()) from communities.algorithms import spectral_clustering communities = spectral_clustering(adj, k=3)
_____no_output_____
MIT
Chapter05/02_community_detection_algorithms.ipynb
Wapiti08/Graph-Machine-Learning
In the next plot we highlight the nodes that belong to a community using the red color. The blue nodes do not belong to the given community
plt.figure(figsize=(20, 5)) for ith, community in enumerate(communities): cols = ["red" if node in community else "blue" for node in G.nodes] plt.subplot(1,3,ith+1) plt.title(f"Community {ith}") nx.draw_spring(G, node_color=cols)
_____no_output_____
MIT
Chapter05/02_community_detection_algorithms.ipynb
Wapiti08/Graph-Machine-Learning
The next command shows the node ids belonging to the different communities
communities
_____no_output_____
MIT
Chapter05/02_community_detection_algorithms.ipynb
Wapiti08/Graph-Machine-Learning
Non Negative Matrix Factorization Here, we again use matrix factorization, but now using the Non-Negative Matrix Factorization, and associating the clusters with the latent dimensions.
from sklearn.decomposition import NMF nmf = NMF(n_components=2) emb = nmf.fit_transform(adj) plt.plot(emb[:, 0], emb[:, 1], 'o', linewidth=0)
_____no_output_____
MIT
Chapter05/02_community_detection_algorithms.ipynb
Wapiti08/Graph-Machine-Learning
By setting a threshold value of 0.01, we determine which nodes belong to the given community.
communities = [set(np.where(emb[:,ith]>0.01)[0]) for ith in range(2)] plt.figure(figsize=(20, 5)) for ith, community in enumerate(communities): cols = ["red" if node in community else "blue" for node in G.nodes] plt.subplot(1,3,ith+1) plt.title(f"Community {ith}") nx.draw_spring(G, node_color=cols)
_____no_output_____
MIT
Chapter05/02_community_detection_algorithms.ipynb
Wapiti08/Graph-Machine-Learning
Although the example above does not show this, in general also this clustering method may be non-mutually exclusive, and nodes may belong to more than one community Louvain and Modularity Optimization Here, we use the Louvain method, which is one of the most popular methods for performing community detection, even on fairly large graphs. As described in the chapter, the Louvain method basically optimize the partitioning (it is a mutually exclusing community detection algorithm), identifying the one that maximize the modularity score, meaning that nodes belonging to the same community are very well connected among themself, and weakly connected to the other communities. **Louvain, unlike other community detection algorithms, does not require to specity the number of communities in advance and find the best, optimal number of communities.**
from communities.algorithms import louvain_method communities = louvain_method(adj) c = pd.Series({node: colors[ith] for ith, nodes in enumerate(communities) for node in nodes}).values nx.draw_spring(G, node_color=c) communities
_____no_output_____
MIT
Chapter05/02_community_detection_algorithms.ipynb
Wapiti08/Graph-Machine-Learning
Girvan Newman

The Girvan–Newman algorithm detects communities by progressively removing edges from the original graph. The algorithm removes the "most valuable" edge, traditionally the edge with the highest betweenness centrality, at each step. As the graph breaks down into pieces, the tightly knit community structure is exposed and the result can be depicted as a dendrogram.

**BE AWARE that because of the betweenness centrality computation, this method may not scale well on large graphs**
from communities.algorithms import girvan_newman communities = girvan_newman(adj, n=2) c = pd.Series({node: colors[ith] for ith, nodes in enumerate(communities) for node in nodes}).values nx.draw_spring(G, node_color=c) communities
_____no_output_____
MIT
Chapter05/02_community_detection_algorithms.ipynb
Wapiti08/Graph-Machine-Learning
ML Pipeline Preparation

Follow the instructions below to help you create your ML pipeline.

1. Import libraries and load data from database.
- Import Python libraries
- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
- Define feature and target variables X and Y
# import libraries import pandas as pd from sqlalchemy import create_engine from sklearn.model_selection import train_test_split, GridSearchCV from sklearn.multioutput import MultiOutputClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import Pipeline, FeatureUnion from sklearn.metrics import classification_report from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer from nltk.tokenize import word_tokenize from nltk.stem import WordNetLemmatizer import pickle import nltk nltk.download(['punkt', 'wordnet', 'averaged_perceptron_tagger']) # load data from database engine = create_engine('sqlite:///DISASTER.db') df = pd.read_sql_table("CLEAN_MESSAGES", engine) X = df['message'] Y = df.iloc[:,4:] X.head()
_____no_output_____
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
2. Write a tokenization function to process your text data
def tokenize(text): tokens = word_tokenize(text) lemmatizer = WordNetLemmatizer() clean_tokens = [] for tok in tokens: clean_tok = lemmatizer.lemmatize(tok).lower().strip() clean_tokens.append(clean_tok) return clean_tokens
_____no_output_____
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
3. Build a machine learning pipeline

This machine learning pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
# create the NLP ML Pipeline pipeline = Pipeline([ ('vect', CountVectorizer(tokenizer=tokenize)), ('tfidf', TfidfTransformer()), ('clf', MultiOutputClassifier(RandomForestClassifier())) ])
_____no_output_____
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
4. Train pipeline
- Split data into train and test sets
- Train pipeline
# Split the data in train and test sets X_train, X_test, Y_train, Y_test = train_test_split(X, Y) # Fit the pipeline pipeline.fit(X_train, Y_train)
_____no_output_____
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
5. Test your model

Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
# Predict using the test data Y_pred = pipeline.predict(X_test) # Print the classification report for for each column for i, column in enumerate(Y_train.columns): print("Columns: ", column) print(classification_report(Y_test.values[:,i], Y_pred[:,i])) print()
Columns: related precision recall f1-score support 0 0.76 0.26 0.39 1539 1 0.80 0.98 0.88 4967 2 1.00 0.04 0.08 48 accuracy 0.80 6554 macro avg 0.86 0.43 0.45 6554 weighted avg 0.80 0.80 0.76 6554 Columns: request precision recall f1-score support 0 0.89 0.99 0.94 5442 1 0.89 0.40 0.56 1112 accuracy 0.89 6554 macro avg 0.89 0.70 0.75 6554 weighted avg 0.89 0.89 0.87 6554 Columns: offer precision recall f1-score support 0 1.00 1.00 1.00 6527 1 0.00 0.00 0.00 27 accuracy 1.00 6554 macro avg 0.50 0.50 0.50 6554 weighted avg 0.99 1.00 0.99 6554 Columns: aid_related precision recall f1-score support 0 0.77 0.89 0.82 3873 1 0.79 0.61 0.69 2681 accuracy 0.77 6554 macro avg 0.78 0.75 0.76 6554 weighted avg 0.78 0.77 0.77 6554 Columns: medical_help precision recall f1-score support 0 0.93 1.00 0.96 6054 1 0.62 0.05 0.10 500 accuracy 0.93 6554 macro avg 0.77 0.52 0.53 6554 weighted avg 0.90 0.93 0.90 6554 Columns: medical_products precision recall f1-score support 0 0.95 1.00 0.97 6221 1 0.72 0.05 0.10 333 accuracy 0.95 6554 macro avg 0.84 0.53 0.54 6554 weighted avg 0.94 0.95 0.93 6554 Columns: search_and_rescue precision recall f1-score support 0 0.97 1.00 0.99 6358 1 0.86 0.03 0.06 196 accuracy 0.97 6554 macro avg 0.91 0.52 0.52 6554 weighted avg 0.97 0.97 0.96 6554 Columns: security precision recall f1-score support 0 0.98 1.00 0.99 6430 1 0.00 0.00 0.00 124 accuracy 0.98 6554 macro avg 0.49 0.50 0.50 6554 weighted avg 0.96 0.98 0.97 6554 Columns: military precision recall f1-score support 0 0.97 1.00 0.98 6350 1 0.75 0.03 0.06 204 accuracy 0.97 6554 macro avg 0.86 0.51 0.52 6554 weighted avg 0.96 0.97 0.96 6554 Columns: child_alone precision recall f1-score support 0 1.00 1.00 1.00 6554 accuracy 1.00 6554 macro avg 1.00 1.00 1.00 6554 weighted avg 1.00 1.00 1.00 6554 Columns: water precision recall f1-score support 0 0.95 1.00 0.97 6117 1 0.95 0.21 0.34 437 accuracy 0.95 6554 macro avg 0.95 0.60 0.66 6554 weighted avg 0.95 0.95 0.93 6554 Columns: food precision recall f1-score support 0 0.94 0.99 0.96 5843 1 0.83 0.48 0.61 711 accuracy 0.93 6554 macro avg 0.89 0.73 0.79 6554 weighted avg 0.93 0.93 0.92 6554 Columns: shelter precision recall f1-score support 0 0.93 1.00 0.96 5978 1 0.85 0.25 0.38 576 accuracy 0.93 6554 macro avg 0.89 0.62 0.67 6554 weighted avg 0.92 0.93 0.91 6554 Columns: clothing precision recall f1-score support 0 0.98 1.00 0.99 6442 1 0.67 0.05 0.10 112 accuracy 0.98 6554 macro avg 0.83 0.53 0.55 6554 weighted avg 0.98 0.98 0.98 6554 Columns: money precision recall f1-score support 0 0.98 1.00 0.99 6407 1 1.00 0.02 0.04 147 accuracy 0.98 6554 macro avg 0.99 0.51 0.51 6554 weighted avg 0.98 0.98 0.97 6554 Columns: missing_people precision recall f1-score support 0 0.99 1.00 0.99 6486 1 1.00 0.01 0.03 68 accuracy 0.99 6554 macro avg 0.99 0.51 0.51 6554 weighted avg 0.99 0.99 0.98 6554 Columns: refugees
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
6. Improve your model. Use grid search to find better parameters.
# Define GridSearch parameters parameters = {'clf__estimator__n_estimators': range(100,200,100), 'clf__estimator__min_samples_split': range(2,3)} # Instantiate GridSearch object cv = GridSearchCV(pipeline, param_grid=parameters, n_jobs=4) # Use GridSearch to find the best parameters cv.fit(X_train, Y_train)
_____no_output_____
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
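Once the search has finished, the fitted GridSearchCV object exposes the winning parameter combination and its mean cross-validated score; a short inspection sketch, assuming the cell above has completed:

# Best parameter combination found by the grid search
print(cv.best_params_)

# Mean cross-validated score of the corresponding estimator
print(cv.best_score_)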
7. Test your model. Show the accuracy, precision, and recall of the tuned model. Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine-tune your models for accuracy, precision, and recall to make your project stand out - especially for your portfolio!
# Predict using the trained model with the best parameters Y_pred = cv.predict(X_test) # Print the classification report for each column for i, column in enumerate(Y_train.columns): print("Columns: ", column) print(classification_report(Y_test.values[:,i], Y_pred[:,i])) print()
Columns: related precision recall f1-score support 0 0.77 0.25 0.38 1539 1 0.80 0.98 0.88 4967 2 1.00 0.04 0.08 48 accuracy 0.80 6554 macro avg 0.86 0.42 0.45 6554 weighted avg 0.80 0.80 0.76 6554 Columns: request precision recall f1-score support 0 0.89 0.99 0.94 5442 1 0.88 0.42 0.57 1112 accuracy 0.89 6554 macro avg 0.89 0.70 0.75 6554 weighted avg 0.89 0.89 0.88 6554 Columns: offer precision recall f1-score support 0 1.00 1.00 1.00 6527 1 0.00 0.00 0.00 27 accuracy 1.00 6554 macro avg 0.50 0.50 0.50 6554 weighted avg 0.99 1.00 0.99 6554 Columns: aid_related precision recall f1-score support 0 0.77 0.88 0.82 3873 1 0.79 0.63 0.70 2681 accuracy 0.78 6554 macro avg 0.78 0.76 0.76 6554 weighted avg 0.78 0.78 0.77 6554 Columns: medical_help precision recall f1-score support 0 0.93 1.00 0.96 6054 1 0.68 0.06 0.11 500 accuracy 0.93 6554 macro avg 0.80 0.53 0.54 6554 weighted avg 0.91 0.93 0.90 6554 Columns: medical_products precision recall f1-score support 0 0.95 1.00 0.97 6221 1 0.72 0.05 0.10 333 accuracy 0.95 6554 macro avg 0.84 0.53 0.54 6554 weighted avg 0.94 0.95 0.93 6554 Columns: search_and_rescue precision recall f1-score support 0 0.97 1.00 0.99 6358 1 0.80 0.02 0.04 196 accuracy 0.97 6554 macro avg 0.89 0.51 0.51 6554 weighted avg 0.97 0.97 0.96 6554 Columns: security precision recall f1-score support 0 0.98 1.00 0.99 6430 1 0.00 0.00 0.00 124 accuracy 0.98 6554 macro avg 0.49 0.50 0.50 6554 weighted avg 0.96 0.98 0.97 6554 Columns: military precision recall f1-score support 0 0.97 1.00 0.98 6350 1 0.65 0.05 0.10 204 accuracy 0.97 6554 macro avg 0.81 0.53 0.54 6554 weighted avg 0.96 0.97 0.96 6554 Columns: child_alone precision recall f1-score support 0 1.00 1.00 1.00 6554 accuracy 1.00 6554 macro avg 1.00 1.00 1.00 6554 weighted avg 1.00 1.00 1.00 6554 Columns: water precision recall f1-score support 0 0.95 1.00 0.97 6117 1 0.88 0.21 0.34 437 accuracy 0.95 6554 macro avg 0.91 0.60 0.65 6554 weighted avg 0.94 0.95 0.93 6554 Columns: food precision recall f1-score support 0 0.93 0.99 0.96 5843 1 0.86 0.42 0.56 711 accuracy 0.93 6554 macro avg 0.90 0.70 0.76 6554 weighted avg 0.93 0.93 0.92 6554 Columns: shelter precision recall f1-score support 0 0.93 1.00 0.96 5978 1 0.84 0.21 0.33 576 accuracy 0.93 6554 macro avg 0.88 0.60 0.65 6554 weighted avg 0.92 0.93 0.91 6554 Columns: clothing precision recall f1-score support 0 0.98 1.00 0.99 6442 1 0.73 0.07 0.13 112 accuracy 0.98 6554 macro avg 0.86 0.54 0.56 6554 weighted avg 0.98 0.98 0.98 6554 Columns: money precision recall f1-score support 0 0.98 1.00 0.99 6407 1 1.00 0.02 0.04 147 accuracy 0.98 6554 macro avg 0.99 0.51 0.51 6554 weighted avg 0.98 0.98 0.97 6554 Columns: missing_people precision recall f1-score support 0 0.99 1.00 0.99 6486 1 1.00 0.01 0.03 68 accuracy 0.99 6554 macro avg 0.99 0.51 0.51 6554 weighted avg 0.99 0.99 0.98 6554 Columns: refugees
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
8. Try improving your model further. Here are a few ideas: * try other machine learning algorithms * add other features besides TF-IDF (a hedged sketch of this second idea appears after the cell below)
# TODO: Model is taking too long to fit # I have to find a better engine to process it, # before testing new ideas
_____no_output_____
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
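As a hedged sketch of the second idea (features beyond TF-IDF), one option is to union the existing text pipeline with a simple hand-rolled feature. TextLengthExtractor below is a hypothetical helper, not part of the original project, and the sketch assumes the earlier import and tokenize cells have been run.

from sklearn.base import BaseEstimator, TransformerMixin
import numpy as np

class TextLengthExtractor(BaseEstimator, TransformerMixin):
    """Hypothetical extra feature: character length of each message."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return np.array([len(text) for text in X]).reshape(-1, 1)

# Same text pipeline as before, plus the message-length feature
pipeline_v2 = Pipeline([
    ('features', FeatureUnion([
        ('text', Pipeline([
            ('vect', CountVectorizer(tokenizer=tokenize)),
            ('tfidf', TfidfTransformer())
        ])),
        ('length', TextLengthExtractor())
    ])),
    ('clf', MultiOutputClassifier(RandomForestClassifier()))
])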
9. Export your model as a pickle file
# Save the model to a pickle file with open("DISASSTER_MODEL.pkl", 'wb') as file: file.write(pickle.dumps(cv))
_____no_output_____
MIT
models/ML Pipeline Preparation.ipynb
thiagofuruchima/disaster_message_classification
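To confirm the export worked, the saved model can be read back and used for a prediction. The file name below (original spelling included) matches the cell above, and the sample message is made up.

# Reload the pickled GridSearchCV object and classify a made-up message
with open("DISASSTER_MODEL.pkl", 'rb') as file:
    loaded_model = pickle.load(file)

loaded_model.predict(["We need food and shelter after the storm"])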
Examples and Exercises from Think Stats, 2nd Edition. http://thinkstats2.com Copyright 2016 Allen B. Downey. MIT License: https://opensource.org/licenses/MIT
from __future__ import print_function, division %matplotlib inline import numpy as np import brfss import thinkstats2 import thinkplot
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
I'll start with the data from the BRFSS again.
df = brfss.ReadBrfss(nrows=None)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
Here are the mean and standard deviation of female height in cm.
female = df[df.sex==2] female_heights = female.htm3.dropna() mean, std = female_heights.mean(), female_heights.std() mean, std
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
`NormalPdf` returns a Pdf object that represents the normal distribution with the given parameters. `Density` returns a probability density, which doesn't mean much by itself.
pdf = thinkstats2.NormalPdf(mean, std) pdf.Density(mean + std)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
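As a cross-check, the same density can be computed from the closed-form normal PDF, f(x) = exp(-(x - mean)^2 / (2 std^2)) / (std sqrt(2 pi)); the value should match pdf.Density(mean + std) above. A minimal sketch, assuming the cells defining mean and std have been run:

import numpy as np

# Closed-form normal density one standard deviation above the mean
x = mean + std
np.exp(-(x - mean)**2 / (2 * std**2)) / (std * np.sqrt(2 * np.pi))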
`thinkplot` provides `Pdf`, which plots the probability density with a smooth curve.
thinkplot.Pdf(pdf, label='normal') thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186])
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
`Pdf` provides `MakePmf`, which returns a `Pmf` object that approximates the `Pdf`.
pmf = pdf.MakePmf() thinkplot.Pmf(pmf, label='normal') thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186])
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
If you have a `Pmf`, you can also plot it using `Pdf`, if you have reason to think it should be represented as a smooth curve.
thinkplot.Pdf(pmf, label='normal') thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186])
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
Using a sample from the actual distribution, we can estimate the PDF using Kernel Density Estimation (KDE).If you run this a few times, you'll see how much variation there is in the estimate.
thinkplot.Pdf(pdf, label='normal') sample = np.random.normal(mean, std, 500) sample_pdf = thinkstats2.EstimatedPdf(sample, label='sample') thinkplot.Pdf(sample_pdf, label='sample KDE') thinkplot.Config(xlabel='x', ylabel='PDF', xlim=[140, 186])
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
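thinkstats2's EstimatedPdf is essentially a wrapper around SciPy's Gaussian KDE, so an equivalent estimate can be produced directly; a minimal sketch, assuming sample from the cell above:

from scipy.stats import gaussian_kde
import numpy as np

# Fit a Gaussian KDE to the same sample and evaluate it on a small grid
kde = gaussian_kde(sample)
xs = np.linspace(140, 186, 5)
kde(xs)  # estimated densities at the grid points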
Moments. Raw moments are just sums of powers.
def RawMoment(xs, k): return sum(x**k for x in xs) / len(xs)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
The first raw moment is the mean. The other raw moments don't mean much.
RawMoment(female_heights, 1), RawMoment(female_heights, 2), RawMoment(female_heights, 3) def Mean(xs): return RawMoment(xs, 1) Mean(female_heights)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
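Since the first raw moment is the mean, it should agree with NumPy's built-in; a quick check, assuming the cells above:

import numpy as np

# The first raw moment should match the ordinary sample mean
np.mean(female_heights.values), RawMoment(female_heights, 1)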
The central moments are powers of distances from the mean.
def CentralMoment(xs, k): mean = RawMoment(xs, 1) return sum((x - mean)**k for x in xs) / len(xs)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
The first central moment is approximately 0. The second central moment is the variance.
CentralMoment(female_heights, 1), CentralMoment(female_heights, 2), CentralMoment(female_heights, 3) def Var(xs): return CentralMoment(xs, 2) Var(female_heights)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
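Likewise, the second central moment should match NumPy's population variance (ddof=0, i.e. dividing by n rather than n-1); a quick check:

import numpy as np

# Population variance (ddof=0) matches the second central moment
np.var(female_heights.values), CentralMoment(female_heights, 2)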
The standardized moments are ratios of central moments, with powers chosen to make the dimensions cancel.
def StandardizedMoment(xs, k): var = CentralMoment(xs, 2) std = np.sqrt(var) return CentralMoment(xs, k) / std**k
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
The third standardized moment is skewness.
StandardizedMoment(female_heights, 1), StandardizedMoment(female_heights, 2), StandardizedMoment(female_heights, 3) def Skewness(xs): return StandardizedMoment(xs, 3) Skewness(female_heights)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
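SciPy's skew uses the same biased (population) definition by default, so it should agree with Skewness above; a quick check:

from scipy.stats import skew

# scipy's default (bias=True) matches the third standardized moment
skew(female_heights.values)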
Normally a negative skewness indicates that the distribution has a longer tail on the left. In that case, the mean is usually less than the median.
def Median(xs): cdf = thinkstats2.Cdf(xs) return cdf.Value(0.5)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
But in this case the mean is greater than the median, which indicates skew to the right.
Mean(female_heights), Median(female_heights)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
Because the skewness is based on the third moment, it is not robust; that is, it depends strongly on a few outliers. Pearson's median skewness is more robust.
def PearsonMedianSkewness(xs): median = Median(xs) mean = RawMoment(xs, 1) var = CentralMoment(xs, 2) std = np.sqrt(var) gp = 3 * (mean - median) / std return gp
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
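A quick NumPy cross-check of the same statistic; note that np.median interpolates for even-length samples, so it can differ very slightly from the Cdf-based median above:

import numpy as np

heights = female_heights.values
3 * (np.mean(heights) - np.median(heights)) / np.std(heights)  # np.std uses ddof=0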
Pearson's skewness is positive, indicating that the distribution of female heights is slightly skewed to the right.
PearsonMedianSkewness(female_heights)
_____no_output_____
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW