Dataset columns (string length ranges):

| column | min length | max length |
| --- | --- | --- |
| markdown | 0 | 1.02M |
| code | 0 | 832k |
| output | 0 | 1.02M |
| license | 3 | 36 |
| path | 6 | 265 |
| repo_name | 6 | 127 |
Setting limits: Now we want to widen the axis limits so that all the data points are clearly visible
plt.figure(figsize=(10, 6), dpi=80) plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-") plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid") plt.xlim(x.min() * 1.1, x.max() * 1.1) plt.ylim(c.min() * 1.1, c.max() * 1.1)
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
Setting ticks: The current ticks are not ideal because they do not show the interesting values ($\pm\pi$, $\pm\pi/2$) for sine and cosine.
plt.figure(figsize=(10, 6), dpi=80) plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-") plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid") plt.xlim(x.min() * 1.1, x.max() * 1.1) plt.ylim(c.min() * 1.1, c.max() * 1.1) plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi]) plt.yticks([-1, 0, +1])
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
Setting tick labels: * Ticks are correctly placed, but their labels are not very explicit. * We can guess that 3.142 is $\pi$, but it would be better to make it explicit. * When we set tick values, we can also provide a corresponding label in the second argument list. * We can use $\LaTeX$ when defining the labels.
plt.figure(figsize=(10, 6), dpi=80) plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-") plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid") plt.xlim(x.min() * 1.1, x.max() * 1.1) plt.ylim(c.min() * 1.1, c.max() * 1.1) plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], ['$-\pi$', '$-\pi/2$', '$0$', '$+\pi/2$', '$+\pi$']) plt.yticks([-1, 0, +1], ['$-1$', '$0$', '$+1$'])
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
Moving spines: * **Spines** are the lines connecting the axis tick marks and marking the boundaries of the data area. * Spines can be placed at arbitrary positions. * Until now they have been on the border of the axes, but we want them in the middle. * There are four of them: top, bottom, left, and right. * Therefore, the top and right spines will be discarded by setting their color to `none`. * The bottom and left spines will be moved to coordinate 0 in data-space coordinates.
plt.figure(figsize=(10, 6), dpi=80) plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-") plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid") plt.xlim(x.min() * 1.1, x.max() * 1.1) plt.ylim(c.min() * 1.1, c.max() * 1.1) plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], ['$-\pi$', '$-\pi/2$', '$0$', '$+\pi/2$', '$+\pi$']) plt.yticks([-1, 0, +1], ['$-1$', '$0$', '$+1$']) ax = plt.gca() # 'get current axis' # discard top and right spines ax.spines['top'].set_color('none') ax.spines['right'].set_color('none') ax.xaxis.set_ticks_position('bottom') ax.spines['bottom'].set_position(('data',0)) ax.yaxis.set_ticks_position('left') ax.spines['left'].set_position(('data',0))
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
Adding a legend * Let us include a legend in the upper right of the plot
plt.figure(figsize=(10, 6), dpi=80) plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-", label="cosine") plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid", label="sine") plt.xlim(x.min() * 1.1, x.max() * 1.1) plt.ylim(c.min() * 1.1, c.max() * 1.1) plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], ['$-\pi$', '$-\pi/2$', '$0$', '$+\pi/2$', '$+\pi$']) plt.yticks([-1, 0, +1], ['$-1$', '$0$', '$+1$']) ax = plt.gca() # 'get current axis' # discard top and right spines ax.spines['top'].set_color('none') ax.spines['right'].set_color('none') ax.xaxis.set_ticks_position('bottom') ax.spines['bottom'].set_position(('data',0)) ax.yaxis.set_ticks_position('left') ax.spines['left'].set_position(('data',0)) plt.legend(loc='upper right')
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
Annotate some points: * The `annotate` command allows us to include annotations in the plot. * For instance, to annotate the value $\frac{2\pi}{3}$ on both the sine and the cosine curves, we have to: 1. draw a marker on each curve as well as a straight dashed line, and 2. use the `annotate` command to display some text with an arrow.
plt.figure(figsize=(10, 6), dpi=80) plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-", label="cosine") plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid", label="sine") plt.xlim(x.min() * 1.1, x.max() * 1.1) plt.ylim(c.min() * 1.1, c.max() * 1.1) plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], ['$-\pi$', '$-\pi/2$', '$0$', '$+\pi/2$', '$+\pi$']) plt.yticks([-1, 0, +1], ['$-1$', '$0$', '$+1$']) t = 2 * np.pi / 3 plt.plot([t, t], [0, np.cos(t)], color='blue', linewidth=2.5, linestyle="--") plt.scatter([t, ], [np.cos(t), ], 50, color='blue') plt.annotate(r'$cos(\frac{2\pi}{3})=-\frac{1}{2}$', xy=(t, np.cos(t)), xycoords='data', xytext=(-90, -50), textcoords='offset points', fontsize=16, arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2")) plt.plot([t, t],[0, np.sin(t)], color='red', linewidth=2.5, linestyle="--") plt.scatter([t, ],[np.sin(t), ], 50, color='red') plt.annotate(r'$sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$', xy=(t, np.sin(t)), xycoords='data', xytext=(+10, +30), textcoords='offset points', fontsize=16, arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2")) ax = plt.gca() # 'get current axis' # discard top and right spines ax.spines['top'].set_color('none') ax.spines['right'].set_color('none') ax.xaxis.set_ticks_position('bottom') ax.spines['bottom'].set_position(('data',0)) ax.yaxis.set_ticks_position('left') ax.spines['left'].set_position(('data',0)) plt.legend(loc='upper left')
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
* The tick labels are now hardly visible because of the blue and red lines. * We can make them bigger, and we can also adjust their properties to be rendered on a semi-transparent white background. * This will allow us to see both the data and the labels.
plt.figure(figsize=(10, 6), dpi=80) plt.plot(x, c, color="blue", linewidth=2.5, linestyle="-", label="cosine") plt.plot(x, s, color="red", linewidth=2.5, linestyle="solid", label="sine") plt.xlim(x.min() * 1.1, x.max() * 1.1) plt.ylim(c.min() * 1.1, c.max() * 1.1) plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi], ['$-\pi$', '$-\pi/2$', '$0$', '$+\pi/2$', '$+\pi$']) plt.yticks([-1, 0, +1], ['$-1$', '$0$', '$+1$']) t = 2 * np.pi / 3 plt.plot([t, t], [0, np.cos(t)], color='blue', linewidth=2.5, linestyle="--") plt.scatter([t, ], [np.cos(t), ], 50, color='blue') plt.annotate(r'$cos(\frac{2\pi}{3})=-\frac{1}{2}$', xy=(t, np.cos(t)), xycoords='data', xytext=(-90, -50), textcoords='offset points', fontsize=16, arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2")) plt.plot([t, t],[0, np.sin(t)], color='red', linewidth=2.5, linestyle="--") plt.scatter([t, ],[np.sin(t), ], 50, color='red') plt.annotate(r'$sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$', xy=(t, np.sin(t)), xycoords='data', xytext=(+10, +30), textcoords='offset points', fontsize=16, arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2")) ax = plt.gca() # 'get current axis' # discard top and right spines ax.spines['top'].set_color('none') ax.spines['right'].set_color('none') ax.xaxis.set_ticks_position('bottom') ax.spines['bottom'].set_position(('data',0)) ax.yaxis.set_ticks_position('left') ax.spines['left'].set_position(('data',0)) plt.legend(loc='upper left') for label in ax.get_xticklabels() + ax.get_yticklabels(): label.set_fontsize(16) label.set_bbox(dict(facecolor='white', edgecolor='None', alpha=0.65))
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
Scatter plots
n = 1024 x = np.random.normal(0, 1, n) y = np.random.normal(0, 1, n) t = np.arctan2(y, x) plt.axes([0.025, 0.025, 0.95, 0.95]) plt.scatter(x, y, s=75, c=t, alpha=.5) plt.xlim(-1.5, 1.5) plt.xticks(()) plt.ylim(-1.5, 1.5) plt.yticks(()) ax = plt.gca() ax.spines['top'].set_color('none') ax.spines['right'].set_color('none') ax.spines['bottom'].set_color('none') ax.spines['left'].set_color('none')
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
Bar plots: * Create two bar plots overlaying the same axes. * Include the value of each bar as a text label.
n = 12 xs = np.arange(n) y1 = (1 - xs / float(n)) * np.random.uniform(0.5, 1.0, n) y2 = (1 - xs / float(n)) * np.random.uniform(0.5, 1.0, n) plt.axes([0.025, 0.025, 0.95, 0.95]) plt.bar(xs, +y1, facecolor='#9999ff', edgecolor='white') plt.bar(xs, -y2, facecolor='#ff9999', edgecolor='white') for x, y in zip(xs, y1): plt.text(x + 0.4, y + 0.05, '%.2f' % y, ha='center', va= 'bottom') for x, y in zip(xs, y2): plt.text(x + 0.4, -y - 0.05, '%.2f' % y, ha='center', va= 'top') plt.xlim(-.5, n) plt.xticks(()) plt.ylim(-1.25, 1.25) plt.yticks(()) ## Images image = np.random.rand(30, 30) plt.imshow(image, cmap=plt.cm.hot) plt.colorbar() years, months, sales = np.loadtxt('data/carsales.csv', delimiter=',', skiprows=1, dtype=int, unpack=True)
_____no_output_____
Apache-2.0
02-plotting-with-matplotlib.ipynb
theed-ml/notebooks
Classes: For more information on the magic methods of Python classes, consult the docs: https://docs.python.org/3/reference/datamodel.html
class DumbClass: """ This class is just meant to demonstrate the magic __repr__ method """ def __repr__(self): """ I'm giving this method a docstring """ return("I'm representing an instance of my dumbclass") dc = DumbClass() print(dc) dc help(DumbClass) class Stack: """ A simple class implimenting some common features of Stack objects """ def __init__(self, iterable=None): """ Initializes Stack objects. If an iterable is provided, add elements from the iterable to this Stack until the iterable is exhausted """ self.head = None self.size = 0 if(iterable is not None): for item in iterable: self.add(item) def add(self, item): """ Add an element to the top of the stack. This method will modify self and return self. """ self.head = (item, self.head) self.size += 1 return self def pop(self): """ remove the top item from the stack and return it """ if(len(self) > 0): ret = self.head[0] self.head = self.head[1] self.size -= 1 return ret return None def __contains__(self, item): """ Returns True if item is in self """ for i in self: if(i == item): return True return False def __len__(self): """ Returns the number of items in self """ return self.size def __iter__(self): """ prepares this stack for iteration and returns self """ self.curr = self.head return self def __next__(self): """ Returns items from the stack from top to bottom """ if(not hasattr(self, 'curr')): iter(self) if(self.curr is None): raise StopIteration else: ret = self.curr[0] self.curr = self.curr[1] return ret def __reversed__(self): """ returns a copy of self with the stack turned upside down """ return Stack(self) def __add__(self, other): """ Put self on top of other """ ret = Stack(reversed(other)) for item in reversed(self): ret.add(item) return ret def __repr__(self): """ Represent self as a string """ return f'Stack({str(list(self))})' # Create a stack object and test some methods x = Stack([3, 2]) print(x) # adds an element to the top of the stack print('\nLets add 1 to the stack') x.add(1) print(x) # Removes the top most element print('\nLets remove an item from the top of the stack') item = x.pop() print(item) print(x) # Removes the top most element print('\nlets remove another item') item = x.pop() print(item) print(x) x = Stack([4,5,6]) # Because I implimented the __contains__ method, # I can check if items are in stack objects print(f'Does my stack contain 2? {2 in x}') print(f'Does my stack contain 4? {4 in x}') # Because I implimented the __len__ method, # I can check how many items are in stack objects print(f'How many elements are in my stack? {len(x)}') # because my stack class has an __iter__ and __next__ methods # I can iterate over stack objects x = Stack([7,3,4]) print(f"Lets iterate over my stack : {x}") for item in x: print(item) # Because my stack class has a __reversed__ method, # I can easily reverse a stack object print(f'I am flipping my stack upside down : {reversed(x)}') # Because I implimented the __add__ method, # I can add stacks together x = Stack([4,5,6]) y = Stack([1,2,3]) print("I have two stacks") print(f'x : {x}') print(f'y : {y}') print("Let's add them together") print(f'x + y = {x + y}') for item in (x + y): print(item)
_____no_output_____
MIT
.ipynb_checkpoints/12-4_review-checkpoint.ipynb
willdoucet/Classwork
Using the SQLAlchemy ORM: For more information, check out the documentation: https://docs.sqlalchemy.org/en/latest/orm/tutorial.html
from sqlalchemy import create_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import Column, Integer, String, Float, ForeignKey from sqlalchemy.orm import Session, relationship import pymysql pymysql.install_as_MySQLdb() # Sets an object to utilize the default declarative base in SQL Alchemy Base = declarative_base() # Lets define the owners table/class class Owners(Base): __tablename__ = 'owners' id = Column(Integer, primary_key=True) name = Column(String(255)) phone_number = Column(String(255)) pets = relationship("Pets", back_populates="owner") def __repr__(self): return f"<Owners(id={self.id}, name='{self.name}', phone_number='{self.phone_number}')>" # Lets define the pets table/class class Pets(Base): __tablename__ = 'pets' id = Column(Integer, primary_key=True) name = Column(String(255)) owner_id = Column(Integer, ForeignKey('owners.id')) owner = relationship("Owners", back_populates="pets") def __repr__(self): return f"<Pets(id={self.id}, name='{self.name}', owner_id={self.owner_id})>" # Lets connect to my database # engine = create_engine("sqlite:///pets.sqlite") engine = create_engine("mysql://root@localhost/review_db") # conn = engine.connect() Base.metadata.create_all(engine) session = Session(bind=engine) # Lets create me me = Owners(name='Kenton', phone_number='867-5309') session.add(me) session.commit() # Now lets add my dog my_dog = Pets(name='Saxon', owner_id=me.id) session.add(my_dog) session.commit() # We can query the tables using the session object from earlier # Lets just get all the data all_owners = list(session.query(Owners)) all_pets = list(session.query(Pets)) print(all_owners) print(all_pets) me = all_owners[0] rio = all_pets[0] # Because we are using an ORM and have defined relations, # we can easily and intuitively access related data print(me.pets) print(rio.owner)
_____no_output_____
MIT
.ipynb_checkpoints/12-4_review-checkpoint.ipynb
willdoucet/Classwork
Estimation on real data using MSM
from consav import runtools runtools.write_numba_config(disable=0,threads=4) %matplotlib inline %load_ext autoreload %autoreload 2 # Local modules from Model import RetirementClass import figs import SimulatedMinimumDistance as SMD # Global modules import numpy as np import pandas as pd import matplotlib.pyplot as plt
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Data
data = pd.read_excel('SASdata/moments.xlsx') mom_data = data['mom'].to_numpy() se = data['se'].to_numpy() obs = data['obs'].to_numpy() se = se/np.sqrt(obs) se[se>0] = 1/se[se>0] factor = np.ones(len(se)) factor[-15:] = 4 W = np.eye(len(se))*se*factor cov = pd.read_excel('SASdata/Cov.xlsx') Omega = cov*obs Nobs = np.median(obs)
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Set up estimation
single_kwargs = {'simN': int(1e5), 'simT': 68-53+1} Couple = RetirementClass(couple=True, single_kwargs=single_kwargs, simN=int(1e5), simT=68-53+1) Couple.solve() Couple.simulate() def mom_fun(Couple): return SMD.MomFun(Couple) est_par = ["alpha_0_male", "alpha_0_female", "sigma_eta", "pareto_w", "phi_0_male"] smd = SMD.SimulatedMinimumDistance(Couple,mom_data,mom_fun,est_par=est_par)
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Estimate
theta0 = SMD.start(9,bounds=[(0,1), (0,1), (0.2,0.8), (0.2,0.8), (0,2)]) theta0 smd.MultiStart(theta0,W) theta = smd.est smd.MultiStart(theta0,W) theta = smd.est
Iteration: 50 (11.08 minutes) alpha_0_male=0.5044 alpha_0_female=0.4625 sigma_eta=0.8192 pareto_w=0.7542 phi_0_male=0.1227 -> 21.6723 Iteration: 100 (11.19 minutes) alpha_0_male=0.5703 alpha_0_female=0.5002 sigma_eta=0.7629 pareto_w=0.7459 phi_0_male=0.1575 -> 17.7938 Iteration: 150 (10.73 minutes) alpha_0_male=0.5546 alpha_0_female=0.5131 sigma_eta=0.6877 pareto_w=0.8166 phi_0_male=0.1905 -> 16.9717 Iteration: 200 (10.94 minutes) alpha_0_male=0.5526 alpha_0_female=0.5128 sigma_eta=0.6891 pareto_w=0.8133 phi_0_male=0.1875 -> 16.9319 1 estimation: success: True | feval: 248 | time: 54.8 min | obj: 16.927585558076142 start par: [0.551, 0.576, 0.596, 0.5, 1.241] par: [0.55258074 0.51274232 0.68921531 0.81324937 0.18777072] Iteration: 250 (11.3 minutes) alpha_0_male=0.6206 alpha_0_female=0.5880 sigma_eta=0.4200 pareto_w=0.4980 phi_0_male=0.5590 -> 57.7093 Iteration: 300 (11.24 minutes) alpha_0_male=0.5428 alpha_0_female=0.4145 sigma_eta=0.6379 pareto_w=0.5308 phi_0_male=0.3868 -> 22.4315 Iteration: 350 (10.62 minutes) alpha_0_male=0.5777 alpha_0_female=0.5323 sigma_eta=0.7206 pareto_w=0.6119 phi_0_male=0.1712 -> 19.5532 Iteration: 400 (10.7 minutes) alpha_0_male=0.5412 alpha_0_female=0.4850 sigma_eta=0.6265 pareto_w=0.7680 phi_0_male=0.1276 -> 17.5896 Iteration: 450 (11.15 minutes) alpha_0_male=0.5727 alpha_0_female=0.5056 sigma_eta=0.6590 pareto_w=0.7641 phi_0_male=0.1026 -> 17.3178 Iteration: 500 (11.37 minutes) alpha_0_male=0.5724 alpha_0_female=0.5112 sigma_eta=0.6671 pareto_w=0.7618 phi_0_male=0.1020 -> 17.2860 2 estimation: success: True | feval: 300 | time: 66.3 min | obj: 17.27324442907804 start par: [0.591, 0.588, 0.42, 0.498, 0.559] par: [0.57229758 0.5114954 0.66670532 0.7624101 0.1016371 ] Iteration: 550 (11.27 minutes) alpha_0_male=0.2415 alpha_0_female=0.5020 sigma_eta=0.5640 pareto_w=0.5470 phi_0_male=1.3920 -> 52.9243 Iteration: 600 (11.18 minutes) alpha_0_male=0.3956 alpha_0_female=0.4874 sigma_eta=0.6780 pareto_w=0.6912 phi_0_male=0.2409 -> 26.3473 Iteration: 650 (11.25 minutes) alpha_0_male=0.4919 alpha_0_female=0.5041 sigma_eta=0.6219 pareto_w=0.7558 phi_0_male=0.2084 -> 18.6088 Iteration: 700 (11.42 minutes) alpha_0_male=0.5489 alpha_0_female=0.4931 sigma_eta=0.6267 pareto_w=0.7717 phi_0_male=0.1391 -> 17.4406 Iteration: 750 (10.88 minutes) alpha_0_male=0.5477 alpha_0_female=0.4897 sigma_eta=0.6247 pareto_w=0.7747 phi_0_male=0.1398 -> 17.4092 Iteration: 800 (10.64 minutes) alpha_0_male=0.5478 alpha_0_female=0.4898 sigma_eta=0.6248 pareto_w=0.7747 phi_0_male=0.1394 -> 17.3802 3 estimation: success: True | feval: 253 | time: 56.0 min | obj: 17.38030688438767 start par: [0.23, 0.502, 0.564, 0.547, 1.392] par: [0.54777719 0.4897951 0.62477554 0.77474538 0.13940557] Iteration: 850 (10.52 minutes) alpha_0_male=0.6309 alpha_0_female=0.4741 sigma_eta=0.8748 pareto_w=0.7275 phi_0_male=0.3000 -> 20.2731 Iteration: 900 (10.65 minutes) alpha_0_male=0.5417 alpha_0_female=0.5320 sigma_eta=0.7344 pareto_w=0.8562 phi_0_male=0.3055 -> 17.2592 Iteration: 950 (10.64 minutes) alpha_0_male=0.5331 alpha_0_female=0.5218 sigma_eta=0.7226 pareto_w=0.8497 phi_0_male=0.2874 -> 17.1254 Iteration: 1000 (10.59 minutes) alpha_0_male=0.5359 alpha_0_female=0.5206 sigma_eta=0.7271 pareto_w=0.8505 phi_0_male=0.2736 -> 17.1173 Iteration: 1050 (10.68 minutes) alpha_0_male=0.5358 alpha_0_female=0.5207 sigma_eta=0.7268 pareto_w=0.8501 phi_0_male=0.2741 -> 17.0704 4 estimation: success: True | feval: 260 | time: 55.2 min | obj: 17.069749122995066 start par: [0.369, 0.367, 0.658, 0.431, 0.62] par: [0.53580109 
0.52075601 0.72683222 0.85007036 0.27418587] Iteration: 1100 (10.73 minutes) alpha_0_male=0.5503 alpha_0_female=0.5148 sigma_eta=0.6911 pareto_w=0.8155 phi_0_male=0.1885 -> 16.9585 Iteration: 1150 (10.81 minutes) alpha_0_male=0.5525 alpha_0_female=0.5129 sigma_eta=0.6894 pareto_w=0.8134 phi_0_male=0.1879 -> 16.9468 Iteration: 1200 (10.89 minutes) alpha_0_male=0.5525 alpha_0_female=0.5128 sigma_eta=0.6893 pareto_w=0.8134 phi_0_male=0.1879 -> 16.9224 final estimation: success: True | feval: 142 | obj: 16.922410852892398 total estimation time: 4.4 hours start par: [0.55258074 0.51274232 0.68921531 0.81324937 0.18777072] par: [0.5524854 0.51284598 0.68929759 0.81336732 0.18791813]
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Save parameters
est_par.append('phi_0_female') thetaN = list(theta) thetaN.append(Couple.par.phi_0_male) SMD.save_est(est_par,thetaN,name='baseline2')
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Standard errors
est_par = ["alpha_0_male", "alpha_0_female", "sigma_eta", "pareto_w", "phi_0_male"] smd = SMD.SimulatedMinimumDistance(Couple,mom_data,mom_fun,est_par=est_par) theta = list(SMD.load_est('baseline2').values()) theta = theta[:5] smd.obj_fun(theta,W) np.round(theta,3) Nobs = np.quantile(obs,0.25) smd.std_error(theta,Omega,W,Nobs,Couple.par.simN*2/Nobs) # Nobs = lower quartile np.round(smd.std,3) # Nobs = lower quartile np.round(smd.std,3) Nobs = np.quantile(obs,0.25) smd.std_error(theta,Omega,W,Nobs,Couple.par.simN*2/Nobs) # Nobs = median np.round(smd.std,3)
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Model fit
smd.obj_fun(theta,W) jmom = pd.read_excel('SASdata/joint_moments_ad.xlsx') for i in range(-2,3): data = jmom[jmom.Age_diff==i]['ssh'].to_numpy() plt.bar(np.arange(-7,8), data, label='Data') plt.plot(np.arange(-7,8),SMD.joint_moments_ad(Couple,i),'k--', label='Predicted') #plt.ylim(0,0.4) plt.legend() plt.show() figs.MyPlot(figs.model_fit_marg(smd,0,0),ylim=[-0.01,0.4],linewidth=3).savefig('figs/ModelFit/MargWomenSingle2.png') figs.MyPlot(figs.model_fit_marg(smd,1,0),ylim=[-0.01,0.4],linewidth=3).savefig('figs/ModelFit/MargMenSingle2.png') figs.MyPlot(figs.model_fit_marg(smd,0,1),ylim=[-0.01,0.4],linewidth=3).savefig('figs/ModelFit/MargWomenCouple2.png') figs.MyPlot(figs.model_fit_marg(smd,1,1),ylim=[-0.01,0.4],linewidth=3).savefig('figs/ModelFit/MargMenCouple2.png') figs.model_fit_joint(smd).savefig('figs/ModelFit/Joint2') theta[4] = 1 smd.obj_fun(theta,W) dist1 = smd.mom_sim[44:] theta[4] = 2 smd.obj_fun(theta,W) dist2 = smd.mom_sim[44:] theta[4] = 3 smd.obj_fun(theta,W) dist3 = smd.mom_sim[44:] dist_data = mom_data[44:] figs.model_fit_joint_many(dist_data,dist1,dist2,dist3).savefig('figs/ModelFit/JointMany2')
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Sensitivity
est_par_tex = [r'$\alpha^m$', r'$\alpha^f$', r'$\sigma$', r'$\lambda$', r'$\phi$'] fixed_par = ['R', 'rho', 'beta', 'gamma', 'v', 'priv_pension_male', 'priv_pension_female', 'g_adjust', 'pi_adjust_m', 'pi_adjust_f'] fixed_par_tex = [r'$R$', r'$\rho$', r'$\beta$', r'$\gamma$', r'$v$', r'$PPW^m$', r'$PPW^f$', r'$g$', r'$\pi^m$', r'$\pi^f$'] smd.recompute=True smd.sensitivity(theta,W,fixed_par) figs.sens_fig_tab(smd.sens2[:,:5],smd.sens2e[:,:5],theta, est_par_tex,fixed_par_tex[:5]).savefig('figs/ModelFit/CouplePref2.png') figs.sens_fig_tab(smd.sens2[:,5:],smd.sens2e[:,5:],theta, est_par_tex,fixed_par_tex[5:]).savefig('figs/modelFit/CoupleCali2.png') smd.recompute=True smd.sensitivity(theta,W,fixed_par) figs.sens_fig_tab(smd.sens2[:,:5],smd.sens2e[:,:5],theta, est_par_tex,fixed_par_tex[:5]).savefig('figs/ModelFit/CouplePref.png') figs.sens_fig_tab(smd.sens2[:,5:],smd.sens2e[:,5:],theta, est_par_tex,fixed_par_tex[5:]).savefig('figs/modelFit/CoupleCali.png')
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Recalibrate model (phi=0)
Couple.par.phi_0_male = 0 Couple.par.phi_0_female = 0 est_par = ["alpha_0_male", "alpha_0_female", "sigma_eta", "pareto_w"] smd = SMD.SimulatedMinimumDistance(Couple,mom_data,mom_fun,est_par=est_par) theta0 = SMD.start(4,bounds=[(0,1), (0,1), (0.2,0.8), (0.2,0.8)]) smd.MultiStart(theta0,W) theta = smd.est est_par.append("phi_0_male") est_par.append("phi_0_female") theta = list(theta) theta.append(Couple.par.phi_0_male) theta.append(Couple.par.phi_0_male) SMD.save_est(est_par,theta,name='phi0') smd.obj_fun(theta,W) figs.MyPlot(figs.model_fit_marg(smd,0,0),ylim=[-0.01,0.4],linewidth=3).savefig('figs/ModelFit/MargWomenSingle_phi0.png') figs.MyPlot(figs.model_fit_marg(smd,1,0),ylim=[-0.01,0.4],linewidth=3).savefig('figs/ModelFit/MargMenSingle_phi0.png') figs.MyPlot(figs.model_fit_marg(smd,0,1),ylim=[-0.01,0.4],linewidth=3).savefig('figs/ModelFit/MargWomenCouple_phi0.png') figs.MyPlot(figs.model_fit_marg(smd,1,1),ylim=[-0.01,0.4],linewidth=3).savefig('figs/ModelFit/MargMenCoupleW_phi0.png') figs.model_fit_joint(smd).savefig('figs/ModelFit/Joint_phi0')
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
Recalibrate model (phi high)
Couple.par.phi_0_male = 1.187 Couple.par.phi_0_female = 1.671 Couple.par.pareto_w = 0.8 est_par = ["alpha_0_male", "alpha_0_female", "sigma_eta"] smd = SMD.SimulatedMinimumDistance(Couple,mom_data,mom_fun,est_par=est_par) theta0 = SMD.start(4,bounds=[(0.2,0.6), (0.2,0.6), (0.4,0.8)]) theta0 smd.MultiStart(theta0,W) theta = smd.est est_par.append("phi_0_male") est_par.append("phi_0_female") theta = list(theta) theta.append(Couple.par.phi_0_male) theta.append(Couple.par.phi_0_male) SMD.save_est(est_par,theta,name='phi_high') smd.obj_fun(theta,W) figs.MyPlot(figs.model_fit_marg(smd,0,0),ylim=[-0.01,0.4],linewidth=3).savefig('figs/ModelFit/MargWomenSingle_phi_high.png') figs.MyPlot(figs.model_fit_marg(smd,1,0),ylim=[-0.01,0.4],linewidth=3).savefig('figs/ModelFit/MargMenSingle_phi_high.png') figs.MyPlot(figs.model_fit_marg(smd,0,1),ylim=[-0.01,0.4],linewidth=3).savefig('figs/ModelFit/MargWomenCouple_phi_high.png') figs.MyPlot(figs.model_fit_marg(smd,1,1),ylim=[-0.01,0.4],linewidth=3).savefig('figs/ModelFit/MargMenCoupleW_phi_high.png') figs.model_fit_joint(smd).savefig('figs/ModelFit/Joint_phi_high')
_____no_output_____
MIT
Main/MSM_real.ipynb
mathiassunesen/Speciale_retirement
More Dimensionality Reduction Exercises. Based on the book "Python Data Science Handbook" by Jake VanderPlas: https://jakevdp.github.io/PythonDataScienceHandbook/ Using the faces data from scikit-learn, apply manifold learning techniques for comparison.
from sklearn.datasets import fetch_lfw_people faces = fetch_lfw_people(min_faces_per_person=30) faces.data.shape
_____no_output_____
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
The dataset has 2,300 face images with 2,914 pixels each (47x62). Let's visualize the first 40 of these images.
import numpy as np from numpy import random from matplotlib import pyplot as plt %matplotlib inline fig, ax = plt.subplots(5, 8, subplot_kw=dict(xticks=[], yticks=[])) for i, axi in enumerate(ax.flat): axi.imshow(faces.images[i], cmap='gray')
_____no_output_____
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
With dimensionality reduction, we can check whether it is possible to understand some of the characteristics of the images.
from sklearn.decomposition import PCA model0 = PCA(n_components=0.95) X_pca=model0.fit_transform(faces.data) plt.plot(np.cumsum(model0.explained_variance_ratio_)) plt.xlabel('n components') plt.ylabel('cumulative variance') plt.grid(True) print("Numero de componentes para 95% de variância preservada:",model0.n_components_)
Numero de componentes para 95% de variância preservada: 171
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
This means that to preserve 95% of the variance in the reduced representation, we need more than 170 dimensions. The new "coordinates" can be viewed as 9x19-pixel frames.
def plot_faces(instances, **options): fig, ax = plt.subplots(5, 8, subplot_kw=dict(xticks=[], yticks=[])) sizex = 9 sizey = 19 images = [instance.reshape(sizex,sizey) for instance in instances] for i,axi in enumerate(ax.flat): axi.imshow(images[i], cmap = "gray", **options) axi.axis("off")
_____no_output_____
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
Let's visualize the compression of these images.
plot_faces(X_pca,aspect="auto")
_____no_output_____
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
The ```svd_solver='randomized'``` option makes PCA find the $d$ principal components faster when $d \ll n$, but $d$ must be fixed in advance. Is there any advantage in using it to compress the face images? Test it! Applying Isomap to visualize in 2D
from sklearn.manifold import Isomap iso = Isomap(n_components=2) X_iso = iso.fit_transform(faces.data) X_iso.shape from matplotlib import offsetbox def plot_projection(data,proj,images=None,ax=None,thumb_frac=0.5,cmap="gray"): ax = ax or plt.gca() ax.plot(proj[:, 0], proj[:, 1], '.k') if images is not None: min_dist_2 = (thumb_frac * max(proj.max(0) - proj.min(0))) ** 2 shown_images = np.array([2 * proj.max(0)]) for i in range(data.shape[0]): dist = np.sum((proj[i] - shown_images) ** 2, 1) if np.min(dist) < min_dist_2: # don't show points that are too close continue shown_images = np.vstack([shown_images, proj[i]]) imagebox = offsetbox.AnnotationBbox( offsetbox.OffsetImage(images[i], cmap=cmap), proj[i]) ax.add_artist(imagebox) def plot_components(data, model, images=None, ax=None, thumb_frac=0.05,cmap="gray"): proj = model.fit_transform(data) plot_projection(data,proj,images,ax,thumb_frac,cmap) fig, ax = plt.subplots(figsize=(10, 10)) plot_projection(faces.data,X_iso,images=faces.images[:, ::2, ::2],thumb_frac=0.07) ax.axis("off")
_____no_output_____
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
The images further to the right are darker than those on the left (whether due to lighting or skin tone); the images toward the bottom are oriented with the face turned to the left, and those toward the top with the face turned to the right. Exercises: 1. Apply LLE to the faces dataset and visualize it in a 2D map, in particular the "modified" version ([link](https://scikit-learn.org/stable/modules/manifold.html#modified-locally-linear-embedding)). 2. Apply t-SNE to the faces dataset and visualize it in a 2D map. 3. Choose one more manifold learning implementation from Scikit-Learn ([link](https://scikit-learn.org/stable/modules/manifold.html)) and apply it to the same dataset (*Hessian, LTSA, Spectral*). Which works best? Add a timer to compare the duration of each fit (a starting sketch for exercises 1 and 2 is given below). Kernel PCA and sequences: let's look again at the swiss-roll example.
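Before moving on to the kernel PCA example, here is a minimal starting sketch for exercises 1 and 2 above. It is only a sketch, under the assumption that `faces`, `plot_projection`, and the earlier matplotlib imports are still in scope; the neighbor and perplexity-related settings are arbitrary illustration values, not tuned choices.

```python
import time
import matplotlib.pyplot as plt
from sklearn.manifold import LocallyLinearEmbedding, TSNE

def embed_and_plot(model, name):
    """Fit a 2D embedding on the face data, report the fit time, and reuse plot_projection."""
    start = time.time()
    proj = model.fit_transform(faces.data)
    print(f'{name}: {time.time() - start:.1f} s')
    fig, ax = plt.subplots(figsize=(10, 10))
    plot_projection(faces.data, proj, images=faces.images[:, ::2, ::2], thumb_frac=0.07)
    ax.axis('off')

# Exercise 1: modified LLE (n_neighbors must exceed n_components)
embed_and_plot(LocallyLinearEmbedding(n_neighbors=10, n_components=2, method='modified'),
               'Modified LLE')

# Exercise 2: t-SNE
embed_and_plot(TSNE(n_components=2, init='pca', random_state=42), 't-SNE')
```

Exercise 3 can reuse `embed_and_plot` with another estimator from `sklearn.manifold`, so the timing comparison comes for free.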
import numpy as np from numpy import random from matplotlib import pyplot as plt %matplotlib inline from mpl_toolkits.mplot3d import Axes3D from sklearn.datasets import make_swiss_roll X, t = make_swiss_roll(n_samples=1000, noise=0.2, random_state=42) axes = [-11.5, 14, -2, 23, -12, 15] fig = plt.figure(figsize=(12, 10)) ax = fig.add_subplot(111, projection='3d') ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=t, cmap="plasma") ax.view_init(10, -70) ax.set_xlabel("$x_1$", fontsize=18) ax.set_ylabel("$x_2$", fontsize=18) ax.set_zlabel("$x_3$", fontsize=18) ax.set_xlim(axes[0:2]) ax.set_ylim(axes[2:4]) ax.set_zlim(axes[4:6])
_____no_output_____
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
As in the SVM case, a *kernel* transformation can be applied to obtain a new *feature* space in which PCA can then be applied. Below is an example of PCA with a linear kernel (equivalent to plain PCA), an RBF (*radial basis function*) kernel, and a *sigmoid* (i.e., logistic) kernel.
from sklearn.decomposition import KernelPCA lin_pca = KernelPCA(n_components = 2, kernel="linear", fit_inverse_transform=True) rbf_pca = KernelPCA(n_components = 2, kernel="rbf", gamma=0.0433, fit_inverse_transform=True) sig_pca = KernelPCA(n_components = 2, kernel="sigmoid", gamma=0.001, coef0=1, fit_inverse_transform=True) plt.figure(figsize=(11, 4)) for subplot, pca, title in ((131, lin_pca, "Linear kernel"), (132, rbf_pca, "RBF kernel, $\gamma=0.04$"), (133, sig_pca, "Sigmoid kernel, $\gamma=10^{-3}, r=1$")): X_reduced = pca.fit_transform(X) if subplot == 132: X_reduced_rbf = X_reduced plt.subplot(subplot) plt.title(title, fontsize=14) plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=t, cmap=plt.cm.hot) plt.xlabel("$z_1$", fontsize=18) if subplot == 131: plt.ylabel("$z_2$", fontsize=18, rotation=0) plt.grid(True)
/usr/local/lib/python3.6/dist-packages/sklearn/utils/extmath.py:516: RuntimeWarning: invalid value encountered in multiply v *= signs[:, np.newaxis] /usr/local/lib/python3.6/dist-packages/sklearn/utils/extmath.py:516: RuntimeWarning: invalid value encountered in multiply v *= signs[:, np.newaxis]
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
Selecting a Kernel and Optimizing Hyperparameters: Since these are unsupervised algorithms, there is no "obvious" way to measure their performance. However, dimensionality reduction is often a preparatory step for another supervised learning task. In that case it is possible to use ```GridSearchCV``` to evaluate which settings give the best performance on that next step, using a ```Pipeline```. The classification target will be based on the value of ```t```, with an arbitrary threshold of 6.9.
from sklearn.model_selection import GridSearchCV from sklearn.linear_model import LogisticRegression from sklearn.pipeline import Pipeline y = t>6.9 clf = Pipeline([ ("kpca", KernelPCA(n_components=2)), ("log_reg", LogisticRegression(solver="liblinear")) ]) param_grid = [{ "kpca__gamma": np.linspace(0.03, 0.05, 10), "kpca__kernel": ["rbf", "sigmoid"] }] grid_search = GridSearchCV(clf, param_grid, cv=3) grid_search.fit(X, y) print(grid_search.best_params_)
{'kpca__gamma': 0.043333333333333335, 'kpca__kernel': 'rbf'}
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
Exercise: Vary the cutoff value on ```t``` and see whether it makes any difference to the optimal kernel and hyperparameters. Inverting the transformation and reconstruction error: Another option would be to choose the kernel and hyperparameters that give the smallest reconstruction error. The following code, with the option ```fit_inverse_transform=True```, will fit, alongside the kPCA, a regression model with the projected instances (```X_reduced```) as training data and the originals (```X```) as targets. The result of ```inverse_transform``` will be an attempted reconstruction in the original space.
rbf_pca = KernelPCA(n_components = 2, kernel="rbf", gamma=13./300., fit_inverse_transform=True) X_reduced = rbf_pca.fit_transform(X) X_preimage = rbf_pca.inverse_transform(X_reduced) X_preimage.shape axes = [-11.5, 14, -2, 23, -12, 15] fig = plt.figure(figsize=(12, 10)) ax = fig.add_subplot(111, projection='3d') ax.scatter(X_preimage[:, 0], X_preimage[:, 1], X_preimage[:, 2], c=t, cmap="plasma") ax.view_init(10, -70) ax.set_xlabel("$x_1$", fontsize=18) ax.set_ylabel("$x_2$", fontsize=18) ax.set_zlabel("$x_3$", fontsize=18) ax.set_xlim(axes[0:2]) ax.set_ylim(axes[2:4]) ax.set_zlim(axes[4:6])
_____no_output_____
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
We can then compute the "error" between the reconstructed dataset and the original (MSE).
from sklearn.metrics import mean_squared_error as mse print(mse(X,X_preimage))
32.79523578725337
MIT
Exemplos_DR/Exercicios_DimensionalReduction.ipynb
UERJ-FISICA/ML4PPGF_UERJ
Examples of applications of probability distributions. Binomial example: Consider an option-pricing model that tries to model the price of an asset $S(t)$ in a simplified way, instead of using stochastic differential equations. According to this simplified model, given the current asset price $S(0)=S_0$, the price after one time step $\delta t$, denoted $S(\delta t)$, can be either $S_u=uS_0$ or $S_d=dS_0$, with probabilities $p_u$ and $p_d$, respectively. The subscripts $u$ and $d$ can be interpreted as 'up' and 'down', and we consider multiplicative changes. Now imagine that the process $S(t)$ is observed until time $T=n\cdot \delta t$ and that the up and down moves of the price are independent over time. Since there are $n$ steps, the largest value of $S(T)$ that can be reached is $S_0u^n$ and the smallest is $S_0d^n$. Note that intermediate values have the form $S_0u^md^{n-m}$, where $m$ is the number of up moves made by the asset and $n-m$ the number of down moves. Observe that the exact sequence of up and down moves is irrelevant for determining the final price, because multiplicative changes commute: $S_0ud=S_0du$. A simple model like the one proposed here can be represented by a binomial model as follows: ![imagen.png](attachment:imagen.png) Such a model is only convenient for simple low-dimensional options because **(the diagram can grow exponentially)**, although recombination keeps the complexity low. With this model we could try to answer: - What is the probability that $S(T)=S_0u^md^{(n-m)}$? - **Discuss how to build the binomial model** - $n,m,p \longrightarrow X\sim Bin(n,p)$ - PMF $\rightarrow P(X=m)={n \choose m}p^m(1-p)^{n-m}$ - Plot the probability mass function for $n=30, p_1=0.2,p_2=0.4$ (a small sketch linking the terminal prices to these probabilities follows below)
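Before the PMF plots below, here is a small sketch connecting the asset-price story to the binomial distribution: it lists each attainable terminal price $S_0u^md^{n-m}$ together with its probability $P(X=m)$. The values of $S_0$, $u$, $d$, $p_u$ and $n$ are made-up illustration numbers, not part of the original exercise.

```python
import numpy as np
from scipy import stats

S0, u, d = 100.0, 1.02, 0.99   # hypothetical starting price and up/down factors
pu, n = 0.55, 10               # hypothetical up-probability and number of steps

m = np.arange(n + 1)                      # number of up moves
prices = S0 * u**m * d**(n - m)           # attainable terminal prices S(T)
probs = stats.binom(n, pu).pmf(m)         # P(S(T) = S0 * u^m * d^(n-m)) = P(X = m)

for mi, price, prob in zip(m, prices, probs):
    print(f'm={mi:2d}  S(T)={price:8.2f}  prob={prob:.4f}')

print('check total probability:', probs.sum())
```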
# Importamos librerías a trabajar en todas las simulaciones import matplotlib.pyplot as plt import numpy as np import scipy.stats as st # Librería estadística from math import factorial as fac # Importo la operación factorial from scipy.special import comb # Importamos la función combinatoria %matplotlib inline # Parámetros de la distribución n = 30; p1=0.2; p2 = 0.4 m = np.arange(0,n) n = n*np.ones(len(m)) # Distribución binomial creada P = lambda p,n,m:comb(n,m)*p**m*(1-p)**(n-m) # Distribución binomial del paquete de estadística P2 = st.binom(n,p1).pmf(m) # Comparación de función creada con función de python plt.plot(P(p1,n,m),'o-',label='Función creada') plt.stem(P2,'r--',label='Función de librería estadística') plt.legend() plt.title('Comparación de funciones') plt.show() # Grafica de pmf para el problema de costo de activos plt.plot(P(p1,n,m),'o-.b',label='$p_1 = 0.2$') plt.plot(st.binom(n,p2).pmf(m),'gv--',label='$p_2 = 0.4$') plt.legend() plt.title('Gráfica de pmf para el problema de costo de activos') plt.show()
_____no_output_____
MIT
TEMA-2/Clase12_EjemplosDeAplicaciones.ipynb
kitziafigueroa/SPF-2019-II
Exercise. Reference problem: Introduction to Operations Research (Chap. 10.1, pp. 471 and 1118). > Download the exercise from the following link: > https://drive.google.com/file/d/19GvzgEmYUNXrZqlmppRyW5t0p8WfUeIf/view?usp=sharing ![imagen.png](attachment:imagen.png) **Pessimistic case** ![imagen.png](attachment:imagen.png) **Possibilities: Most likely** ![imagen.png](attachment:imagen.png) **Optimistic case** ![imagen.png](attachment:imagen.png) **Approximations** 1. **Simplifying Approximation 1:** Assume that the mean critical path will turn out to be the longest path through the project network. 2. **Simplifying Approximation 2:** Assume that the durations of the activities on the mean critical path are statistically independent. $$\mu_p \longrightarrow \text{use Approximation 1}$$ $$\sigma_p \longrightarrow \text{use Approximations 1 and 2}$$ **Choosing the mean critical path** ![imagen.png](attachment:imagen.png) 3. **Simplifying Approximation 3:** Assume that the form of the probability distribution of project duration is a `normal distribution`. By using simplifying approximations 1 and 2, one version of the central limit theorem justifies this assumption as being a reasonable approximation if the number of activities on the mean critical path is not too small (say, at least 5). The approximation becomes better as this number of activities increases. Case studies: We then have the random variable $T$, representing the project duration in weeks, with mean $\mu_p$ and variance $\sigma_p^2$, and $d$, representing the project deadline of 47 weeks. 1. Assume that $T$ is normally distributed and compute the probability $P(T\leq d)$ (a sketch of how $\mu_p$ and $\sigma_p^2$ can be built from three-point activity estimates follows below).
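As context for case study 1 below, here is a hedged sketch of how $\mu_p$ and $\sigma_p^2$ can be built from three-point (optimistic $o$, most likely $m$, pessimistic $p$) activity estimates using the standard PERT formulas $\mu=(o+4m+p)/6$ and $\sigma^2=((p-o)/6)^2$, summed over the activities on the mean critical path (Approximations 1 and 2). The activity numbers here are placeholders, not the figures from the referenced textbook.

```python
import numpy as np
from scipy import stats

# (optimistic, most likely, pessimistic) estimates, in weeks, for the
# activities assumed to lie on the mean critical path -- illustrative only
critical_path = [(3, 4, 11), (4, 6, 8), (5, 9, 13), (6, 7, 14), (7, 10, 13)]

mu_acts = [(o + 4 * m + p) / 6 for o, m, p in critical_path]
var_acts = [((p - o) / 6) ** 2 for o, m, p in critical_path]

mu_p = sum(mu_acts)        # Approximation 1: sum the means along the mean critical path
var_p = sum(var_acts)      # Approximation 2: independent activities, so variances add
sigma_p = np.sqrt(var_p)

d = 47                                     # deadline in weeks
prob = stats.norm(mu_p, sigma_p).cdf(d)    # Approximation 3: normal project duration
print(f'mu_p={mu_p:.1f} weeks, sigma_p={sigma_p:.2f}, P(T<=d)={prob:.3f}')
```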
######### Caso de estudio 1 ################ up = 44; sigma = np.sqrt(9); d = 47 P = st.norm(up,sigma).cdf(d) print('P(T<=d)=',P) P2 = st.beta
P(T<=d)= 0.8413447460685429
MIT
TEMA-2/Clase12_EjemplosDeAplicaciones.ipynb
kitziafigueroa/SPF-2019-II
Lambda School Data Science - Making Data-backed Assertions. This is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Assignment - what's going on here? Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people. Try to figure out which variables are possibly related to each other, and which may be confounding relationships. Try and isolate the main relationships and then communicate them using crosstabs and graphs. Share any cool graphs that you make with the rest of the class in Slack!
# TODO - your code here # Use what we did live in lecture as an example # HINT - you can find the raw URL on GitHub and potentially use that # to load the data with read_csv, or you can upload it yourself import pandas as pd df = pd.read_csv('https://raw.githubusercontent.com/LambdaSchool/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv') df.head() df.columns = ['unique_id', 'age','weight','exercise_time'] df.head() df.dtypes #df.reset_index() exercise_bins = pd.cut(df['exercise_time'],10) pd.crosstab(exercise_bins, df['age'], normalize = 'columns') pd.crosstab(exercise_bins, df['weight'], normalize='columns') weight_bins = pd.cut(df['weight'], 5) pd.crosstab(weight_bins, df['age'], normalize='columns')
_____no_output_____
MIT
module3-databackedassertions/Sanjay_Krishna_LS_DS_113_Making_Data_backed_Assertions_Assignment.ipynb
sanjaykmenon/DS-Unit-1-Sprint-1-Dealing-With-Data
I can't seem to find a relationship because there is too much data to analyze here. I think I will try plotting this to see if I can get a better understanding (a quick numeric check with a correlation matrix is sketched below as well).
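In addition to the pairplot below, a plain correlation matrix gives a quick numeric summary of the pairwise relationships. This is a minimal sketch that assumes the `df` loaded and renamed above is still in memory.

```python
import seaborn as sns

# Pairwise Pearson correlations between age, weight and exercise_time
corr = df[['age', 'weight', 'exercise_time']].corr()
print(corr)

# The same matrix as a heatmap for easier reading
sns.heatmap(corr, annot=True, cmap='coolwarm', vmin=-1, vmax=1)
```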
import seaborn as sns sns.pairplot(df)
_____no_output_____
MIT
module3-databackedassertions/Sanjay_Krishna_LS_DS_113_Making_Data_backed_Assertions_Assignment.ipynb
sanjaykmenon/DS-Unit-1-Sprint-1-Dealing-With-Data
Working with Pytrees [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/google/jax/blob/main/docs/jax-101/05.1-pytrees.ipynb) *Author: Vladimir Mikulik* Often, we want to operate on objects that look like dicts of arrays, or lists of lists of dicts, or other nested structures. In JAX, we refer to these as *pytrees*, but you can sometimes see them called *nests*, or just *trees*. JAX has built-in support for such objects, both in its library functions as well as through the use of functions from [`jax.tree_utils`](https://jax.readthedocs.io/en/latest/jax.tree_util.html) (with the most common ones also available as `jax.tree_*`). This section will explain how to use them, give some useful snippets and point out common gotchas. What is a pytree? As defined in the [JAX pytree docs](https://jax.readthedocs.io/en/latest/pytrees.html): > a pytree is a container of leaf elements and/or more pytrees. Containers include lists, tuples, and dicts. A leaf element is anything that’s not a pytree, e.g. an array. In other words, a pytree is just a possibly-nested standard or user-registered Python container. If nested, note that the container types do not need to match. A single “leaf”, i.e. a non-container object, is also considered a pytree. Some example pytrees:
import jax import jax.numpy as jnp example_trees = [ [1, 'a', object()], (1, (2, 3), ()), [1, {'k1': 2, 'k2': (3, 4)}, 5], {'a': 2, 'b': (2, 3)}, jnp.array([1, 2, 3]), ] # Let's see how many leaves they have: for pytree in example_trees: leaves = jax.tree_leaves(pytree) print(f"{repr(pytree):<45} has {len(leaves)} leaves: {leaves}")
[1, 'a', <object object at 0x7fded60bb8c0>] has 3 leaves: [1, 'a', <object object at 0x7fded60bb8c0>] (1, (2, 3), ()) has 3 leaves: [1, 2, 3] [1, {'k1': 2, 'k2': (3, 4)}, 5] has 5 leaves: [1, 2, 3, 4, 5] {'a': 2, 'b': (2, 3)} has 3 leaves: [2, 2, 3] DeviceArray([1, 2, 3], dtype=int32) has 1 leaves: [DeviceArray([1, 2, 3], dtype=int32)]
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
We've also introduced our first `jax.tree_*` function, which allowed us to extract the flattened leaves from the trees. Why pytrees? In machine learning, some places where you commonly find pytrees are: * model parameters, * dataset entries, * RL agent observations. They also often arise naturally when working in bulk with datasets (e.g., lists of lists of dicts). Common pytree functions: The most commonly used pytree functions are `jax.tree_map` and `jax.tree_multimap`. They work analogously to Python's native `map`, but on entire pytrees. For functions with one argument, use `jax.tree_map`:
list_of_lists = [ [1, 2, 3], [1, 2], [1, 2, 3, 4] ] jax.tree_map(lambda x: x*2, list_of_lists)
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
To use functions with more than one argument, use `jax.tree_multimap`:
another_list_of_lists = list_of_lists jax.tree_multimap(lambda x, y: x+y, list_of_lists, another_list_of_lists)
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
For `tree_multimap`, the structure of the inputs must exactly match. That is, lists must have the same number of elements, dicts must have the same keys, etc. (a quick illustration of a mismatch follows below). Example: ML model parameters. A simple example of training an MLP displays some ways in which pytree operations come in useful:
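Before the MLP example, a quick aside on the structure-matching requirement just mentioned: the sketch below deliberately passes two pytrees whose list lengths differ, so the `tree_multimap` call is expected to fail. The exact exception class and message depend on the JAX version, so the error is simply caught and printed here.

```python
tree_a = [1, 2, 3]
tree_b = [10, 20]        # one element short: the tree structures do not match

try:
    jax.tree_multimap(lambda x, y: x + y, tree_a, tree_b)
except Exception as e:   # exact error type/message varies across JAX versions
    print('structure mismatch:', e)
```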
import numpy as np def init_mlp_params(layer_widths): params = [] for n_in, n_out in zip(layer_widths[:-1], layer_widths[1:]): params.append( dict(weights=np.random.normal(size=(n_in, n_out)) * np.sqrt(2/n_in), biases=np.ones(shape=(n_out,)) ) ) return params params = init_mlp_params([1, 128, 128, 1])
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
We can use `jax.tree_map` to check that the shapes of our parameters are what we expect:
jax.tree_map(lambda x: x.shape, params)
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
Now, let's train our MLP:
def forward(params, x): *hidden, last = params for layer in hidden: x = jax.nn.relu(x @ layer['weights'] + layer['biases']) return x @ last['weights'] + last['biases'] def loss_fn(params, x, y): return jnp.mean((forward(params, x) - y) ** 2) LEARNING_RATE = 0.0001 @jax.jit def update(params, x, y): grads = jax.grad(loss_fn)(params, x, y) # Note that `grads` is a pytree with the same structure as `params`. # `jax.grad` is one of the many JAX functions that has # built-in support for pytrees. # This is handy, because we can apply the SGD update using tree utils: return jax.tree_multimap( lambda p, g: p - LEARNING_RATE * g, params, grads ) import matplotlib.pyplot as plt xs = np.random.normal(size=(128, 1)) ys = xs ** 2 for _ in range(1000): params = update(params, xs, ys) plt.scatter(xs, ys) plt.scatter(xs, forward(params, xs), label='Model prediction') plt.legend();
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
Custom pytree nodesSo far, we've only been considering pytrees of lists, tuples, and dicts; everything else is considered a leaf. Therefore, if you define my own container class, it will be considered a leaf, even if it has trees inside it:
class MyContainer: """A named container.""" def __init__(self, name: str, a: int, b: int, c: int): self.name = name self.a = a self.b = b self.c = c jax.tree_leaves([ MyContainer('Alice', 1, 2, 3), MyContainer('Bob', 4, 5, 6) ])
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
Accordingly, if we try to use a tree map expecting our leaves to be the elements inside the container, we will get an error:
jax.tree_map(lambda x: x + 1, [ MyContainer('Alice', 1, 2, 3), MyContainer('Bob', 4, 5, 6) ])
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
To solve this, we need to register our container with JAX by telling it how to flatten and unflatten it:
from typing import Tuple, Iterable def flatten_MyContainer(container) -> Tuple[Iterable[int], str]: """Returns an iterable over container contents, and aux data.""" flat_contents = [container.a, container.b, container.c] # we don't want the name to appear as a child, so it is auxiliary data. # auxiliary data is usually a description of the structure of a node, # e.g., the keys of a dict -- anything that isn't a node's children. aux_data = container.name return flat_contents, aux_data def unflatten_MyContainer( aux_data: str, flat_contents: Iterable[int]) -> MyContainer: """Converts aux data and the flat contents into a MyContainer.""" return MyContainer(aux_data, *flat_contents) jax.tree_util.register_pytree_node( MyContainer, flatten_MyContainer, unflatten_MyContainer) jax.tree_leaves([ MyContainer('Alice', 1, 2, 3), MyContainer('Bob', 4, 5, 6) ])
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
Modern Python comes equipped with helpful tools to make defining containers easier. Some of these will work with JAX out-of-the-box, but others require more care. For instance:
from typing import NamedTuple, Any class MyOtherContainer(NamedTuple): name: str a: Any b: Any c: Any # Since `tuple` is already registered with JAX, and NamedTuple is a subclass, # this will work out-of-the-box: jax.tree_leaves([ MyOtherContainer('Alice', 1, 2, 3), MyOtherContainer('Bob', 4, 5, 6) ])
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
Notice that the `name` field now appears as a leaf, as all tuple elements are children. That's the price we pay for not having to register the class the hard way. Common pytree gotchas and patterns. Gotchas: Mistaking nodes for leaves. A common problem to look out for is accidentally introducing tree nodes instead of leaves:
a_tree = [jnp.zeros((2, 3)), jnp.zeros((3, 4))] # Try to make another tree with ones instead of zeros shapes = jax.tree_map(lambda x: x.shape, a_tree) jax.tree_map(jnp.ones, shapes)
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
What happened is that the `shape` of an array is a tuple, which is a pytree node, with its elements as leaves. Thus, in the map, instead of calling `jnp.ones` on e.g. `(2, 3)`, it's called on `2` and `3`. The solution will depend on the specifics, but there are two broadly applicable options: * rewrite the code to avoid the intermediate `tree_map`, or * convert the tuple into an `np.array` or `jnp.array`, which makes the entire sequence a leaf. Handling of None: `jax.tree_utils` treats `None` as a node without children, not as a leaf:
jax.tree_leaves([None, None, None])
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
Patterns: Transposing trees. If you would like to transpose a pytree, i.e. turn a list of trees into a tree of lists, you can do so using `jax.tree_multimap`:
def tree_transpose(list_of_trees): """Convert a list of trees of identical structure into a single tree of lists.""" return jax.tree_multimap(lambda *xs: list(xs), *list_of_trees) # Convert a dataset from row-major to column-major: episode_steps = [dict(t=1, obs=3), dict(t=2, obs=4)] tree_transpose(episode_steps)
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
For more complicated transposes, JAX provides `jax.tree_transpose`, which is more verbose, but allows you to specify the structure of the inner and outer pytree for more flexibility:
jax.tree_transpose( outer_treedef = jax.tree_structure([0 for e in episode_steps]), inner_treedef = jax.tree_structure(episode_steps[0]), pytree_to_transpose = episode_steps )
_____no_output_____
ECL-2.0
docs/jax-101/05.1-pytrees.ipynb
slowy07/jax
Preparation
from google.colab import drive drive.mount('/content/drive') import os os.chdir('/content/drive/My Drive/Colab Notebooks/PyTorch/data/pycorrector-words/pycorrector-master-new-abs') !pip install -r requirements.txt !pip install pyltp import pycorrector
_____no_output_____
Apache-2.0
pycorrector_threshold_1.1.ipynb
JohnParken/iigroup
Test results
sent, detail = pycorrector.correct('我是你的眼') print(sent,detail) sentences = [ '他们都很饿了,需要一些食物来充饥', '关于外交事务,我们必须十分谨慎才可以的', '他们都很饿了,需要一些事物来充饥', '关于外交事物,我们必须十分谨慎才可以的', '关于外交食务,我们必须十分谨慎才可以的', '这些方法是非常实用的', '这些方法是非常食用的', '高老师的植物是什么你知道吗', '高老师的值务是什么你知道吗', '高老师的职务是什么你知道马', '你的行为让我们赶到非常震惊', '你的行为让我们感到非常震惊', '他的医生都将在遗憾当中度过', '目前的形势对我们非常有力', '权力和义务是对等的,我们在行使权利的同时,也必须履行相关的义五', '权力和义务是对等的,我们在行使权力的同时', '权利和义务是对等的', '新讲生产建设兵团', '坐位新时代的接班人' '物理取闹', '我们不太敢说话了已经', '此函数其实就是将环境变量座位在path参数里面做替换,如果环境变量不存在,就原样返回。' ] for sentence in sentences: sent, detail = pycorrector.correct(sentence) print(sent, detail) print('\n') sent = '这些方法是非常食用的' sent, detail = pycorrector.correct(sent) print(sent,detail) sent = '这些方法是非常实用的' sent, detail = pycorrector.correct(sent) print(sent,detail) sent = '关于外交事物,我们必须十分谨慎才可以的' sent, detail = pycorrector.correct(sent) print(sent,detail) sent = '关于外交事务,我们必须十分谨慎才可以的' sent, detail = pycorrector.correct(sent) print(sent,detail)
[('关于', 0, 2), ('外交', 2, 4), ('事务', 4, 6), (',', 6, 7), ('我们', 7, 9), ('必须', 9, 11), ('十分', 11, 13), ('谨慎', 13, 15), ('才', 15, 16), ('可以', 16, 18), ('的', 18, 19)] ngram: n=2 [-3.050492286682129, -7.701910972595215, -6.242913246154785, -6.866119384765625, -5.359715938568115, -6.163232326507568, -7.367890357971191, -6.525017738342285, -8.21739387512207, -5.210103988647461, -5.497365951538086, -4.90977668762207] ngram: n=3 [-6.10285758972168, -6.10285758972168, -9.94007682800293, -8.959914207458496, -9.552006721496582, -7.43984317779541, -10.261677742004395, -10.424861907958984, -10.460886001586914, -10.168984413146973, -7.879795551300049, -9.49227237701416, -9.49227237701416] med_abs_deviation: 0.41716365019480506 y_score: [2.06034913 0. 0.59162991 0.43913363 0.37245575 0.6745 1.63484645 1.95325208 0.73589959 0.62460491 0.92836197] median: [-7.65334749] scores: [-6.37906615 -7.65334749 -8.01925778 -7.38175285 -7.42299167 -8.07051114 -8.66446463 -8.86139162 -8.10848546 -7.26704288 -7.07917571] maybe_err: ['十分', 6, 7, 'char'] maybe_err: ['谨慎', 7, 8, 'char'] 关于外交事务失分仅慎们必须十分谨慎才可以的 [['十分', '失分', 6, 7], ['谨慎', '仅慎', 7, 8]]
Apache-2.0
pycorrector_threshold_1.1.ipynb
JohnParken/iigroup
Error-correction debugging (unrelated to the results above)
import jieba words = '权力和义务是对等的' word = jieba.cut(words) print(' '.join(word)) !pip install pyltp import os from pyltp import Segmentor LTP_DATA_DIR='/content/drive/My Drive/Colab Notebooks/PyTorch/data/ltp_data_v3.4.0' cws_model_path=os.path.join(LTP_DATA_DIR,'cws.model') segmentor=Segmentor() segmentor.load(cws_model_path) words=segmentor.segment('权力和义务是对等的') print(type(words)) print(' '.join(words)) words_list = ' '.join(words).split(' ') # segmentor.release() token = list(yield_tuple(words_list)) def yield_tuple(words_list): start = 0 for w in words_list: width = len(w) yield(w, start, start + width) start += width words=segmentor.segment('<s>这些方法是非常实用的</s>') print(type(words)) print(' '.join(words)) # segmentor.release() words=segmentor.segment('这些方法是非常实用的') print(type(words)) print(' '.join(words)) # segmentor.release() for i in range(0): print("hello")
_____no_output_____
Apache-2.0
pycorrector_threshold_1.1.ipynb
JohnParken/iigroup
Week 5 Quiz Perrin Anto - paj2117
# import the datasets module from sklearn from sklearn import datasets # use datasets.load_boston() to load the Boston housing dataset boston = datasets.load_boston() # print the description of the dataset in boston.DESCR print(boston.DESCR) # copy the dataset features from boston.data to X X = boston.data # copy the dataset labels from boston.target to y y = boston.target # import the LinearRegression model from sklearn.linear_model from sklearn.linear_model import LinearRegression # initialize a linear regression model as lr with the default arguments lr = LinearRegression() # fit the lr model using the entire set of X features and y labels lr.fit(X,y) # score the lr model on entire set of X features and y labels lr.score(X,y) # import the DecisionTreeRegressor from sklearn.tree from sklearn.tree import DecisionTreeRegressor # initialize a decision tree model as dt with the default arguments dt = DecisionTreeRegressor() # fit the dt model using the entire set of X features and y labels dt.fit(X,y) # score the dt model on the entire set of X features and y labels dt.score(X,y)
_____no_output_____
CC0-1.0
weekly_quiz/Week_5_Quiz-paj2117.ipynb
perrindesign/data-science-class
from google.colab import drive drive.mount('/gdrive') import cv2 import numpy as np from google.colab.patches import cv2_imshow circles = cv2.imread('/gdrive/My Drive/Colab Notebooks/opencv/circles.png') cv2_imshow(circles) blue_channel = circles[:,:,0] green_channel = circles[:,:,1] red_channel = circles[:,:,2] cv2_imshow(blue_channel) gray = cv2.cvtColor(circles, cv2.COLOR_BGR2GRAY) cv2_imshow(gray) blue = cv2.subtract(blue_channel, gray) cv2_imshow(blue) ret, threshold = cv2.threshold(blue, 110, 255, cv2.THRESH_BINARY) cv2_imshow(threshold) #HSV blue_array = np.uint8([[[255, 0, 0]]]) hsv_blue_array = cv2.cvtColor(blue_array, cv2.COLOR_BGR2HSV) print(hsv_blue_array) img = cv2.imread('/gdrive/My Drive/Colab Notebooks/opencv/circles.png', 1) hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV) cv2_imshow(img) #blue color range blue_low = np.array([110,50,50]) blue_high = np.array([130,255,255]) mask = cv2.inRange(hsv, blue_low, blue_high) cv2_imshow(mask)
_____no_output_____
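A small optional extension of the cell above, sketched with standard OpenCV calls (this step is not part of the original notebook): applying the HSV mask back onto the image keeps only the blue circles.

```python
# keep only the pixels selected by the blue mask; everything else becomes black
blue_only = cv2.bitwise_and(img, img, mask=mask)
cv2_imshow(blue_only)
```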
MIT
opencv_class_2.ipynb
hrnn/image-processing-practice
Creating a client
import boto3

ec2 = boto3.resource('ec2')  # high level client
instances = ec2.instances.all()
for i in instances:
    print(i)

i1 = ec2.Instance(id='i-0cda56764352ef50e')
tag = i1.tags
print(tag)

next((t['Value'] for t in i1.tags if t['Key'] == 'Name'), None)
b = next((t['Value'] for t in i1.tags if t['Key'] == 'dd'), None)
print(b)

def findTag(instance, key, value):
    tags = instance.tags
    if tags is None:
        return False
    tag_value = next((t['Value'] for t in tags if t['Key'] == key), None)
    return tag_value == value

findTag(i1, 'Name', value='tt')
findTag(i1, 'Name', value='june-prod-NAT')
findTag(i1, 'd', value='june-prod-NAT')

for i in instances:
    print(i.instance_id, findTag(i, 'Stop', 'auto'))
i-0cda56764352ef50e False i-0e0c4fa77f5a678b2 True i-07e52d2fbc2ebd266 False i-0d022de22510a69b7 False i-0e701a6507dbae898 False
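A possible follow-up, sketched on the assumption that the `Stop`/`auto` tag check above (whose True/False flags appear in the output) is meant to drive an automatic shutdown; `Instance.stop()` is the standard boto3 resource call, and this loop was not part of the original notebook.

```python
# stop every instance that carries the tag Stop=auto (assumed intent)
for i in ec2.instances.all():
    if findTag(i, 'Stop', 'auto'):
        print('stopping', i.instance_id)
        i.stop()  # consider DryRun=True while testing
```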
MIT
aws/python/AWS boto3 ec2 various test.ipynb
honux77/practice
Methodology

Objective
**Use FAERS data on drug safety to identify possible risk factors associated with patient mortality and other serious adverse events associated with approved use of a drug or drug class**

Data

**_Outcome table_**
1. Start with the outcome_c table to define the unit of analysis (primaryid)
2. Reshape outcome_c to one row per primaryid
3. Outcomes grouped into 3 categories: a. death, b. serious, c. other
4. Multiclass model target format: each outcome grp coded into separate columns

**_Demo table_**
1. Drop fields not used in model input to reduce table size (preferably before import to notebook)
2. Check whether the demo table has one row per primaryid (if NOT, it needs to be reshaped / cleaned - TBD)

**_Model input and targets_**
1. Merge the clean demo table with the reshaped multilabel outcome targets (rows: primaryid, cols: outcome grps)
2. Inspect the merged file to check for anomalies (outliers, bad data, ...)

Model

**_Multilabel Classifier_**
1. Since each primaryid has multiple outcomes coded in the outcome_c table, the ML model should predict the probability of each possible outcome.
2. In scikit-learn, most classifiers can predict multilabel outcomes by coding the target outputs into an array with one 0/1 column per outcome.

Results
TBD

Insights
TBD

Data Pipeline: Outcome Table
# read outcome_c.csv & drop unnecessary fields
infile = '../input/Outc20Q1.csv'
cols_in = ['primaryid', 'outc_cod']
df = pd.read_csv(infile, usecols=cols_in)
print(df.head(), '\n')
print(f'Total number of rows: {len(df):,}\n')
print(f'Unique number of primaryids: {df.primaryid.nunique():,}')

# distribution of outcomes
from collections import Counter
o_cnt = Counter(df['outc_cod'])
print('Distribution of Adverse Event Outcomes in FAERS 2020 Q1')
for k, v in o_cnt.items():
    print(f'{k}: {v:>8,}')
print(72*'-')
print(f'Most common outcome is {o_cnt.most_common(1)[0][0]} with {o_cnt.most_common(1)[0][1]:,} in 2020Q1')

# DO NOT GROUP OUTCOMES FOR MULTILABEL - MUST BE 0 (-1) OR 1 FOR EACH CLASS
### create outcome groups: death: 'DE', serious: ['LT','HO','DS','CA','RI'], other: 'OT'
# - USE TO CREATE OUTCOME GROUPS: key(original code) : value(new code)
# map grp dict to outc_cod
'''
outc_to_grp = {'DE':'death', 'LT':'serious', 'HO':'serious', 'DS':'serious',
               'CA':'serious', 'RI':'serious', 'OT':'other'}
df['oc_cat'] = df['outc_cod'].map(outc_to_grp)
print(df.head(),'\n')
'''

print('Distribution of AE Outcomes')
print(df['outc_cod'].value_counts()/len(df['outc_cod']), '\n')
print(df['outc_cod'].value_counts().plot(kind='pie'))

# outcome grps
print(df['outc_cod'].value_counts()/len(df['outc_cod']), '\n')

# one-hot encoding of outcome grp
# step1: pandas automatic dummy var coding
cat_cols = ['outc_cod']  # , 'oc_cat'
df1 = pd.get_dummies(df, prefix_sep="__", columns=cat_cols)
print('Outcome codes and groups')
print(f'Total number of rows: {len(df1):,}')
print(f'Unique number of primaryids: {df1.primaryid.nunique():,}\n')
print(df1.columns, '\n')
print(df1.head())
print(df1.tail())

# step 2: create multilabel outcomes by primaryid with groupby
outc_lst = ['outc_cod__CA', 'outc_cod__DE', 'outc_cod__DS', 'outc_cod__HO', 'outc_cod__LT',
            'outc_cod__OT', 'outc_cod__RI']
# oc_lst = ['oc_cat__death', 'oc_cat__other', 'oc_cat__serious']
df2 = df1.groupby(['primaryid'])[outc_lst].sum().reset_index()
df2['n_outc'] = df2[outc_lst].sum(axis='columns')  # cnt total outcomes by primaryid
print(df2.columns)
print('-'*72)
print('Outcome codes in Multilabel format')
print(f'Total number of rows: {len(df2):,}')
print(f'Unique number of primaryids: {df2.primaryid.nunique():,}\n')
print(df2.head())
# print(df2.tail())
print(df2[outc_lst].corr())
print(df2.describe().T, '\n')

# plot distribution of outcome groups
'''
color = {'boxes':'DarkGreen', 'whiskers':'DarkOrange', 'medians':'DarkBlue', 'caps':'Gray'}
print(df2[outc_lst].plot.bar())  # color=color, sym='r+'
'''

# check primaryid from outcomes table with many outcomes
# print(df2[df2['n_outc'] >= 6])
# checked in both outcomes and demo - multiple primaryids in outcome but only one primaryid in demo
# appears to be okay to use

# compare primaryids above in outcomes table to same in demo table
# pid_lst = [171962202, 173902932, 174119951, 175773511, 176085111]
# [print(df_demo[df_demo['primaryid'] == p]) for p in pid_lst]
# one row in demo per primaryid - looks ok to join

# save multilabel data to csv
df2.to_csv('../input/outc_cod-multilabel.csv')
_____no_output_____
MIT
faers_multiclass_data_pipeline_1_18_2021.ipynb
briangriner/OSTF-FAERS
Data Pipeline - Demo Table
# step 0: read demo.csv & check fields for missing values
infile = '../input/DEMO20Q1.csv'
# %timeit df_demo = pd.read_csv(infile)  # 1 loop, best of 5: 5.19 s per loop
df_demo = pd.read_csv(infile)
print(df_demo.columns, '\n')
print(f'Percent missing by column:\n{(pd.isnull(df_demo).sum()/len(df_demo))*100}')

# step 1: exclude fields with large percent missing on read to preserve memory
keep_cols = ['primaryid', 'caseversion', 'i_f_code', 'event.dt1', 'mfr_dt', 'init_fda_dt', 'fda_dt',
             'rept_cod', 'mfr_num', 'mfr_sndr', 'age', 'age_cod', 'age_grp', 'sex', 'e_sub',
             'wt', 'wt_cod', 'rept.dt1', 'occp_cod', 'reporter_country', 'occr_country']
# removed cols: ['auth_num','lit_ref','to_mfr']
infile = '../input/DEMO20Q1.csv'
# %timeit df_demo = pd.read_csv(infile, usecols=keep_cols)  # 1 loop, best of 5: 4.5 s per loop
df_demo = pd.read_csv(infile, usecols=keep_cols)
df_demo.set_index('primaryid', drop=False)
print(df_demo.head(), '\n')
print(f'Total number of rows: {len(df_demo):,}\n')
print(f'Percent missing by column:\n{(pd.isnull(df_demo).sum()/len(df_demo))*100}')

# step 2: merge demo and multilabel outcomes on primaryid
df_demo_outc = pd.merge(df_demo, df2, on='primaryid')
print('Demo - Multilabel outcome Merge', '\n')
print(df_demo_outc.head(), '\n')
print(f'Total number of rows: {len(df_demo_outc):,}\n')
print(f'Unique number of primaryids: {df_demo_outc.primaryid.nunique():,}', '\n')
print(f'Percent missing by column:\n{(pd.isnull(df_demo_outc).sum()/len(df_demo_outc))*100}')

# step 3: calculate wt_lbs and check
print(df_demo_outc.wt_cod.value_counts())
print(df_demo_outc.groupby('wt_cod')['wt'].describe())

# convert kg to lbs
df_demo_outc['wt_lbs'] = np.where(df_demo_outc['wt_cod']=='KG', df_demo_outc['wt']*2.204623, df_demo_outc['wt'])
print(df_demo_outc[['age','wt_lbs']].describe())
print(df_demo_outc[['age','wt_lbs']].corr())
print(sns.regplot('age', 'wt_lbs', data=df_demo_outc))
KG 65844 LBS 72 Name: wt_cod, dtype: int64 count mean std min 25% 50% 75% max wt_cod KG 65844.0 73.377305 26.078758 0.0 59.00 72.00 86.26 720.18 LBS 72.0 171.151389 60.316181 17.0 128.75 165.75 195.25 361.00 age wt_lbs count 173965.000000 65916.000000 mean 237.044055 161.779543 std 2050.336650 57.497343 min -3.000000 0.000000 25% 43.000000 130.072757 50% 60.000000 158.732856 75% 72.000000 190.170780 max 41879.000000 1587.725392 age wt_lbs age 1.000000 0.042254 wt_lbs 0.042254 1.000000 AxesSubplot(0.125,0.125;0.775x0.755)
MIT
faers_multiclass_data_pipeline_1_18_2021.ipynb
briangriner/OSTF-FAERS
Insight: No correlation between wt and age + age range looks wrong. Check age distributions
# step 4: check age fields

# age_grp
print('age_grp')
print(df_demo_outc.age_grp.value_counts(), '\n')

# age_cod
print('age_cod')
print(df_demo_outc.age_cod.value_counts(), '\n')

# age
print('age')
print(df_demo_outc.groupby(['age_grp','age_cod'])['age'].describe())
age_grp A 17048 E 8674 N 1004 C 626 T 503 I 344 Name: age_grp, dtype: int64 age_cod YR 168732 DY 2289 MON 1434 DEC 1377 WK 134 HR 11 Name: age_cod, dtype: int64 age count mean std min 25% 50% 75% \ age_grp age_cod A DEC 73.0 4.424658 1.311464 2.0 3.00 5.0 6.00 MON 1.0 19.000000 NaN 19.0 19.00 19.0 19.00 YR 10548.0 46.204115 12.832555 14.0 36.00 49.0 57.00 C MON 4.0 29.500000 5.196152 24.0 26.25 29.0 32.25 YR 315.0 6.726984 3.043486 2.0 4.00 7.0 9.00 E DEC 65.0 7.830769 0.893890 7.0 7.00 8.0 8.00 YR 6096.0 74.605315 7.153633 44.0 69.00 73.0 79.00 I DY 1.0 1.000000 NaN 1.0 1.00 1.0 1.00 MON 63.0 9.190476 5.535391 1.0 5.00 9.0 11.50 WK 4.0 14.250000 14.705441 4.0 6.25 8.5 16.50 YR 12.0 1.166667 0.389249 1.0 1.00 1.0 1.00 N DY 61.0 1.540984 3.423321 0.0 0.00 0.0 1.00 HR 1.0 1.000000 NaN 1.0 1.00 1.0 1.00 MON 14.0 13.857143 11.400790 3.0 5.25 9.5 17.00 YR 6.0 0.166667 0.408248 0.0 0.00 0.0 0.00 T YR 388.0 14.938144 1.631818 12.0 14.00 15.0 16.00 max age_grp age_cod A DEC 6.0 MON 19.0 YR 82.0 C MON 36.0 YR 13.0 E DEC 10.0 YR 103.0 I DY 1.0 MON 23.0 WK 36.0 YR 2.0 N DY 16.0 HR 1.0 MON 34.0 YR 1.0 T YR 19.0
MIT
faers_multiclass_data_pipeline_1_18_2021.ipynb
briangriner/OSTF-FAERS
age_grp, age_cod, age: Distributions by age group & code look reasonable. Create age in yrs.

age_grp
* N - Neonate
* I - Infant
* C - Child
* T - Adolescent (teen?)
* A - Adult
* E - Elderly

age_cod
* DEC - decade (yrs = 10*DEC)
* YR - year (yrs = 1*YR)
* MON - month (yrs = MON/12)
* WK - week (yrs = WK/52)
* DY - day (yrs = DY/365.25)
* HR - hour (yrs = HR/(365.25*24)) or code to zero
# step 5: calculate age_yrs and check corr with wt_lbs
df_demo_outc['age_yrs'] = np.where(df_demo_outc['age_cod']=='DEC', df_demo_outc['age']*10,
                          np.where(df_demo_outc['age_cod']=='MON', df_demo_outc['age']/12,
                          np.where(df_demo_outc['age_cod']=='WK',  df_demo_outc['age']/52,
                          np.where(df_demo_outc['age_cod']=='DY',  df_demo_outc['age']/365.25,
                          np.where(df_demo_outc['age_cod']=='HR',  df_demo_outc['age']/8766,  # hours to years
                                   df_demo_outc['age'])))))

# age_yrs
print('age_yrs')
print(df_demo_outc.groupby(['age_grp','age_cod'])['age_yrs'].describe())
print(df_demo_outc[['age','age_yrs']].describe())
print(df_demo_outc[['wt_lbs','age_yrs']].corr())
print(sns.regplot('wt_lbs', 'age_yrs', data=df_demo_outc))
age_yrs count mean std min 25% \ age_grp age_cod A DEC 73.0 44.246575 13.114645 20.000000 30.000000 MON 1.0 1.583333 NaN 1.583333 1.583333 YR 10548.0 46.204115 12.832555 14.000000 36.000000 C MON 4.0 2.458333 0.433013 2.000000 2.187500 YR 315.0 6.726984 3.043486 2.000000 4.000000 E DEC 65.0 78.307692 8.938895 70.000000 70.000000 YR 6096.0 74.605315 7.153633 44.000000 69.000000 I DY 1.0 0.002738 NaN 0.002738 0.002738 MON 63.0 0.765873 0.461283 0.083333 0.416667 WK 4.0 0.274038 0.282797 0.076923 0.120192 YR 12.0 1.166667 0.389249 1.000000 1.000000 N DY 61.0 0.004219 0.009373 0.000000 0.000000 HR 1.0 1.000000 NaN 1.000000 1.000000 MON 14.0 1.154762 0.950066 0.250000 0.437500 YR 6.0 0.166667 0.408248 0.000000 0.000000 T YR 388.0 14.938144 1.631818 12.000000 14.000000 50% 75% max age_grp age_cod A DEC 50.000000 60.000000 60.000000 MON 1.583333 1.583333 1.583333 YR 49.000000 57.000000 82.000000 C MON 2.416667 2.687500 3.000000 YR 7.000000 9.000000 13.000000 E DEC 80.000000 80.000000 100.000000 YR 73.000000 79.000000 103.000000 I DY 0.002738 0.002738 0.002738 MON 0.750000 0.958333 1.916667 WK 0.163462 0.317308 0.692308 YR 1.000000 1.000000 2.000000 N DY 0.000000 0.002738 0.043806 HR 1.000000 1.000000 1.000000 MON 0.791667 1.416667 2.833333 YR 0.000000 0.000000 1.000000 T YR 15.000000 16.000000 19.000000 age age_yrs count 173965.000000 173965.000000 mean 237.044055 55.906426 std 2050.336650 20.714407 min -3.000000 -3.000000 25% 43.000000 43.000000 50% 60.000000 60.000000 75% 72.000000 71.000000 max 41879.000000 120.000000 wt_lbs age_yrs wt_lbs 1.000000 0.229312 age_yrs 0.229312 1.000000 AxesSubplot(0.125,0.125;0.775x0.755)
MIT
faers_multiclass_data_pipeline_1_18_2021.ipynb
briangriner/OSTF-FAERS
Halis checked and wt in 400-800 range (and max wt of 1,400 lbs) is correct
# review data where wt_lbs > 800 lbs?
print(df_demo_outc[df_demo_outc['wt_lbs'] > 800])

# step 6: Number of AE's reported in 2020Q1 by manufacturer
print('Number of patients with adverse events by manufacturer reported in 2020Q1 from DEMO table:')
print(df_demo_outc.mfr_sndr.value_counts())

# step 7: save updated file to csv
print(df_demo_outc.columns)

# save merged demo & multilabel data to csv
df_demo_outc.to_csv('../input/demo-outc_cod-multilabel-wt_lbs-age_yrs.csv')
Index(['primaryid', 'caseversion', 'i_f_code', 'event.dt1', 'mfr_dt', 'init_fda_dt', 'fda_dt', 'rept_cod', 'mfr_num', 'mfr_sndr', 'age', 'age_cod', 'age_grp', 'sex', 'e_sub', 'wt', 'wt_cod', 'rept.dt1', 'occp_cod', 'reporter_country', 'occr_country', 'outc_cod__CA', 'outc_cod__DE', 'outc_cod__DS', 'outc_cod__HO', 'outc_cod__LT', 'outc_cod__OT', 'outc_cod__RI', 'n_outc', 'wt_lbs', 'age_yrs'], dtype='object')
MIT
faers_multiclass_data_pipeline_1_18_2021.ipynb
briangriner/OSTF-FAERS
ML Pipeline: Preprocessing
# step 0: check cat vars for one-hot coding
cat_lst = ['i_f_code', 'rept_cod', 'sex', 'occp_cod']
[print(df_demo_outc[x].value_counts(), '\n') for x in cat_lst]
print(df_demo_outc[cat_lst].describe(), '\n')
# sex, occp_cod have missing values

# step 1: create one-hot dummies for multilabel outcomes
cat_cols = ['i_f_code', 'rept_cod', 'occp_cod', 'sex']
df = pd.get_dummies(df_demo_outc, prefix_sep="__", columns=cat_cols, drop_first=True)
print(df.columns)
print(df.describe().T)
print(df.head())
Index(['primaryid', 'caseversion', 'event.dt1', 'mfr_dt', 'init_fda_dt', 'fda_dt', 'mfr_num', 'mfr_sndr', 'age', 'age_cod', 'age_grp', 'e_sub', 'wt', 'wt_cod', 'rept.dt1', 'reporter_country', 'occr_country', 'outc_cod__CA', 'outc_cod__DE', 'outc_cod__DS', 'outc_cod__HO', 'outc_cod__LT', 'outc_cod__OT', 'outc_cod__RI', 'n_outc', 'wt_lbs', 'age_yrs', 'i_f_code__I', 'rept_cod__5DAY', 'rept_cod__DIR', 'rept_cod__EXP', 'rept_cod__PER', 'occp_cod__HP', 'occp_cod__LW', 'occp_cod__MD', 'occp_cod__PH', 'sex__M', 'sex__UNK'], dtype='object') count mean std min \ primaryid 260715.0 1.905476e+08 1.567929e+08 39651443.0 caseversion 260715.0 1.950620e+00 2.538483e+00 1.0 age 173965.0 2.370441e+02 2.050337e+03 -3.0 wt 65916.0 7.348410e+01 2.633834e+01 0.0 outc_cod__CA 260715.0 6.129298e-03 7.804969e-02 0.0 outc_cod__DE 260715.0 1.542719e-01 3.612099e-01 0.0 outc_cod__DS 260715.0 2.656157e-02 1.607985e-01 0.0 outc_cod__HO 260715.0 4.048175e-01 4.908576e-01 0.0 outc_cod__LT 260715.0 4.762288e-02 2.129674e-01 0.0 outc_cod__OT 260715.0 6.459544e-01 4.782240e-01 0.0 outc_cod__RI 260715.0 1.373147e-03 3.703062e-02 0.0 n_outc 260715.0 1.286731e+00 5.546336e-01 1.0 wt_lbs 65916.0 1.617795e+02 5.749734e+01 0.0 age_yrs 173965.0 5.590643e+01 2.071441e+01 -3.0 i_f_code__I 260715.0 6.325605e-01 4.821085e-01 0.0 rept_cod__5DAY 260715.0 1.150682e-05 3.392157e-03 0.0 rept_cod__DIR 260715.0 4.473851e-02 2.067296e-01 0.0 rept_cod__EXP 260715.0 8.546420e-01 3.524621e-01 0.0 rept_cod__PER 260715.0 1.006041e-01 3.008044e-01 0.0 occp_cod__HP 260715.0 2.720058e-01 4.449937e-01 0.0 occp_cod__LW 260715.0 1.434517e-02 1.189094e-01 0.0 occp_cod__MD 260715.0 2.788792e-01 4.484489e-01 0.0 occp_cod__PH 260715.0 6.834666e-02 2.523403e-01 0.0 sex__M 260715.0 3.829891e-01 4.861166e-01 0.0 sex__UNK 260715.0 7.671212e-05 8.758226e-03 0.0 25% 50% 75% max primaryid 1.723185e+08 1.736196e+08 1.748495e+08 1.741600e+09 caseversion 1.000000e+00 1.000000e+00 2.000000e+00 9.200000e+01 age 4.300000e+01 6.000000e+01 7.200000e+01 4.187900e+04 wt 5.900000e+01 7.200000e+01 8.640000e+01 7.201800e+02 outc_cod__CA 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 outc_cod__DE 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 outc_cod__DS 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 outc_cod__HO 0.000000e+00 0.000000e+00 1.000000e+00 1.000000e+00 outc_cod__LT 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 outc_cod__OT 0.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 outc_cod__RI 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 n_outc 1.000000e+00 1.000000e+00 1.000000e+00 6.000000e+00 wt_lbs 1.300728e+02 1.587329e+02 1.901708e+02 1.587725e+03 age_yrs 4.300000e+01 6.000000e+01 7.100000e+01 1.200000e+02 i_f_code__I 0.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 rept_cod__5DAY 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 rept_cod__DIR 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 rept_cod__EXP 1.000000e+00 1.000000e+00 1.000000e+00 1.000000e+00 rept_cod__PER 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 occp_cod__HP 0.000000e+00 0.000000e+00 1.000000e+00 1.000000e+00 occp_cod__LW 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 occp_cod__MD 0.000000e+00 0.000000e+00 1.000000e+00 1.000000e+00 occp_cod__PH 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 sex__M 0.000000e+00 0.000000e+00 1.000000e+00 1.000000e+00 sex__UNK 0.000000e+00 0.000000e+00 0.000000e+00 1.000000e+00 primaryid caseversion event.dt1 mfr_dt init_fda_dt fda_dt \ 0 100046942 2 NaN 2020-01-08 2014-03-12 2020-01-10 1 100048206 6 NaN 2020-03-05 
2014-03-12 2020-03-09 2 100048622 2 2005-12-30 2020-03-12 2014-03-12 2020-03-16 3 100051352 2 2006-09-22 2020-02-20 2014-03-12 2020-02-24 4 100051382 2 1999-01-01 2020-01-08 2014-03-12 2020-01-10 mfr_num mfr_sndr age age_cod ... rept_cod__5DAY \ 0 US-PFIZER INC-2014065112 PFIZER NaN NaN ... 0 1 US-PFIZER INC-2014029927 PFIZER 68.0 YR ... 0 2 US-PFIZER INC-2014066653 PFIZER 57.0 YR ... 0 3 US-PFIZER INC-2014072143 PFIZER 51.0 YR ... 0 4 US-PFIZER INC-2014071938 PFIZER 50.0 YR ... 0 rept_cod__DIR rept_cod__EXP rept_cod__PER occp_cod__HP occp_cod__LW \ 0 0 1 0 0 1 1 0 1 0 0 0 2 0 1 0 0 1 3 0 1 0 0 1 4 0 1 0 0 1 occp_cod__MD occp_cod__PH sex__M sex__UNK 0 0 0 0 0 1 1 0 0 0 2 0 0 0 0 3 0 0 0 0 4 0 0 0 0 [5 rows x 38 columns]
MIT
faers_multiclass_data_pipeline_1_18_2021.ipynb
briangriner/OSTF-FAERS
check sklearn for imputation options
# step 2: use means to impute the missing values of the features with missing records
# calculate percent missing
print(df.columns, '\n')
print(f'Percent missing by column:\n{(pd.isnull(df).sum()/len(df))*100}')

num_inputs = ['n_outc', 'wt_lbs', 'age_yrs']
cat_inputs = ['n_outc', 'wt_lbs', 'age_yrs', 'i_f_code__I', 'rept_cod__5DAY', 'rept_cod__DIR',
              'rept_cod__EXP', 'rept_cod__PER', 'occp_cod__HP', 'occp_cod__LW', 'occp_cod__MD',
              'occp_cod__PH', 'sex__M', 'sex__UNK']
inputs = num_inputs + cat_inputs
print(inputs)

target_labels = ['oc_cat__death', 'oc_cat__other', 'oc_cat__serious']

# calculate means
means = df[inputs].mean()
print(means.shape, means)

# mean fill NA
'''
wt_lbs     161.779543
age_yrs     55.906426
'''
df['wt_lbs_mean'] = np.where(pd.isnull(df['wt_lbs']), 161.779543, df['wt_lbs'])
df['age_yrs_mean'] = np.where(pd.isnull(df['age_yrs']), 55.906426, df['age_yrs'])
print('mean fill NA - wt_lbs & age_yrs')
print(df.describe().T)
print(df.columns)

### standardize features
drop_cols = ['primaryid', 'caseid', 'caseversion', 'event.dt1', 'mfr_dt', 'init_fda_dt', 'fda_dt',
             'auth_num', 'mfr_num', 'mfr_sndr', 'lit_ref', 'age', 'age_cod', 'age_grp', 'e_sub',
             'wt', 'wt_cod', 'rept.dt1', 'to_mfr', 'reporter_country', 'occr_country',
             'outc_cod__CA', 'outc_cod__DE', 'outc_cod__DS', 'outc_cod__HO', 'outc_cod__LT',
             'outc_cod__OT', 'outc_cod__RI', 'oc_cat__death', 'oc_cat__other', 'oc_cat__serious',
             'wt_lbs', 'age_yrs']
inputs_mean = ['n_outc', 'wt_lbs_mean', 'age_yrs_mean', 'i_f_code__I', 'rept_cod__5DAY', 'rept_cod__DIR',
               'rept_cod__EXP', 'rept_cod__PER', 'occp_cod__HP', 'occp_cod__LW', 'occp_cod__MD',
               'occp_cod__PH', 'sex__M']

X = df.drop(columns=drop_cols)
print(X.columns)
Xscaled = StandardScaler().fit_transform(X)
print(Xscaled.shape)
# X = pd.DataFrame(scaled, columns=inputs_mean)  # .reset_index()
# print(X.describe().T,'\n')

# y_multilabel = np.c_[df['CA'], df['DE'], df['DS'], df['HO'], df['LT'], df['OT'], df['RI']]
y_multilabel = np.c_[df['oc_cat__death'], df['oc_cat__other'], df['oc_cat__serious']]
print(y_multilabel.shape)

# test multilabel classifier
knn_clf = KNeighborsClassifier()
knn_clf.fit(Xscaled, y_multilabel)
knn_clf.score(Xscaled, y_multilabel)

# review sklearn api - hamming_loss, jaccard_similarity_score, f1_score
from sklearn.metrics import hamming_loss, jaccard_similarity_score
pred_knn_multilabel = knn_clf.predict(Xscaled)
f1_score(y_multilabel, pred_knn_multilabel, average='macro')
_____no_output_____
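Following up on the "check sklearn for imputation options" note above, here is a minimal sketch of how the same mean fill could be done with scikit-learn's `SimpleImputer`. It reuses the column names already defined in this notebook and is offered as an alternative, not as code that was run.

```python
from sklearn.impute import SimpleImputer

# impute wt_lbs and age_yrs with their column means instead of hard-coded values
imputer = SimpleImputer(strategy='mean')
df[['wt_lbs_mean', 'age_yrs_mean']] = imputer.fit_transform(df[['wt_lbs', 'age_yrs']])
```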
MIT
faers_multiclass_data_pipeline_1_18_2021.ipynb
briangriner/OSTF-FAERS
STOPPED HERE - 1.13.2021 ML Pipeline: Model Selection
### define a function for evaluating each of 8 types of supervised learning algorithms
def evaluate_model(predictors, targets, model, param_dict, passes=500):
    seed = int(round(random()*1000, 0))
    print(seed)

    # specify minimum test MSE, best hyperparameter set
    test_err = []
    min_test_err = 1e10
    best_hyperparams = {}

    # specify MSE predicted from the full dataset by the optimal model of each type with the best hyperparameter set
    # full_y_err = None
    full_err_mintesterr = None
    full_err = []

    # specify the final model returned
    ret_model = None

    # define MSE as the statistic to determine goodness-of-fit - the smaller the better
    scorer = make_scorer(mean_squared_error, greater_is_better=False)

    # split the data into a training-testing pair randomly, passes = n times
    for i in range(passes):
        print('Pass {}/{} for model {}'.format(i + 1, passes, model))
        X_train, X_test, y_train, y_test = train_test_split(predictors, targets, test_size=0.3,
                                                            random_state=(i+1)*seed)

        # 3-fold CV on the training set returns an optimal_model fit with the best_params_
        default_model = model()
        model_gs = GridSearchCV(default_model, param_dict, cv=3, n_jobs=-1, verbose=0, scoring=scorer)  # n_jobs=16,
        model_gs.fit(X_train, y_train)
        optimal_model = model(**model_gs.best_params_)
        optimal_model.fit(X_train, y_train)

        # use the optimal_model generated above to predict on the testing set and yield an MSE
        y_pred = optimal_model.predict(X_test)
        err = mean_squared_error(y_test, y_pred)
        test_err.extend([err])

        # apply the optimal_model to the full data set and predict y to yield an MSE
        full_y_pred = optimal_model.predict(predictors)
        full_y_err = mean_squared_error(full_y_pred, targets)
        full_err.extend([full_y_err])

        # look for the smallest MSE yielded from the testing set,
        # so the optimal model that also yields the smallest MSE from the testing set is the final model of the type
        # print('MSE for {}: {}'.format(model, err))
        if err < min_test_err:
            min_test_err = err
            best_hyperparams = model_gs.best_params_
            full_err_mintesterr = full_y_err
            # keep the final model of the type to return
            ret_model = optimal_model

    test_err_dist = pd.DataFrame(test_err, columns=["test_err"]).describe()
    full_err_dist = pd.DataFrame(full_err, columns=["full_err"]).describe()
    print('Model {} with hyperparams {} yielded \n\ttest error {} with distribution \n{} \n\toverall error {} with distribution \n{}'.format(
        model, best_hyperparams, min_test_err, test_err_dist, full_err_mintesterr, full_err_dist))

    return ret_model

# %lsmagic

# Random Forest
# %%timeit
rf = evaluate_model(X, y, RandomForestClassifier,
                    {'n_estimators': [200, 400, 800, 1000],
                     'max_depth': [2, 3, 4, 5],
                     'min_samples_leaf': [2, 3],
                     'min_samples_split': [2, 3, 4],
                     'max_features': ['auto', 'sqrt', 'log2']},
                    passes=1)  # 250
988 Pass 1/1 for model <class 'sklearn.ensemble.forest.RandomForestClassifier'>
MIT
faers_multiclass_data_pipeline_1_18_2021.ipynb
briangriner/OSTF-FAERS
STOPPED HERE - 1.12.2021

TODOs:
1. Multicore processing: Set up Dask for multicore processing in Jupyter Notebook
2. Distributed computing: Check Dask Distributed for local cluster setup
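A minimal sketch of the Dask TODO above, assuming a single-machine `LocalCluster` and that the goal is to parallelize the scikit-learn grid searches inside `evaluate_model`; the worker counts and the reduced parameter grid are illustrative only, not values from this notebook.

```python
import joblib
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=4, threads_per_worker=1)  # illustrative sizing
client = Client(cluster)

# scikit-learn estimators created with n_jobs=-1 (e.g. the GridSearchCV inside
# evaluate_model) fan their work out to the Dask workers under this backend
with joblib.parallel_backend('dask'):
    rf = evaluate_model(X, y, RandomForestClassifier,
                        {'n_estimators': [200, 400], 'max_depth': [3, 5]},
                        passes=1)
```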
from joblib import dump, load
dump(rf, 'binary_rf.obj')

# rf_model
features2 = pd.DataFrame(data=rf.feature_importances_, index=data.columns)
features2.sort_values(by=0, ascending=False, inplace=True)
print(features2[:50])

import seaborn as sns
ax_rf = sns.barplot(x=features2.index, y=features2.iloc[:, 0], order=features2.index)
ax_rf.set_ylabel('Feature importance')
fig_rf = ax_rf.get_figure()

rf_top_features = features2.index[:2].tolist()
print(rf_top_features)

pdp, axes = partial_dependence(rf, X=data, features=[(0, 1)], grid_resolution=20)

fig = plt.figure()
ax = Axes3D(fig)
XX, YY = np.meshgrid(axes[0], axes[1])
Z = pdp[0].T
surf = ax.plot_surface(XX, YY, Z, rstride=1, cstride=1, cmap=plt.cm.BuPu, edgecolor='k')
# ax.set_xlabel('% Severe Housing \nCost Burden', fontsize=12)
# ax.set_ylabel('% Veteran', fontsize=15)
ax.set_xlabel('% mortality diff', fontsize=12)
ax.set_ylabel('% severe housing \ncost burden', fontsize=15)
ax.set_zlabel('Partial dependence', fontsize=15)
ax.view_init(elev=22, azim=330)
plt.colorbar(surf)
plt.suptitle('Partial Dependence of Top 2 Features \nRandom Forest', fontsize=15)
plt.subplots_adjust(top=0.9)
plt.show()

print(features2.index[range(14)])
datafeatures2 = pd.concat([states, y, data[features2.index[range(38)]]], axis=1)
datafeatures2.head(10)

from sklearn.inspection import permutation_importance

# feature names
feature_names = list(features2.columns)

# model - rf
model = load('binary_rf.obj')

# calculate permutation importance - all data - final model
perm_imp_all = permutation_importance(model, data, y, n_repeats=10, random_state=42)
print('Permutation Importances - mean')
print(perm_imp_all.importances_mean)

'''
# create dict of feature names and importances
fimp_dict_all = dict(zip(feature_names, perm_imp_all.importances_mean))

# feature importance - all
print('Permutation Importance for All Data')
print(fimp_dict_all)

# plot importances - all
y_pos = np.arange(len(feature_names))
plt.barh(y_pos, fimp_dict_all.importances_mean, align='center', alpha=0.5)
plt.yticks(y_pos, feature_names)
plt.xlabel('Permutation Importance - All')
plt.title('Feature Importance - All Data')
plt.show()
'''

dataused = pd.concat([states, y, data], axis=1)
print(dataused.shape)
print(dataused.head(10))

# from joblib import dump, load
dump(perm_imp_all, 'perm_imp_rf.obj')

dataused.to_excel(r'dataused_cj08292020_v2.xlsx', index=None, header=True)
_____no_output_____
MIT
faers_multiclass_data_pipeline_1_18_2021.ipynb
briangriner/OSTF-FAERS
END BG RF ANALYSIS - 8.31.2020 OTHER MODELS NOT RUN
# LASSO
lasso = evaluate_model(data, Lasso,
                       {'alpha': np.arange(0, 1.1, 0.001),
                        'normalize': [True],
                        'tol': [1e-3, 1e-4, 1e-5],
                        'max_iter': [1000, 4000, 7000]},
                       passes=250)

# Ridge regression
ridge = evaluate_model(data, Ridge,
                       {'alpha': np.arange(0, 1.1, 0.05),
                        'normalize': [True],
                        'tol': [1e-3, 1e-4, 1e-5],
                        'max_iter': [1000, 4000, 7000]},
                       passes=250)

# K-nearest neighborhood
knn = evaluate_model(data, KNeighborsRegressor,
                     {'n_neighbors': np.arange(1, 8),
                      'algorithm': ['ball_tree', 'kd_tree', 'brute']},
                     passes=250)

# Gradient Boosting Machine
gbm = evaluate_model(data, GradientBoostingRegressor,
                     {'learning_rate': [0.1, 0.05, 0.02, 0.01],
                      'n_estimators': [100, 200, 400, 800, 1000],
                      'min_samples_leaf': [2, 3],
                      'max_depth': [2, 3, 4, 5],
                      'max_features': ['auto', 'sqrt', 'log2']},
                     passes=250)

# CART: classification and regression tree
cart = evaluate_model(data, DecisionTreeRegressor,
                      {'splitter': ['best', 'random'],
                       'criterion': ['mse', 'friedman_mse', 'mae'],
                       'max_depth': [2, 3, 4, 5],
                       'min_samples_leaf': [2, 3],
                       'max_features': ['auto', 'sqrt', 'log2']},
                      passes=250)

# Neural network: multi-layer perceptron
nnmlp = evaluate_model(data, MLPRegressor,
                       {'hidden_layer_sizes': [(50,)*3, (50,)*5, (50,)*10, (50,)*30, (50,)*50],
                        'activation': ['identity', 'logistic', 'tanh', 'relu']},
                       passes=250)

# Support Vector Machine: a linear function is an efficient model to work with
svm = evaluate_model(data, LinearSVR,
                     {'tol': [1e-3, 1e-4, 1e-5],
                      'C': np.arange(0.1, 3, 0.1),
                      'loss': ['epsilon_insensitive', 'squared_epsilon_insensitive'],
                      'max_iter': [1000, 2000, 4000]},
                     passes=250)

features1 = pd.DataFrame(data=gbm.feature_importances_, index=data.columns)
features1.sort_values(by=0, ascending=False, inplace=True)
print(features1[:40])

print(features1.index[range(38)])
datafeatures1 = pd.concat([states, y, data[features1.index[range(38)]]], axis=1)
datafeatures1.head(10)

import seaborn as sns
ax_gbm = sns.barplot(x=features1.index, y=features1.iloc[:, 0], order=features1.index)
ax_gbm.set_ylabel('Feature importance')
fig_gbm = ax_gbm.get_figure()
_____no_output_____
MIT
faers_multiclass_data_pipeline_1_18_2021.ipynb
briangriner/OSTF-FAERS
Problem Simulation Tutorial
import pyblp
import numpy as np
import pandas as pd

pyblp.options.digits = 2
pyblp.options.verbose = False
pyblp.__version__
_____no_output_____
MIT
docs/notebooks/tutorial/simulation.ipynb
Alalalalaki/pyblp
Before configuring and solving a problem with real data, it may be a good idea to perform Monte Carlo analysis on simulated data to verify that it is possible to accurately estimate model parameters. For example, before configuring and solving the example problems in the prior tutorials, it may have been a good idea to simulate data according to the assumed models of supply and demand. During such Monte Carlo analysis, the data would only be used to determine sample sizes and perhaps to choose reasonable true parameters.

Simulations are configured with the :class:`Simulation` class, which requires many of the same inputs as :class:`Problem`. The two main differences are:

1. Variables in formulations that cannot be loaded from `product_data` or `agent_data` will be drawn from independent uniform distributions.
2. True parameters and the distribution of unobserved product characteristics are specified.

First, we'll use :func:`build_id_data` to build market and firm IDs for a model in which there are $T = 50$ markets, and in each market $t$, a total of $J_t = 20$ products produced by $F = 10$ firms.
id_data = pyblp.build_id_data(T=50, J=20, F=10)
_____no_output_____
MIT
docs/notebooks/tutorial/simulation.ipynb
Alalalalaki/pyblp
Next, we'll create an :class:`Integration` configuration to build agent data according to a Gauss-Hermite product rule that exactly integrates polynomials of degree $2 \times 9 - 1 = 17$ or less.
integration = pyblp.Integration('product', 9)
integration
_____no_output_____
MIT
docs/notebooks/tutorial/simulation.ipynb
Alalalalaki/pyblp
We'll then pass these data to :class:`Simulation`. We'll use :class:`Formulation` configurations to create an $X_1$ that consists of a constant, prices, and an exogenous characteristic; an $X_2$ that consists only of the same exogenous characteristic; and an $X_3$ that consists of the common exogenous characteristic and a cost-shifter.
simulation = pyblp.Simulation(
    product_formulations=(
        pyblp.Formulation('1 + prices + x'),
        pyblp.Formulation('0 + x'),
        pyblp.Formulation('0 + x + z')
    ),
    beta=[1, -2, 2],
    sigma=1,
    gamma=[1, 4],
    product_data=id_data,
    integration=integration,
    seed=0
)
simulation
_____no_output_____
MIT
docs/notebooks/tutorial/simulation.ipynb
Alalalalaki/pyblp
When :class:`Simulation` is initialized, it constructs :attr:`Simulation.agent_data` and simulates :attr:`Simulation.product_data`.

The :class:`Simulation` can be further configured with other arguments that determine how unobserved product characteristics are simulated and how marginal costs are specified.

At this stage, simulated variables are not consistent with the true parameters, so we still need to solve the simulation with :meth:`Simulation.replace_endogenous`. This method replaces simulated prices and market shares with values that are consistent with the true parameters. Just like :meth:`ProblemResults.compute_prices`, to do so it iterates over the $\zeta$-markup equation from :ref:`references:Morrow and Skerlos (2011)`.
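As a schematic reminder of the fixed point being iterated (phrased from the cited Morrow and Skerlos reference rather than from pyblp's internals), prices are repeatedly updated via

$$p \leftarrow \hat{c} + \zeta(p),$$

where $\hat{c}$ are marginal costs and $\zeta(p)$ is the markup term implied by the demand system, until the change in $p$ falls below a tolerance.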
simulation_results = simulation.replace_endogenous()
simulation_results
_____no_output_____
MIT
docs/notebooks/tutorial/simulation.ipynb
Alalalalaki/pyblp
Now, we can try to recover the true parameters by creating and solving a :class:`Problem`. The convenience method :meth:`SimulationResults.to_problem` constructs some basic "sums of characteristics" BLP instruments that are functions of all exogenous numerical variables in the problem. In this example, excluded demand-side instruments are the cost-shifter `z` and traditional BLP instruments constructed from `x`. Excluded supply-side instruments are traditional BLP instruments constructed from `x` and `z`.
problem = simulation_results.to_problem()
problem
_____no_output_____
MIT
docs/notebooks/tutorial/simulation.ipynb
Alalalalaki/pyblp
We'll choose starting values that are half the true parameters so that the optimization routine has to do some work. Note that since we're jointly estimating the supply side, we need to provide an initial value for the linear coefficient on prices because this parameter cannot be concentrated out of the problem (unlike linear coefficients on exogenous characteristics).
results = problem.solve(
    sigma=0.5 * simulation.sigma,
    pi=0.5 * simulation.pi,
    beta=[None, 0.5 * simulation.beta[1], None],
    optimization=pyblp.Optimization('l-bfgs-b', {'gtol': 1e-5})
)
results
_____no_output_____
MIT
docs/notebooks/tutorial/simulation.ipynb
Alalalalaki/pyblp
The parameters seem to have been estimated reasonably well.
np.c_[simulation.beta, results.beta]
np.c_[simulation.gamma, results.gamma]
np.c_[simulation.sigma, results.sigma]
_____no_output_____
MIT
docs/notebooks/tutorial/simulation.ipynb
Alalalalaki/pyblp
Softmax exercise

*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*

This exercise is analogous to the SVM exercise. You will:

- implement a fully-vectorized **loss function** for the Softmax classifier
- implement the fully-vectorized expression for its **analytic gradient**
- **check your implementation** with numerical gradient
- use a validation set to **tune the learning rate and regularization** strength
- **optimize** the loss function with **SGD**
- **visualize** the final learned weights
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt

from __future__ import print_function

%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2

def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the linear classifier. These are the same steps as we used for the
    SVM, but condensed to a single function.
    """
    # Load the raw CIFAR-10 data
    cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
    X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)

    # subsample the data
    mask = list(range(num_training, num_training + num_validation))
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = list(range(num_training))
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = list(range(num_test))
    X_test = X_test[mask]
    y_test = y_test[mask]
    mask = np.random.choice(num_training, num_dev, replace=False)
    X_dev = X_train[mask]
    y_dev = y_train[mask]

    # Preprocessing: reshape the image data into rows
    X_train = np.reshape(X_train, (X_train.shape[0], -1))
    X_val = np.reshape(X_val, (X_val.shape[0], -1))
    X_test = np.reshape(X_test, (X_test.shape[0], -1))
    X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))

    # Normalize the data: subtract the mean image
    mean_image = np.mean(X_train, axis=0)
    X_train -= mean_image
    X_val -= mean_image
    X_test -= mean_image
    X_dev -= mean_image

    # add bias dimension and transform into columns
    X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
    X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
    X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
    X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])

    return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev

# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
print('dev data shape: ', X_dev.shape)
print('dev labels shape: ', y_dev.shape)
Train data shape: (49000, 3073) Train labels shape: (49000,) Validation data shape: (1000, 3073) Validation labels shape: (1000,) Test data shape: (1000, 3073) Test labels shape: (1000,) dev data shape: (500, 3073) dev labels shape: (500,)
MIT
assignment1/softmax.ipynb
rahul1990gupta/bcs231n
Softmax ClassifierYour code for this section will all be written inside **cs231n/classifiers/softmax.py**.
# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.

from cs231n.classifiers.softmax import softmax_loss_naive
import time

# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)

# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))
loss: 2.339283 sanity check: 2.302585
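For reference, here is a minimal sketch of what a numerically stable, fully vectorized softmax loss and gradient can look like. It is written for illustration only, is not the contents of `cs231n/classifiers/softmax.py`, and assumes the `reg * sum(W**2)` regularization convention.

```python
def softmax_loss_vectorized_sketch(W, X, y, reg):
    # W: (D, C) weights, X: (N, D) data, y: (N,) integer labels
    num_train = X.shape[0]
    scores = X.dot(W)                                # (N, C)
    scores -= scores.max(axis=1, keepdims=True)      # shift for numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)        # softmax probabilities
    loss = -np.log(probs[np.arange(num_train), y]).mean() + reg * np.sum(W * W)
    dscores = probs
    dscores[np.arange(num_train), y] -= 1            # dL/dscores
    dW = X.T.dot(dscores) / num_train + 2 * reg * W  # chain rule back to the weights
    return loss, dW
```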
MIT
assignment1/softmax.ipynb
rahul1990gupta/bcs231n
Inline Question 1:
Why do we expect our loss to be close to -log(0.1)? Explain briefly.

**Your answer:** *With small random weights, the scores are roughly equal across all 10 classes, so the softmax assigns each class a probability of about 1/10. The cross-entropy loss of the correct class is therefore about -log(0.1) ≈ 2.30, which matches the sanity check above.*
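A quick numeric check of that claim, using nothing from the assignment code:

```python
import numpy as np

probs = np.full(10, 0.1)   # uniform predictions over 10 classes
print(-np.log(probs[0]))   # ≈ 2.3026, close to the loss printed above
```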
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)

# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)

# similar to SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)

# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))

from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))

# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)

# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
learning_rates = [5e-6, 1e-7, 5e-7]
regularization_strengths = [1e3, 2.5e4, 5e4]

################################################################################
# TODO:                                                                        #
# Use the validation set to set the learning rate and regularization strength.#
# This should be identical to the validation that you did for the SVM; save   #
# the best trained softmax classifier in best_softmax.                         #
################################################################################
for lr in learning_rates:
    for reg in regularization_strengths:
        softmax = Softmax()
        loss_hist = softmax.train(X_train, y_train, learning_rate=lr, reg=reg,
                                  num_iters=1500, verbose=True)
        y_train_pred = softmax.predict(X_train)
        y_val_pred = softmax.predict(X_val)
        training_accuracy = np.mean(y_train == y_train_pred)
        validation_accuracy = np.mean(y_val == y_val_pred)
        # append in results
        results[(lr, reg)] = (training_accuracy, validation_accuracy)
        if validation_accuracy > best_val:
            best_val = validation_accuracy
            best_softmax = softmax
################################################################################
#                               END OF YOUR CODE                               #
################################################################################

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)

# evaluate on test set
# Evaluate the best softmax on test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))

# Visualize the learned weights for each class
w = best_softmax.W[:-1, :]  # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
    plt.subplot(2, 5, i + 1)

    # Rescale the weights to be between 0 and 255
    wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
    plt.imshow(wimg.astype('uint8'))
    plt.axis('off')
    plt.title(classes[i])
_____no_output_____
MIT
assignment1/softmax.ipynb
rahul1990gupta/bcs231n
`timeseries` package for fastai v2

> **`timeseries`** is a Timeseries Classification and Regression package for fastai v2.
> It mimics the fastai v2 vision module (fastai2.vision).
> This notebook is a tutorial that shows how to load a timeseries dataset and train a model on it end-to-end.
> The example dataset is the NATOPS dataset (see the description below).
> First, 4 different methods for creating timeseries dataloaders are presented.
> Then, we train a model based on the [Inception Time](https://arxiv.org/pdf/1909.04939.pdf) architecture.

Credit
> timeseries for fastai v2 was inspired by Ignacio Oguiza's timeseriesAI (https://github.com/timeseriesAI/timeseriesAI.git).
> The Inception Time model definition is a modified version of the implementations by [Ignacio Oguiza](https://github.com/timeseriesAI/timeseriesAI/blob/master/torchtimeseries/models/InceptionTime.py) and [Thomas Capelle](https://github.com/tcapelle/TimeSeries_fastai/blob/master/inception.py).

Installing **`timeseries`** on a local machine as an editable package

1- Only if you have not already installed `fastai v2`: install [fastai2](https://dev.fast.ai/Installing) by following the steps described there.

2- Install the timeseries package by following the instructions here below:

```
git clone https://github.com/ai-fast-track/timeseries.git
cd timeseries
pip install -e .
```

pip installing **`timeseries`** from repo either locally or in Google Colab - Start Here

Installing fastai v2
!pip install git+https://github.com/fastai/fastai2.git
Collecting git+https://github.com/fastai/fastai2.git Cloning https://github.com/fastai/fastai2.git to /tmp/pip-req-build-icognque Running command git clone -q https://github.com/fastai/fastai2.git /tmp/pip-req-build-icognque Collecting fastcore Downloading https://files.pythonhosted.org/packages/5d/e4/62d66b9530a777af12049d20592854eb21a826b7cf6fee96f04bd8cdcbba/fastcore-0.1.12-py3-none-any.whl Requirement already satisfied: torch>=1.3.0 in /usr/local/lib/python3.6/dist-packages (from fastai2==0.0.11) (1.4.0) Requirement already satisfied: torchvision>=0.5 in /usr/local/lib/python3.6/dist-packages (from fastai2==0.0.11) (0.5.0) Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from fastai2==0.0.11) (3.1.3) Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from fastai2==0.0.11) (0.25.3) Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from fastai2==0.0.11) (2.21.0) Requirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from fastai2==0.0.11) (3.13) Requirement already satisfied: fastprogress>=0.1.22 in /usr/local/lib/python3.6/dist-packages (from fastai2==0.0.11) (0.2.2) Requirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from fastai2==0.0.11) (6.2.2) Requirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from fastai2==0.0.11) (0.22.1) Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from fastai2==0.0.11) (1.4.1) Requirement already satisfied: spacy in /usr/local/lib/python3.6/dist-packages (from fastai2==0.0.11) (2.1.9) Requirement already satisfied: dataclasses>='0.7'; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from fastcore->fastai2==0.0.11) (0.7) Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from fastcore->fastai2==0.0.11) (1.17.5) Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from torchvision>=0.5->fastai2==0.0.11) (1.12.0) Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->fastai2==0.0.11) (0.10.0) Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->fastai2==0.0.11) (2.6.1) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->fastai2==0.0.11) (2.4.6) Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->fastai2==0.0.11) (1.1.0) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->fastai2==0.0.11) (2018.9) Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->fastai2==0.0.11) (1.24.3) Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->fastai2==0.0.11) (2.8) Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->fastai2==0.0.11) (3.0.4) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->fastai2==0.0.11) (2019.11.28) Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->fastai2==0.0.11) (0.14.1) Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2==0.0.11) (2.0.3) 
Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2==0.0.11) (1.0.2) Requirement already satisfied: preshed<2.1.0,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2==0.0.11) (2.0.1) Requirement already satisfied: plac<1.0.0,>=0.9.6 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2==0.0.11) (0.9.6) Requirement already satisfied: wasabi<1.1.0,>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2==0.0.11) (0.6.0) Requirement already satisfied: srsly<1.1.0,>=0.0.6 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2==0.0.11) (1.0.1) Requirement already satisfied: thinc<7.1.0,>=7.0.8 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2==0.0.11) (7.0.8) Requirement already satisfied: blis<0.3.0,>=0.2.2 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2==0.0.11) (0.2.4) Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib->fastai2==0.0.11) (45.1.0) Requirement already satisfied: tqdm<5.0.0,>=4.10.0 in /usr/local/lib/python3.6/dist-packages (from thinc<7.1.0,>=7.0.8->spacy->fastai2==0.0.11) (4.28.1) Building wheels for collected packages: fastai2 Building wheel for fastai2 (setup.py) ... [?25l[?25hdone Created wheel for fastai2: filename=fastai2-0.0.11-cp36-none-any.whl size=179392 sha256=69eaf43720cb7cce9ee55b2819763266646b3804b779da3bb5729a15741b766e Stored in directory: /tmp/pip-ephem-wheel-cache-ihi2rkgx/wheels/38/fd/31/ec7df01a47c0c9fafe85a1af76b59a86caf47ec649710affa8 Successfully built fastai2 Installing collected packages: fastcore, fastai2 Successfully installed fastai2-0.0.11 fastcore-0.1.12
Apache-2.0
index.ipynb
Massachute/TS
Installing `timeseries` package from github
!pip install git+https://github.com/ai-fast-track/timeseries.git
Collecting git+https://github.com/ai-fast-track/timeseries.git Cloning https://github.com/ai-fast-track/timeseries.git to /tmp/pip-req-build-2010puda Running command git clone -q https://github.com/ai-fast-track/timeseries.git /tmp/pip-req-build-2010puda Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from timeseries==0.0.2) (3.1.3) Requirement already satisfied: fastai2 in /usr/local/lib/python3.6/dist-packages (from timeseries==0.0.2) (0.0.11) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->timeseries==0.0.2) (2.4.6) Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->timeseries==0.0.2) (2.6.1) Requirement already satisfied: numpy>=1.11 in /usr/local/lib/python3.6/dist-packages (from matplotlib->timeseries==0.0.2) (1.17.5) Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->timeseries==0.0.2) (0.10.0) Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->timeseries==0.0.2) (1.1.0) Requirement already satisfied: pillow in /usr/local/lib/python3.6/dist-packages (from fastai2->timeseries==0.0.2) (6.2.2) Requirement already satisfied: pandas in /usr/local/lib/python3.6/dist-packages (from fastai2->timeseries==0.0.2) (0.25.3) Requirement already satisfied: spacy in /usr/local/lib/python3.6/dist-packages (from fastai2->timeseries==0.0.2) (2.1.9) Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from fastai2->timeseries==0.0.2) (1.4.1) Requirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from fastai2->timeseries==0.0.2) (0.22.1) Requirement already satisfied: fastprogress>=0.1.22 in /usr/local/lib/python3.6/dist-packages (from fastai2->timeseries==0.0.2) (0.2.2) Requirement already satisfied: torchvision>=0.5 in /usr/local/lib/python3.6/dist-packages (from fastai2->timeseries==0.0.2) (0.5.0) Requirement already satisfied: torch>=1.3.0 in /usr/local/lib/python3.6/dist-packages (from fastai2->timeseries==0.0.2) (1.4.0) Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from fastai2->timeseries==0.0.2) (2.21.0) Requirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from fastai2->timeseries==0.0.2) (3.13) Requirement already satisfied: fastcore in /usr/local/lib/python3.6/dist-packages (from fastai2->timeseries==0.0.2) (0.1.12) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.1->matplotlib->timeseries==0.0.2) (1.12.0) Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib->timeseries==0.0.2) (45.1.0) Requirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas->fastai2->timeseries==0.0.2) (2018.9) Requirement already satisfied: srsly<1.1.0,>=0.0.6 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2->timeseries==0.0.2) (1.0.1) Requirement already satisfied: murmurhash<1.1.0,>=0.28.0 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2->timeseries==0.0.2) (1.0.2) Requirement already satisfied: thinc<7.1.0,>=7.0.8 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2->timeseries==0.0.2) (7.0.8) Requirement already satisfied: plac<1.0.0,>=0.9.6 in /usr/local/lib/python3.6/dist-packages (from 
spacy->fastai2->timeseries==0.0.2) (0.9.6) Requirement already satisfied: preshed<2.1.0,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2->timeseries==0.0.2) (2.0.1) Requirement already satisfied: cymem<2.1.0,>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2->timeseries==0.0.2) (2.0.3) Requirement already satisfied: blis<0.3.0,>=0.2.2 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2->timeseries==0.0.2) (0.2.4) Requirement already satisfied: wasabi<1.1.0,>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from spacy->fastai2->timeseries==0.0.2) (0.6.0) Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn->fastai2->timeseries==0.0.2) (0.14.1) Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->fastai2->timeseries==0.0.2) (2.8) Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->fastai2->timeseries==0.0.2) (3.0.4) Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->fastai2->timeseries==0.0.2) (1.24.3) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->fastai2->timeseries==0.0.2) (2019.11.28) Requirement already satisfied: dataclasses>='0.7'; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from fastcore->fastai2->timeseries==0.0.2) (0.7) Requirement already satisfied: tqdm<5.0.0,>=4.10.0 in /usr/local/lib/python3.6/dist-packages (from thinc<7.1.0,>=7.0.8->spacy->fastai2->timeseries==0.0.2) (4.28.1) Building wheels for collected packages: timeseries Building wheel for timeseries (setup.py) ... [?25l[?25hdone Created wheel for timeseries: filename=timeseries-0.0.2-cp36-none-any.whl size=349967 sha256=5c4dc9e779bf83f095cdb40069fe8c488f541b8154daaad64ab1b3f9d8fe380f Stored in directory: /tmp/pip-ephem-wheel-cache-dgali9hg/wheels/35/01/01/4fdd69c029e9537c05914ee49520e9d36edaa9b2636f089bfc Successfully built timeseries Installing collected packages: timeseries Successfully installed timeseries-0.0.2
Apache-2.0
index.ipynb
Massachute/TS
*pip Installing - End Here* `Usage`
%reload_ext autoreload
%autoreload 2
%matplotlib inline

from fastai2.basics import *

# hide
# Only for Windows users because symlink to `timeseries` folder is not recognized by Windows
import sys
sys.path.append("..")
from timeseries.all import *
_____no_output_____
Apache-2.0
index.ipynb
Massachute/TS
Tutorial on the timeseries package for fastai v2

Example: NATOPS dataset

Right Arm vs Left Arm (3: 'Not clear' Command (see picture here above))

Description
The data is generated by sensors on the hands, elbows, wrists and thumbs. The data are the x, y, z coordinates for each of the eight locations. The order of the data is as follows:

Channels (24)
0. Hand tip left, X coordinate
1. Hand tip left, Y coordinate
2. Hand tip left, Z coordinate
3. Hand tip right, X coordinate
4. Hand tip right, Y coordinate
5. Hand tip right, Z coordinate
6. Elbow left, X coordinate
7. Elbow left, Y coordinate
8. Elbow left, Z coordinate
9. Elbow right, X coordinate
10. Elbow right, Y coordinate
11. Elbow right, Z coordinate
12. Wrist left, X coordinate
13. Wrist left, Y coordinate
14. Wrist left, Z coordinate
15. Wrist right, X coordinate
16. Wrist right, Y coordinate
17. Wrist right, Z coordinate
18. Thumb left, X coordinate
19. Thumb left, Y coordinate
20. Thumb left, Z coordinate
21. Thumb right, X coordinate
22. Thumb right, Y coordinate
23. Thumb right, Z coordinate

Classes (6)
The six classes are separate actions, with the following meaning:

1: I have command
2: All clear
3: Not clear
4: Spread wings
5: Fold wings
6: Lock wings

Download the data using the `unzip_data` method shown below
dsname = 'NATOPS'  # 'NATOPS', 'LSST', 'Wine', 'Epilepsy', 'HandMovementDirection'
# url = 'http://www.timeseriesclassification.com/Downloads/NATOPS.zip'
path = unzip_data(URLs_TS.NATOPS)
path
_____no_output_____
Apache-2.0
index.ipynb
Massachute/TS
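The channel numbering above is what drives the `chs=[...]` selections used later in the notebook. The following is an illustrative sketch only (this mapping helper is not part of the `timeseries` package) showing how the left-arm channel indices can be derived from the list of sensors:

# Illustrative mapping of NATOPS channel indices, derived from the channel list above.
# Layout assumption: for each sensor, the X, Y, Z coordinates appear in order.
sensors = ['Hand tip left', 'Hand tip right', 'Elbow left', 'Elbow right',
           'Wrist left', 'Wrist right', 'Thumb left', 'Thumb right']
channels = {f'{s}, {axis}': 3 * i + j
            for i, s in enumerate(sensors)
            for j, axis in enumerate(['X', 'Y', 'Z'])}

# e.g. all left-arm channels (matches the chs=[0,1,2,6,7,8,12,13,14,18,19,20] used later)
left_arm = [idx for name, idx in channels.items() if 'left' in name]
print(sorted(left_arm))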
Why do I have to concatenate train and test data?

Both the train and test datasets contain 180 samples each. We concatenate them in order to have one big dataset and then split it into train and valid sets using our own split percentage (20%, 30%, or whatever number you see fit); a short sketch of choosing a different split percentage follows this example.
fname_train = f'{dsname}_TRAIN.arff'
fname_test = f'{dsname}_TEST.arff'
fnames = [path/fname_train, path/fname_test]
fnames

data = TSData.from_arff(fnames)
print(data)

items = data.get_items()

idx = 1
x1, y1 = data.x[idx], data.y[idx]
y1

# You can select any channels to display by supplying a list of channel indices to the `chs` argument
# LEFT ARM
# show_timeseries(x1, title=y1, chs=[0,1,2,6,7,8,12,13,14,18,19,20])
# RIGHT ARM
# show_timeseries(x1, title=y1, chs=[3,4,5,9,10,11,15,16,17,21,22,23])
# ?show_timeseries(x1, title=y1, chs=range(0,24,3))  # Only the x axis coordinates

seed = 42
splits = RandomSplitter(seed=seed)(range_of(items))  # by default, 80% train / 20% valid
splits
_____no_output_____
Apache-2.0
index.ipynb
Massachute/TS
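As mentioned above, concatenating train and test lets you pick your own validation percentage. A minimal sketch, assuming the same `RandomSplitter`/`range_of` API used in the cell above, for a 30% validation split:

# Sketch: a 30% validation split instead of the default 20% (assumes `items` and `seed` from above).
splits_30 = RandomSplitter(valid_pct=0.3, seed=seed)(range_of(items))
len(splits_30[0]), len(splits_30[1])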
Using the `Datasets` class

Creating a `Datasets` object
tfms = [[ItemGetter(0), ToTensorTS()], [ItemGetter(1), Categorize()]]

# Create a dataset
ds = Datasets(items, tfms, splits=splits)

ax = show_at(ds, 2, figsize=(1,1))
3.0
Apache-2.0
index.ipynb
Massachute/TS
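If you want to look at a transformed sample directly rather than through `show_at`, a quick check (a sketch, assuming the usual fastai2 `Datasets` indexing behaviour) could be:

# Sketch: grab one (x, y) pair from the training subset and inspect its shape and label.
x0, y0 = ds.train[0]
print(x0.shape, y0)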
Create `DataLoaders` objects

1st method: using the `Datasets` object
bs = 128

# Normalize at batch time
tfm_norm = Normalize(scale_subtype='per_sample_per_channel', scale_range=(0, 1))  # per_sample, per_sample_per_channel
# tfm_norm = Standardize(scale_subtype='per_sample')
batch_tfms = [tfm_norm]

dls1 = ds.dataloaders(bs=bs, val_bs=bs * 2, after_batch=batch_tfms, num_workers=0, device=default_device())
dls1.show_batch(max_n=9, chs=range(0,12,3))
_____no_output_____
Apache-2.0
index.ipynb
Massachute/TS
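To confirm that the `Normalize` transform above really maps each batch into the requested `(0, 1)` range, a quick check (a sketch, assuming the standard fastai2 `one_batch` API) might be:

# Sketch: batch values should fall inside the scale_range passed to Normalize above.
xb, yb = dls1.one_batch()
print(xb.min().item(), xb.max().item())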
Using the `DataBlock` class

2nd method: using `DataBlock` and `DataBlock.get_items()`
getters = [ItemGetter(0), ItemGetter(1)]

tsdb = DataBlock(blocks=(TSBlock, CategoryBlock),
                 get_items=get_ts_items,
                 getters=getters,
                 splitter=RandomSplitter(seed=seed),
                 batch_tfms=batch_tfms)

tsdb.summary(fnames)

# num_workers=0 is needed on Microsoft Windows
dls2 = tsdb.dataloaders(fnames, num_workers=0, device=default_device())
dls2.show_batch(max_n=9, chs=range(0,12,3))
_____no_output_____
Apache-2.0
index.ipynb
Massachute/TS
3rd method: using `DataBlock` and passing the `items` object to `DataBlock.dataloaders()`
getters = [ItemGetter(0), ItemGetter(1)]

tsdb = DataBlock(blocks=(TSBlock, CategoryBlock),
                 getters=getters,
                 splitter=RandomSplitter(seed=seed))

dls3 = tsdb.dataloaders(data.get_items(), batch_tfms=batch_tfms, num_workers=0, device=default_device())
dls3.show_batch(max_n=9, chs=range(0,12,3))
_____no_output_____
Apache-2.0
index.ipynb
Massachute/TS
4th method: using the `TSDataLoaders` class and `TSDataLoaders.from_files()`
dls4 = TSDataLoaders.from_files(fnames, batch_tfms=batch_tfms, num_workers=0, device=default_device())
dls4.show_batch(max_n=9, chs=range(0,12,3))
_____no_output_____
Apache-2.0
index.ipynb
Massachute/TS
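All four construction methods above build `DataLoaders` over the same data. A small sketch (assuming the `dls1`–`dls4` variables from the previous cells; note `dls1` uses `bs=128` while the others use the default batch size) to compare what they produce:

# Sketch: every DataLoaders built above should yield batches with the same channel and
# sequence-length dimensions (the batch dimension may differ because of different bs values).
for name, dls in [('dls1', dls1), ('dls2', dls2), ('dls3', dls3), ('dls4', dls4)]:
    xb, yb = dls.one_batch()
    print(name, tuple(xb.shape), tuple(yb.shape))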
Train Model
# Number of channels (i.e. dimensions in ARFF and TS files jargon)
c_in = get_n_channels(dls2.train)  # data.n_channels

# Number of classes
c_out = dls2.c

c_in, c_out
_____no_output_____
Apache-2.0
index.ipynb
Massachute/TS
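For NATOPS we expect 24 channels and 6 classes (per the dataset description above), so a quick sanity check on the values just computed could be:

# Sanity check (assumption: NATOPS has 24 channels and 6 classes).
assert (c_in, c_out) == (24, 6), (c_in, c_out)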
Create model
model = inception_time(c_in, c_out).to(device=default_device())
model
_____no_output_____
Apache-2.0
index.ipynb
Massachute/TS
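Before training, it can help to push one batch through the freshly created model and check the output shape; a minimal sketch using the `dls2` and `model` objects defined above:

# Sketch: the model should map a (bs, c_in, seq_len) batch to (bs, c_out) logits.
xb, yb = dls2.one_batch()
with torch.no_grad():
    out = model(xb)
print(xb.shape, '->', out.shape)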
Create Learner object
# Learner
opt_func = partial(Adam, lr=3e-3, wd=0.01)
loss_func = LabelSmoothingCrossEntropy()
learn = Learner(dls2, model, opt_func=opt_func, loss_func=loss_func, metrics=accuracy)
print(learn.summary())
Sequential (Input shape: ['64 x 24 x 51']) ================================================================ Layer (type) Output Shape Param # Trainable ================================================================ Conv1d 64 x 32 x 51 29,952 True ________________________________________________________________ Conv1d 64 x 32 x 51 14,592 True ________________________________________________________________ Conv1d 64 x 32 x 51 6,912 True ________________________________________________________________ MaxPool1d 64 x 24 x 51 0 False ________________________________________________________________ Conv1d 64 x 32 x 51 768 True ________________________________________________________________ BatchNorm1d 64 x 128 x 51 256 True ________________________________________________________________ ReLU 64 x 128 x 51 0 False ________________________________________________________________ Conv1d 64 x 32 x 51 4,128 True ________________________________________________________________ Conv1d 64 x 32 x 51 39,936 True ________________________________________________________________ Conv1d 64 x 32 x 51 19,456 True ________________________________________________________________ Conv1d 64 x 32 x 51 9,216 True ________________________________________________________________ MaxPool1d 64 x 128 x 51 0 False ________________________________________________________________ Conv1d 64 x 32 x 51 4,096 True ________________________________________________________________ BatchNorm1d 64 x 128 x 51 256 True ________________________________________________________________ ReLU 64 x 128 x 51 0 False ________________________________________________________________ Conv1d 64 x 32 x 51 4,128 True ________________________________________________________________ Conv1d 64 x 32 x 51 39,936 True ________________________________________________________________ Conv1d 64 x 32 x 51 19,456 True ________________________________________________________________ Conv1d 64 x 32 x 51 9,216 True ________________________________________________________________ MaxPool1d 64 x 128 x 51 0 False ________________________________________________________________ Conv1d 64 x 32 x 51 4,096 True ________________________________________________________________ BatchNorm1d 64 x 128 x 51 256 True ________________________________________________________________ ReLU 64 x 128 x 51 0 False ________________________________________________________________ ReLU 64 x 128 x 51 0 False ________________________________________________________________ Conv1d 64 x 128 x 51 16,384 True ________________________________________________________________ BatchNorm1d 64 x 128 x 51 256 True ________________________________________________________________ Conv1d 64 x 32 x 51 4,128 True ________________________________________________________________ Conv1d 64 x 32 x 51 39,936 True ________________________________________________________________ Conv1d 64 x 32 x 51 19,456 True ________________________________________________________________ Conv1d 64 x 32 x 51 9,216 True ________________________________________________________________ MaxPool1d 64 x 128 x 51 0 False ________________________________________________________________ Conv1d 64 x 32 x 51 4,096 True ________________________________________________________________ BatchNorm1d 64 x 128 x 51 256 True ________________________________________________________________ ReLU 64 x 128 x 51 0 False ________________________________________________________________ Conv1d 64 x 32 x 51 4,128 True 
________________________________________________________________ Conv1d 64 x 32 x 51 39,936 True ________________________________________________________________ Conv1d 64 x 32 x 51 19,456 True ________________________________________________________________ Conv1d 64 x 32 x 51 9,216 True ________________________________________________________________ MaxPool1d 64 x 128 x 51 0 False ________________________________________________________________ Conv1d 64 x 32 x 51 4,096 True ________________________________________________________________ BatchNorm1d 64 x 128 x 51 256 True ________________________________________________________________ ReLU 64 x 128 x 51 0 False ________________________________________________________________ Conv1d 64 x 32 x 51 4,128 True ________________________________________________________________ Conv1d 64 x 32 x 51 39,936 True ________________________________________________________________ Conv1d 64 x 32 x 51 19,456 True ________________________________________________________________ Conv1d 64 x 32 x 51 9,216 True ________________________________________________________________ MaxPool1d 64 x 128 x 51 0 False ________________________________________________________________ Conv1d 64 x 32 x 51 4,096 True ________________________________________________________________ BatchNorm1d 64 x 128 x 51 256 True ________________________________________________________________ ReLU 64 x 128 x 51 0 False ________________________________________________________________ ReLU 64 x 128 x 51 0 False ________________________________________________________________ Conv1d 64 x 128 x 51 16,384 True ________________________________________________________________ BatchNorm1d 64 x 128 x 51 256 True ________________________________________________________________ AdaptiveAvgPool1d 64 x 128 x 1 0 False ________________________________________________________________ AdaptiveMaxPool1d 64 x 128 x 1 0 False ________________________________________________________________ Flatten 64 x 256 0 False ________________________________________________________________ Linear 64 x 6 1,542 True ________________________________________________________________ Total params: 472,742 Total trainable params: 472,742 Total non-trainable params: 0 Optimizer used: functools.partial(<function Adam at 0x7fb6eb402e18>, lr=0.003, wd=0.01) Loss function: LabelSmoothingCrossEntropy() Callbacks: - TrainEvalCallback - Recorder - ProgressCallback
Apache-2.0
index.ipynb
Massachute/TS
LR find
lr_min, lr_steep = learn.lr_find()
lr_min, lr_steep
_____no_output_____
Apache-2.0
index.ipynb
Massachute/TS
Train
# lr_max = 1e-3
epochs = 30; lr_max = lr_steep; pct_start = .7; moms = (0.95, 0.85, 0.95); wd = 1e-2

learn.fit_one_cycle(epochs, lr_max=lr_max, pct_start=pct_start, moms=moms, wd=wd)
# learn.fit_one_cycle(epochs=20, lr_max=lr_steep)
_____no_output_____
Apache-2.0
index.ipynb
Massachute/TS
Plot loss function
learn.recorder.plot_loss()
_____no_output_____
Apache-2.0
index.ipynb
Massachute/TS
Show results
learn.show_results(max_n=9, chs=range(0,12,3))

#hide
from nbdev.export import notebook2script
# notebook2script()
notebook2script(fname='index.ipynb')

# #hide
# from nbdev.export2html import _notebook2html
# # notebook2script()
# _notebook2html(fname='index.ipynb')
_____no_output_____
Apache-2.0
index.ipynb
Massachute/TS
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import r2_score
from sklearn.preprocessing import RobustScaler
from sklearn.linear_model import Lasso
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import StackingRegressor
import warnings
import random

seed = 42
random.seed(seed)
import numpy as np
np.random.seed(seed)

warnings.filterwarnings('ignore')
plt.style.use('ggplot')

df = pd.read_csv('https://raw.githubusercontent.com/mouctarbalde/concrete-strength-prediction/main/Train.csv')
df.head()

columns_name = df.columns.to_list()
columns_name = ['Cement', 'Blast_Furnace_Slag', 'Fly_Ash', 'Water', 'Superplasticizer',
                'Coarse Aggregate', 'Fine Aggregate', 'Age_day', 'Concrete_compressive_strength']
df.columns = columns_name

df.info()
df.shape

import missingno as ms
ms.matrix(df)

df.isna().sum()
df.describe().T

df.corr()['Concrete_compressive_strength'].sort_values().plot(kind='barh')
plt.title("Correlation based on the target variable.")
plt.show()

sns.heatmap(df.corr(), annot=True)

sns.boxplot(x='Water', y='Cement', data=df)

plt.figure(figsize=(15,9))
df.boxplot()

sns.regplot(x='Water', y='Cement', data=df)
_____no_output_____
MIT
Cement_prediction_.ipynb
mouctarbalde/concrete-strength-prediction
As we can see from the cell above, there is no correlation between **water** and our target variable.
sns.boxplot(x='Age_day', y='Cement', data=df)
sns.regplot(x='Age_day', y='Cement', data=df)

X = df.drop('Concrete_compressive_strength', axis=1)
y = df.Concrete_compressive_strength

X.head()
y.head()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2, random_state=seed)
X_train.shape, y_train.shape
_____no_output_____
MIT
Cement_prediction_.ipynb
mouctarbalde/concrete-strength-prediction
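To put a number on the observation above, an optional check (using the column names defined earlier) is to print the pairwise Pearson correlations directly:

# Pearson correlations: the pair plotted above (Water vs Cement) and Water vs the target column.
print(df['Water'].corr(df['Cement']))
print(df['Water'].corr(df['Concrete_compressive_strength']))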
In our case, our analysis shows the presence of outliers. Although they are not many, we are going to use `RobustScaler` from sklearn to scale the data. `RobustScaler` removes the median and scales the data by the interquartile range (25th–75th percentile), which makes the transformation robust to those outliers.
scale = RobustScaler()

# note we have to fit_transform only on the training data. On your test data you only have to transform.
X_train = scale.fit_transform(X_train)
X_test = scale.transform(X_test)

X_train
_____no_output_____
MIT
Cement_prediction_.ipynb
mouctarbalde/concrete-strength-prediction
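A tiny, self-contained illustration of what `RobustScaler` does (toy numbers, not the concrete dataset): it centres on the median and scales by the interquartile range, so a single extreme value barely affects the scale of the others.

import numpy as np
from sklearn.preprocessing import RobustScaler

# Toy example: the outlier 100.0 does not blow up the scale of the other values,
# because scaling uses the median and the 25th-75th percentile range.
toy = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])
print(RobustScaler().fit_transform(toy).ravel())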
Model creation

Linear Regression
lr = LinearRegression()
lr.fit(X_train, y_train)

pred_lr = lr.predict(X_test)
pred_lr[:10]

mae_lr = mean_absolute_error(y_test, pred_lr)
r2_lr = r2_score(y_test, pred_lr)
print(f'Mean absolute error of linear regression is {mae_lr}')
print(f'R2 score of Linear Regression is {r2_lr}')
Mean absolute error of linear regression is 7.745559243921439 R2 score of Linear Regression is 0.6275531792314843
MIT
Cement_prediction_.ipynb
mouctarbalde/concrete-strength-prediction
**Graph for linear regression** The graph below shows the relationship between the actual and the predicted values.
fig, ax = plt.subplots()
ax.scatter(pred_lr, y_test)
ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()],
        color='red', marker="*", markersize=10)
_____no_output_____
MIT
Cement_prediction_.ipynb
mouctarbalde/concrete-strength-prediction
Decision tree Regression
dt = DecisionTreeRegressor(criterion='mae')
dt.fit(X_train, y_train)

pred_dt = dt.predict(X_test)

mae_dt = mean_absolute_error(y_test, pred_dt)
r2_dt = r2_score(y_test, pred_dt)
print(f'Mean absolute error of decision tree regressor is {mae_dt}')
print(f'R2 score of Decision tree regressor is {r2_dt}')

fig, ax = plt.subplots()
plt.title('Linear relationship for decision tree')
ax.scatter(pred_dt, y_test)
ax.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()],
        color='red', marker="*", markersize=10)
_____no_output_____
MIT
Cement_prediction_.ipynb
mouctarbalde/concrete-strength-prediction
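To wrap up, a short sketch (using the metric variables computed in the cells above) that puts the two models side by side:

# Sketch: compare linear regression and decision tree on MAE and R2.
results = pd.DataFrame(
    {'MAE': [mae_lr, mae_dt], 'R2': [r2_lr, r2_dt]},
    index=['LinearRegression', 'DecisionTreeRegressor'])
print(results)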