Question 6: A website requires users to input a username and password to register. Write a program to check the validity of the passwords input by users. Following are the criteria for checking the password:
1. At least 1 letter between [a-z]
2. At least 1 number between [0-9]
3. At least 1 letter between [A-Z]
4. At least 1 character from [$@]
5. Minimum length of transaction password: 6
6. Maximum length of transaction password: 12

Your program should accept a sequence of comma-separated passwords and check them according to the above criteria. Passwords that match the criteria are to be printed, each separated by a comma.

Example: If the following passwords are given as input to the program: ABd1234@1,a F1,2w3E*,2We3345 then the output of the program should be: ABd1234@1
import re

passwords = input('Enter your password: ').split(',')
valid = []

for p in passwords:
    # Skip any password that fails one of the criteria and move on to the next one.
    if len(p) < 6 or len(p) > 12:
        continue
    elif not re.search('[a-z]', p):
        continue
    elif not re.search('[A-Z]', p):
        continue
    elif not re.search('[0-9]', p):
        continue
    elif not re.search('[!@$%^&]', p):
        continue
    else:
        valid.append(p)

if valid:
    print(','.join(valid))
else:
    print('Invalid Password')
Enter your password: ABd1234@1,a F1#,2w3E*,2We3345 ABd1234@1
CNRI-Python
Programming_Assingment13.ipynb
14vpankaj/iNeuron_Programming_Assignments
First we need to import the following libraries:
- numpy: for working with matrices
- matplotlib: for plotting
- PCA: for dimensionality reduction
- OpenCV: for working with images
- special_ortho_group: for generating an orthonormal basis

Note: if the cv2 library fails to import, you need to install it. Run the following command in a command prompt: pip install opencv-python
!pip install opencv-python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
import cv2
from scipy.stats import special_ortho_group as sog
_____no_output_____
MIT
mini-project 2/CSPRJ2_9816603_abrehforoush.ipynb
Alireza-Abrehforoush/Mathematical-Foundations-of-Data-Science
Project 2: Using Dimensionality Reduction

Part 1.1: Generating data with an orthonormal basis

Perform the following steps:
- First, using np.zeros, create an alpha-vectors array with dimensions N and dim.
- Fill the alpha-vectors variable so that for each index i along its zeroth axis it holds an array drawn from a normal distribution with mean 0 and standard deviation i+1.
- Build the basis matrix V using special_ortho_group.rvs(dim).
- Determine what percentage of the data is preserved in the first ten components for each of the matrices.
- Now generate the following vector and store it in alpha_v:
$$\alpha_1 V_1 + \alpha_2 V_2 + ... + \alpha_d V_d $$
dim = 20
N = 1000
alpha_vectors = np.zeros((N, dim))
for i in range(N):
    alpha_vectors[i] = np.random.normal(0, i + 1, dim)
V = sog.rvs(dim)
alpha_v = np.matmul(alpha_vectors, V)
print(alpha_v)
[[ 1.85796782e-01 5.95693503e-01 -1.06141413e+00 ... 1.25360933e+00 -1.49196549e+00 -1.71645212e+00] [-5.52355256e-01 -2.32208128e-01 -1.45257747e+00 ... -5.75353815e-01 5.32574186e-01 -1.13204072e+00] [ 1.62991497e+00 3.05093383e-01 1.85496227e+00 ... -2.02565163e+00 3.66097985e+00 3.27154903e+00] ... [ 1.37340414e+03 1.19011377e+02 -1.16850578e+02 ... 1.26333703e+03 -6.42699005e+02 -1.42814042e+03] [ 6.25504714e+01 -1.17225432e+03 2.19318481e+03 ... -6.21643322e+02 6.56490839e+02 -1.19807228e+03] [-3.01445100e+02 5.06836655e+02 -1.09693110e+03 ... 7.73835725e+02 -1.90212911e+03 6.61624113e+02]]
MIT
mini-project 2/CSPRJ2_9816603_abrehforoush.ipynb
Alireza-Abrehforoush/Mathematical-Foundations-of-Data-Science
Part 1.2: Using PCA for dimensionality reduction

Perform the following steps:
- First create a PCA object.
- Using the fit method of the PCA object, run PCA on the alpha_v data.
- Inspect the singular vectors via the components_ attribute of the pca object.
- Inspect the singular values via the explained_variance_ attribute of the pca object.
pca = PCA()
pca.fit(alpha_v)
print(pca.components_)
print(pca.explained_variance_)
[[ 3.34017742e-02 -1.18904961e-01 -4.25147294e-01 1.26947115e-01 3.13449083e-01 2.54046525e-01 2.71726916e-01 1.80475336e-01 -1.14252489e-01 -5.92359744e-02 -7.92421226e-02 -3.44459207e-01 2.23677760e-01 4.90444010e-01 -1.12590124e-01 1.54180027e-01 -5.84606532e-03 1.51139626e-01 -6.85230914e-02 1.50337823e-01] [-2.79972536e-01 -3.81875071e-04 -3.04331575e-01 3.97839316e-03 6.49960142e-02 2.92714369e-02 -3.76792291e-01 -3.10187582e-02 -2.83348780e-01 -1.72045979e-01 4.50357740e-01 1.19864404e-01 -2.44852022e-01 2.43681415e-02 -4.27746871e-01 -2.69710242e-01 1.16043399e-01 1.08055343e-01 -9.89675561e-02 5.19226613e-02] [ 5.66013430e-02 1.62255138e-01 1.89629385e-01 -7.90057936e-02 -4.49016462e-01 -2.72328881e-03 -2.47581955e-01 -2.08425381e-01 2.48299359e-01 -7.02058223e-02 -1.62835979e-01 -2.72148420e-01 -2.34592829e-01 4.40363626e-01 -1.12763612e-01 4.19896907e-03 -7.33795257e-03 3.30866712e-01 -2.58390114e-02 2.87861015e-01] [ 1.16596502e-01 2.34476477e-01 2.22083735e-01 2.05160752e-01 2.69413528e-01 7.75101413e-02 1.37592325e-01 1.27251116e-01 5.81002707e-02 2.35254818e-01 3.90541178e-02 9.31818119e-02 7.40481424e-02 -1.54927152e-01 -1.80188308e-01 -4.24604153e-01 -2.67317884e-01 3.11901416e-01 2.96512308e-01 3.98078448e-01] [ 3.59855417e-02 -5.17626653e-02 1.40630251e-01 -8.94043250e-03 -4.58134241e-02 2.05479443e-01 5.48871842e-06 -3.20580899e-01 -2.65748599e-01 1.47602854e-01 2.26601677e-01 -1.99249596e-01 1.99395675e-02 1.42541763e-02 1.17379666e-01 -3.99412283e-02 -3.69565553e-01 -4.88639589e-01 -3.12771623e-01 3.95564094e-01] [-3.20531938e-01 -1.68911883e-01 1.35162555e-01 -3.63467493e-01 1.90352682e-01 2.25246225e-01 -1.76695982e-01 2.20292901e-01 1.91407295e-01 1.20663205e-01 -1.01932209e-02 -3.25768669e-02 7.86288460e-02 -5.91545549e-02 3.12916229e-01 -1.73479473e-01 -1.32131024e-01 3.50213609e-01 -4.66191163e-01 -6.42863412e-02] [ 2.32335584e-02 -3.54696901e-01 1.06423776e-01 9.98315966e-02 1.20139914e-01 -1.55926954e-01 3.65203573e-01 -1.28949230e-01 -1.42193192e-01 2.87579897e-01 -1.20196670e-01 1.39835624e-01 -5.52001633e-01 -6.27135404e-02 -1.02557983e-01 1.59260237e-01 1.90991522e-01 2.18048498e-01 -2.75346628e-01 1.49424910e-01] [ 2.66607071e-01 -1.31786634e-01 9.42340353e-02 -1.10547064e-01 4.24367897e-01 -4.56475974e-01 -2.47516296e-01 -3.38312843e-01 -6.68681288e-02 6.82036170e-02 5.14200233e-02 1.64392102e-01 1.34767759e-01 4.03926834e-01 5.40829871e-02 -4.29992410e-02 -2.25672584e-01 8.94344569e-02 2.53358839e-02 -2.04500309e-01] [-4.08446169e-02 9.59503796e-03 -6.59448189e-02 -6.29938856e-02 1.67424070e-01 9.39861666e-02 -1.08784937e-01 1.44636550e-01 3.08735874e-01 -2.76628434e-01 2.11080379e-02 4.30848437e-01 -1.86817356e-01 -2.15032615e-03 -8.65690559e-02 5.27572346e-01 -3.96089734e-01 -7.74597731e-02 3.25106951e-02 2.78767583e-01] [ 3.45583546e-01 3.81938476e-01 -4.41654295e-02 -1.11420514e-01 -1.00706496e-01 3.77177770e-02 8.90994314e-02 4.27785135e-01 -9.20911845e-02 2.76733882e-01 3.83200526e-01 5.89571753e-02 -3.39218846e-01 2.62329787e-01 2.02288902e-01 4.24411959e-02 -6.28788470e-02 -2.77013631e-02 -7.94458466e-02 -2.11670649e-01] [-4.01362930e-01 2.14021551e-01 1.80599136e-02 1.40353418e-01 -3.62893585e-01 -1.69450347e-01 2.38455836e-01 -3.81773246e-02 -3.66398069e-01 5.67647017e-02 -2.15380061e-02 3.51446668e-01 3.13098232e-01 1.57750842e-01 3.37929823e-02 1.43191165e-01 -2.52062742e-01 2.55866994e-01 -1.32335316e-01 -4.27475643e-02] [ 4.69884055e-01 -2.47952452e-01 7.82992081e-02 2.17049833e-01 -1.87178286e-01 2.90337555e-01 -6.10153074e-02 
1.04029127e-01 -1.50266499e-01 -3.74698139e-01 -1.56568656e-01 4.13465140e-01 7.89671357e-02 9.58532374e-02 1.07083606e-01 -2.88326384e-01 6.86962104e-02 4.73320970e-02 -2.34077780e-01 1.97531559e-02] [-2.41003628e-01 5.71705090e-02 2.47957958e-01 -2.55132140e-01 1.02457970e-01 6.49639303e-02 1.03406771e-01 9.02330524e-02 1.04937013e-01 1.46428165e-01 -4.13585987e-02 3.47270001e-01 1.32369584e-01 4.17429785e-01 -9.97202666e-02 -1.26198961e-01 4.65691046e-01 -3.69967967e-01 5.90622794e-02 2.21106089e-01] [ 6.21256199e-02 4.13005918e-01 -2.41887713e-01 -3.43042911e-01 1.67873370e-01 -1.75186686e-01 -7.36081558e-02 5.00482981e-02 -3.56101437e-01 -1.35878533e-01 -5.59437282e-01 2.09597019e-03 -1.55355825e-01 -1.33788858e-01 9.32025284e-02 -1.18009025e-01 2.91333398e-04 -7.31636155e-02 -1.14657702e-01 2.02955129e-01] [ 1.13340224e-01 -2.90703634e-01 -3.30730196e-01 7.88363473e-03 -2.81952095e-01 -5.09164234e-01 -1.34108566e-01 3.63603497e-01 1.91448426e-01 2.50411291e-01 1.40108620e-02 8.74435726e-03 2.04821198e-01 -5.62295383e-02 -5.22170894e-02 -1.10040652e-01 -3.90952853e-02 -1.15627321e-01 -1.44339731e-01 3.30076886e-01] [ 9.08519506e-02 2.49577158e-01 7.14979119e-02 2.52513184e-01 1.76386132e-01 -1.17496871e-01 -2.12082684e-01 -6.40664908e-02 -4.39238249e-02 -6.92512954e-02 2.67867830e-01 -1.78795097e-02 2.03895436e-01 -1.36030773e-01 3.69568817e-01 3.08116672e-01 4.62306457e-01 2.00492029e-01 -1.47473729e-01 3.47153741e-01] [-1.93508996e-02 -1.89717339e-01 3.98482905e-01 8.09161586e-02 1.25367916e-02 6.57541972e-02 -4.38008703e-01 4.06516596e-01 -4.45337861e-01 1.77397772e-01 -1.91001312e-01 -1.44560579e-01 3.16491616e-02 8.49143890e-03 -1.80651355e-01 2.91253728e-01 -1.81369450e-02 -2.55535792e-02 1.71126756e-01 -2.70168328e-02] [ 1.29105615e-01 -2.93623107e-01 -7.66601312e-03 -5.90494645e-01 -1.61729863e-01 2.41921287e-02 1.91687616e-01 -2.96486105e-02 -2.67072290e-01 -1.53130362e-01 2.62630756e-01 2.05393792e-03 7.47165945e-02 -8.15624057e-02 1.51842084e-01 7.16019864e-02 3.95853045e-02 2.36257499e-01 4.12561592e-01 2.28570232e-01] [-1.03071231e-01 -1.07042472e-01 -4.02887507e-01 1.46197268e-01 -7.13175209e-02 3.05054740e-01 -2.90292365e-01 -2.12756804e-01 -2.14447218e-02 4.42104580e-01 -1.69544225e-01 2.26855307e-01 -1.26069659e-01 7.62996882e-02 3.83517564e-01 -1.64042413e-02 1.74308774e-02 2.95504760e-02 3.43425715e-01 4.72580985e-02] [-3.38168767e-01 -1.50358236e-01 1.21204491e-01 2.73435240e-01 6.98776545e-02 -2.71569663e-01 7.62004119e-02 2.23359419e-01 -5.04083794e-02 -3.58344131e-01 3.48821749e-02 -1.41264091e-01 -3.13295579e-01 2.02242187e-01 4.60755751e-01 -2.31049642e-01 -1.00620747e-01 -1.18946628e-01 2.21317324e-01 9.55729060e-02]] [459338.51256464 426099.91399403 410847.21917205 392410.42768038 380751.72253453 369528.46403096 362403.79314679 351653.36336259 344206.96335909 327495.31605779 315513.97763167 309901.14676994 300603.60437195 295271.1739031 287504.79954667 271718.6479452 264138.64297627 254663.79547324 236554.66406725 223683.16717356]
MIT
mini-project 2/CSPRJ2_9816603_abrehforoush.ipynb
Alireza-Abrehforoush/Mathematical-Foundations-of-Data-Science
Part 1.3: Reducing to 3 dimensions

First create a PCA object with n_components=3. Using the fit method of the PCA object, run PCA on the alpha_v data. The explained_variance_ratio_ attribute of the pca object gives the fraction of the data preserved by each component. How much of the information is preserved when reducing to 3 dimensions?
pca = PCA(n_components = 3)
pca.fit(alpha_v)
print(str(100 * np.sum(pca.explained_variance_ratio_)) + " percent of data is preserved in 3 dimensions!")
19.544901432955598 percent of data is preserved in 3 dimensions!
MIT
mini-project 2/CSPRJ2_9816603_abrehforoush.ipynb
Alireza-Abrehforoush/Mathematical-Foundations-of-Data-Science
How many dimensions do we need in order to preserve 90 percent of the information?
min_dim = 0
for i in range(1, dim):
    pca = PCA(n_components = i)
    pca.fit(alpha_v)
    if (np.sum(pca.explained_variance_ratio_) >= 0.9):
        min_dim = i
        break
print("Almost " + str(100 * np.sum(pca.explained_variance_ratio_)) + " percent of data is preserved in at least " + str(min_dim) + " dimensions!")
Almost 93.01006062812166 percent of data is preserved in at least 18 dimensions!
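The loop above refits PCA once per candidate dimension. A more economical alternative (a sketch, assuming the same alpha_v data and sklearn PCA as above) fits the full decomposition once and reads the answer off the cumulative explained-variance ratio:

# Sketch: fit PCA once, then find the smallest k whose cumulative
# explained-variance ratio reaches 90%.
pca_full = PCA().fit(alpha_v)
cum_ratio = np.cumsum(pca_full.explained_variance_ratio_)
k_90 = int(np.argmax(cum_ratio >= 0.9)) + 1  # argmax returns the first True index
print(f"{k_90} components preserve {100 * cum_ratio[k_90 - 1]:.2f}% of the variance")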
MIT
mini-project 2/CSPRJ2_9816603_abrehforoush.ipynb
Alireza-Abrehforoush/Mathematical-Foundations-of-Data-Science
Part 2.1: Reading the image file

First download a high-quality color image from Google. Then load it using the imread function from the OpenCV library:
image1 = cv2.imread("mona.jpg")
_____no_output_____
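A small defensive check may help here (a sketch; "mona.jpg" is simply the file name used above): cv2.imread returns None rather than raising an error when the file cannot be found, so it is worth failing loudly before the later cells run.

# Sketch: cv2.imread silently returns None on a missing or unreadable file.
if image1 is None:
    raise FileNotFoundError("Could not read 'mona.jpg'; check the path before continuing.")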
MIT
mini-project 2/CSPRJ2_9816603_abrehforoush.ipynb
Alireza-Abrehforoush/Mathematical-Foundations-of-Data-Science
We convert the loaded image to RGB format:
image = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)
_____no_output_____
MIT
mini-project 2/CSPRJ2_9816603_abrehforoush.ipynb
Alireza-Abrehforoush/Mathematical-Foundations-of-Data-Science
As you can see, the loaded image has 3 values per pixel, so for every x*y color image we get an x*y*3 array.
dim = image.shape
print('Image shape =', dim)
Image shape = (720, 483, 3)
MIT
mini-project 2/CSPRJ2_9816603_abrehforoush.ipynb
Alireza-Abrehforoush/Mathematical-Foundations-of-Data-Science
Part 2.2: Displaying the image

Display the loaded image using the imshow function from matplotlib:
plt.imshow(image)
plt.show()
_____no_output_____
MIT
mini-project 2/CSPRJ2_9816603_abrehforoush.ipynb
Alireza-Abrehforoush/Mathematical-Foundations-of-Data-Science
Part 2.3: Preparing the image for dimensionality reduction

Store the three color channels in the matrices R, G, B:
R = image[:, :, 0]
G = image[:, :, 1]
B = image[:, :, 2]
print(R.shape)
print(G.shape)
print(B.shape)
(720, 483) (720, 483) (720, 483)
MIT
mini-project 2/CSPRJ2_9816603_abrehforoush.ipynb
Alireza-Abrehforoush/Mathematical-Foundations-of-Data-Science
Part 2.4: Using PCA for dimensionality reduction

We perform the dimensionality reduction with the PCA class from the sklearn library. Perform the following steps (hint):
- Create a PCA object for each of the matrices R, G, B. Set the number of components to 10.
- Fit the algorithm to the matrices using the fit method of pca.
- Using the explained_variance_ratio_ attribute you can see what percentage of the matrix's data each component carries.
- Determine what percentage of the data is preserved in the first ten components for each of the matrices.
- Plot the explained_variance_ratio_ values using the bar command.
k = 10
rpca = PCA(n_components = k)
gpca = PCA(n_components = k)
bpca = PCA(n_components = k)
rpca.fit(R)
gpca.fit(G)
bpca.fit(B)
print("First " + str(k) + " components of Red Matrix have " + str(100 * np.sum(rpca.explained_variance_ratio_)) + " percent of data.")
print("First " + str(k) + " components of Green Matrix have " + str(100 * np.sum(gpca.explained_variance_ratio_)) + " percent of data.")
print("First " + str(k) + " components of Blue Matrix have " + str(100 * np.sum(bpca.explained_variance_ratio_)) + " percent of data.")
plt.bar([i for i in range(k)], rpca.explained_variance_ratio_, color='red', width=0.4)
plt.xlabel("Red Components")
plt.ylabel("Variance %")
plt.show()
plt.bar([i for i in range(k)], gpca.explained_variance_ratio_, color='green', width=0.4)
plt.xlabel("Green Components")
plt.ylabel("Variance %")
plt.show()
plt.bar([i for i in range(k)], bpca.explained_variance_ratio_, color='blue', width=0.4)
plt.xlabel("Blue Components")
plt.ylabel("Variance %")
plt.show()
_____no_output_____
MIT
mini-project 2/CSPRJ2_9816603_abrehforoush.ipynb
Alireza-Abrehforoush/Mathematical-Foundations-of-Data-Science
Perform the following steps:
- Using the transform method of pca, produce the lower-dimensional data.
- Using inverse_transform, map the data back to the original dimension.
Transform_R = rpca.transform(R)
Transform_G = gpca.transform(G)
Transform_B = bpca.transform(B)
Reduced_R = rpca.inverse_transform(Transform_R)
Reduced_G = gpca.inverse_transform(Transform_G)
Reduced_B = bpca.inverse_transform(Transform_B)
print('Transform Matrix Shape = ', Transform_R.shape)
print('Inverse Transform Matrix Shape = ', Reduced_R.shape)
Transform Matrix Shape = (720, 10) Inverse Transform Matrix Shape = (720, 483)
MIT
mini-project 2/CSPRJ2_9816603_abrehforoush.ipynb
Alireza-Abrehforoush/Mathematical-Foundations-of-Data-Science
Using the concatenate command, stack the three matrices Reduced_R, Reduced_G, Reduced_B together so that an x*y*3 array is created, where x and y are the dimensions of the original image. Using astype, convert the resulting matrix to integers, then display the resulting image with imshow.
Reduced_R = Reduced_R.reshape((dim[0], dim[1], 1))
Reduced_G = Reduced_G.reshape((dim[0], dim[1], 1))
Reduced_B = Reduced_B.reshape((dim[0], dim[1], 1))
reduced_image = np.dstack((Reduced_R, Reduced_G, Reduced_B))
final_image = reduced_image.astype(int)
print('final_image shape = ', final_image.shape)
plt.imshow(final_image)
plt.show()
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
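The cell above uses np.dstack, which for these (x, y, 1) channel arrays is equivalent to concatenating along the last axis. A minimal sketch of the same stacking written with np.concatenate, as the instructions suggest (variable names as above), plus a clip that avoids the imshow range warning seen in the output:

# Sketch: dstack on (x, y, 1) channel arrays == concatenate along axis=2.
reduced_image_alt = np.concatenate((Reduced_R, Reduced_G, Reduced_B), axis=2)
assert reduced_image_alt.shape == (dim[0], dim[1], 3)
# Reconstructed values can fall slightly outside 0..255; clipping silences the imshow warning.
final_image_alt = np.clip(reduced_image_alt, 0, 255).astype(np.uint8)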
MIT
mini-project 2/CSPRJ2_9816603_abrehforoush.ipynb
Alireza-Abrehforoush/Mathematical-Foundations-of-Data-Science
Part 2.5: Using PCA for dimensionality reduction while preserving 99 percent of the data

Repeat all of Part 2.4, but this time set the number of components to a value for which at least 99 percent of the data is preserved in each of the three matrices R, G, B.
k = 188
rpca = PCA(n_components = k)
gpca = PCA(n_components = k)
bpca = PCA(n_components = k)
rpca.fit(R)
gpca.fit(G)
bpca.fit(B)
Transform_R = rpca.transform(R)
Transform_G = gpca.transform(G)
Transform_B = bpca.transform(B)
Reduced_R = rpca.inverse_transform(Transform_R)
Reduced_G = gpca.inverse_transform(Transform_G)
Reduced_B = bpca.inverse_transform(Transform_B)
print('Transform Matrix Shape = ', Transform_R.shape)
print('Inverse Transform Matrix Shape = ', Reduced_R.shape)
Reduced_R = Reduced_R.reshape((dim[0], dim[1], 1))
Reduced_G = Reduced_G.reshape((dim[0], dim[1], 1))
Reduced_B = Reduced_B.reshape((dim[0], dim[1], 1))
reduced_image = np.dstack((Reduced_R, Reduced_G, Reduced_B))
final_image = reduced_image.astype(int)
print('final_image shape = ', final_image.shape)
plt.imshow(final_image)
plt.show()
print(np.sum(rpca.explained_variance_ratio_))
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
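Rather than hand-tuning k = 188, sklearn's PCA also accepts a float in (0, 1) for n_components, in which case it keeps the smallest number of components whose explained variance exceeds that fraction. A sketch of how the per-channel component counts could be found this way (channel matrices R, G, B as above):

# Sketch: let PCA pick the component count that preserves >= 99% of the variance per channel.
for name, channel in [("R", R), ("G", G), ("B", B)]:
    p = PCA(n_components=0.99).fit(channel)
    print(f"{name}: {p.n_components_} components keep "
          f"{100 * p.explained_variance_ratio_.sum():.2f}% of the variance")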
MIT
mini-project 2/CSPRJ2_9816603_abrehforoush.ipynb
Alireza-Abrehforoush/Mathematical-Foundations-of-Data-Science
**Create Database in MongoDB**

![title](Images/mongo.png)

**Connect to Mongo DB Mars DB**
import pymongo

conn = 'mongodb://localhost:27017'
client = pymongo.MongoClient(conn)

# Define database and collection
db = client.mars
collection = db.items
_____no_output_____
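The collection is defined but never written to in this checkpoint notebook. A minimal sketch of how a scraped record could be stored and read back with pymongo (using the `items` collection defined above; the field names here are illustrative, not taken from the original notebook):

# Sketch: store one scraped document and read it back (hypothetical field names).
post = {"news_title": "Example headline", "news_p": "Example teaser paragraph"}
collection.insert_one(post)

for doc in collection.find():
    print(doc["news_title"])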
ADSL
Missions_to_Mars/.ipynb_checkpoints/mission_to_mars-checkpoint.ipynb
goldenMJ/web-scraping-challenge
**Get executable_path**
!which chromedriver
/usr/local/bin/chromedriver
ADSL
Missions_to_Mars/.ipynb_checkpoints/mission_to_mars-checkpoint.ipynb
goldenMJ/web-scraping-challenge
**Step 1 - Scraping**

**NASA Mars News**

Scrape the NASA Mars News Site and collect the latest News Title and Paragraph Text. Assign the text to variables that you can reference later.
from splinter import Browser
from bs4 import BeautifulSoup
import time

def latest_nasa_news():
    executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
    browser = Browser('chrome', **executable_path, headless=False)
    url = "https://mars.nasa.gov/news/?page=0&per_page=40&order=publish_date+desc%2Ccreated_at+desc&search=&category=19%2C165%2C184%2C204&blank_scope=Latest"
    browser.visit(url)
    # Pause to ensure the page has loaded before scraping
    time.sleep(5)
    html = browser.html
    soup = BeautifulSoup(html, 'html.parser')
    news_date = soup.find('div', class_='list_date').text
    news_title = soup.find('div', class_='content_title').text
    news_p = soup.find('div', class_='article_teaser_body').text
    print(news_date)
    print(news_title)
    print(news_p)
    # how to print multiple variables?

latest_nasa_news()
November 27, 2019 NASA's Briefcase-Size MarCO Satellite Picks Up Honors The twin spacecraft, the first of their kind to fly into deep space, earn a Laureate from Aviation Week & Space Technology.
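The comment in the cell above asks how to hand several scraped values back to the caller. One common pattern (a sketch; `package_news` is a hypothetical helper, not part of the original notebook) is to bundle them into a dictionary and return that instead of printing, which also makes the result easy to insert into the Mongo collection defined earlier.

# Sketch: bundle the scraped fields into one return value.
def package_news(news_date, news_title, news_p):
    return {"news_date": news_date, "news_title": news_title, "news_p": news_p}

# e.g. end latest_nasa_news() with:  return package_news(news_date, news_title, news_p)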
ADSL
Missions_to_Mars/.ipynb_checkpoints/mission_to_mars-checkpoint.ipynb
goldenMJ/web-scraping-challenge
**JPL Mars Space Images - Featured Image**

Latest Mars image
def latest_mars_image():
    executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
    browser = Browser('chrome', **executable_path, headless=False)
    url_mars_image = "https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
    browser.visit(url_mars_image)
    # Pause to ensure the page has loaded before scraping
    time.sleep(5)
    html = browser.html
    soup = BeautifulSoup(html, 'html.parser')
    image = soup.find('img', class_='thumb')
    # image output: <img alt="Indus Vallis" class="thumb" src="/spaceimages/images/wallpaper/PIA23573-640x350.jpg" title="Indus Vallis"/>
    # how to save image url and path to display in webpage?

latest_mars_image()
_____no_output_____
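The comment above asks how to turn the scraped `src` attribute into a usable image URL. A minimal sketch (assuming a relative `src` like the one shown in the comment; `base_url` is an assumption matching the site that was visited):

# Sketch: build an absolute image URL from the scraped relative src.
from urllib.parse import urljoin

base_url = "https://www.jpl.nasa.gov"
relative_src = "/spaceimages/images/wallpaper/PIA23573-640x350.jpg"  # e.g. image['src']
featured_image_url = urljoin(base_url, relative_src)
print(featured_image_url)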
ADSL
Missions_to_Mars/.ipynb_checkpoints/mission_to_mars-checkpoint.ipynb
goldenMJ/web-scraping-challenge
**Twitter Latest Mars Weather**
def latest_mars_weather():
    executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
    browser = Browser('chrome', **executable_path, headless=False)
    url_mars_weather = "https://twitter.com/marswxreport?lang=en"
    browser.visit(url_mars_weather)
    # Pause to ensure the page has loaded before scraping
    time.sleep(5)
    soup = BeautifulSoup(browser.html, 'html.parser')
    latest_weather = soup.find('p', class_='TweetTextSize').text
    print('Current Weather on Mars')
    print(latest_weather)

latest_mars_weather()

import requests
import lxml.html as lh
import pandas as pd

def mars_facts():
    executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
    browser = Browser('chrome', **executable_path, headless=False)
    url_mars_facts = "http://space-facts.com/mars/"
    browser.visit(url_mars_facts)
    # Pause to ensure the page has loaded before scraping
    time.sleep(5)
    soup = BeautifulSoup(browser.html, 'html.parser')
    mars_facts_table = soup.find("table", {"class": "tablepress tablepress-id-p-mars"})
    df_mars_facts = pd.read_html(str(mars_facts_table))
    print(df_mars_facts)
    return soup

soup = mars_facts()

latest_weather = soup.find('td', class_='column-2')
for weather in latest_weather:
    print('----------------------------------')
    print(weather)
---------------------------------- 6,792 km ---------------------------------- <br/>
ADSL
Missions_to_Mars/.ipynb_checkpoints/mission_to_mars-checkpoint.ipynb
goldenMJ/web-scraping-challenge
**Mars Hemispheres**

Visit the USGS Astrogeology site here to obtain high-resolution images for each of Mars's hemispheres. You will need to click each of the links to the hemispheres in order to find the image URL for the full-resolution image. Save both the image URL string for the full-resolution hemisphere image and the hemisphere title containing the hemisphere name. Use a Python dictionary to store the data using the keys img_url and title. Append the dictionary with the image URL string and the hemisphere title to a list. This list will contain one dictionary for each hemisphere.
def mars_image():
    executable_path = {'executable_path': '/usr/local/bin/chromedriver'}
    browser = Browser('chrome', **executable_path, headless=False)
    url = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars"
    browser.visit(url)
    # need a pause to ensure page has loaded before scraping?
    soup = BeautifulSoup(browser.html, 'html.parser')
    div = soup.find('div', class_='results').findAll('div', class_='description')
    print(div)

mars_image()
[<div class="description"><a class="itemLink product-item" href="/search/map/Mars/Viking/cerberus_enhanced"><h3>Cerberus Hemisphere Enhanced</h3></a><span class="subtitle" style="float:left">image/tiff 21 MB</span><span class="pubDate" style="float:right"></span><br/><p>Mosaic of the Cerberus hemisphere of Mars projected into point perspective, a view similar to that which one would see from a spacecraft. This mosaic is composed of 104 Viking Orbiter images acquired…</p></div>, <div class="description"><a class="itemLink product-item" href="/search/map/Mars/Viking/schiaparelli_enhanced"><h3>Schiaparelli Hemisphere Enhanced</h3></a><span class="subtitle" style="float:left">image/tiff 35 MB</span><span class="pubDate" style="float:right"></span><br/><p>Mosaic of the Schiaparelli hemisphere of Mars projected into point perspective, a view similar to that which one would see from a spacecraft. The images were acquired in 1980 during early northern…</p></div>, <div class="description"><a class="itemLink product-item" href="/search/map/Mars/Viking/syrtis_major_enhanced"><h3>Syrtis Major Hemisphere Enhanced</h3></a><span class="subtitle" style="float:left">image/tiff 25 MB</span><span class="pubDate" style="float:right"></span><br/><p>Mosaic of the Syrtis Major hemisphere of Mars projected into point perspective, a view similar to that which one would see from a spacecraft. This mosaic is composed of about 100 red and violet…</p></div>, <div class="description"><a class="itemLink product-item" href="/search/map/Mars/Viking/valles_marineris_enhanced"><h3>Valles Marineris Hemisphere Enhanced</h3></a><span class="subtitle" style="float:left">image/tiff 27 MB</span><span class="pubDate" style="float:right"></span><br/><p>Mosaic of the Valles Marineris hemisphere of Mars projected into point perspective, a view similar to that which one would see from a spacecraft. The distance is 2500 kilometers from the surface of…</p></div>]
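The task above asks for a list of {title, img_url} dictionaries, but the cell only prints the description divs. A sketch of how those divs could be turned into that structure, assuming the markup shown in the output (each description holds an h3 title and an anchor to a detail page); following each link to fetch the true full-resolution image URL is omitted here, so img_url below is only the detail-page link:

# Sketch: build the list of hemisphere dictionaries from the scraped description divs.
def hemisphere_entries(description_divs, base_url="https://astrogeology.usgs.gov"):
    entries = []
    for d in description_divs:
        title = d.find('h3').text
        detail_link = base_url + d.find('a')['href']  # page that holds the full-res image
        entries.append({"title": title, "img_url": detail_link})
    return entries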
ADSL
Missions_to_Mars/.ipynb_checkpoints/mission_to_mars-checkpoint.ipynb
goldenMJ/web-scraping-challenge
Generate a realization of 5000 points within a single halo of conc=5
import numpy as np
import matplotlib.pyplot as plt
from ellipsoidal_nfw import random_nfw_ellipsoid

npts = 5_000
conc = np.zeros(npts) + 5.
x, y, z = random_nfw_ellipsoid(conc, b=2, c=3)

fig, (ax0, ax1, ax2) = plt.subplots(1, 3, figsize=(12, 4))
fig.tight_layout(pad=3.0)
for ax in ax0, ax1, ax2:
    xlim = ax.set_xlim(-4, 4)
    ylim = ax.set_ylim(-4, 4)
__ = ax0.scatter(x, y, s=1)
__ = ax1.scatter(x, z, s=1)
__ = ax2.scatter(y, z, s=1)
xlabel = ax0.set_xlabel(r'$x$')
ylabel = ax0.set_ylabel(r'$y$')
xlabel = ax1.set_xlabel(r'$x$')
ylabel = ax1.set_ylabel(r'$z$')
xlabel = ax2.set_xlabel(r'$y$')
ylabel = ax2.set_ylabel(r'$z$')
fig.savefig('ellipsoidal_nfw.png', bbox_extra_artists=[xlabel, ylabel], bbox_inches='tight', dpi=200)
_____no_output_____
BSD-3-Clause
notebooks/demo_ellipsoidal_nfw.ipynb
aphearin/ellipsoidal_nfw
Generate a realization of a collection of 10 halos, each with 5000 points, each with different concentrations
npts_per_halo = 5_000
n_halos = 10
conc = np.linspace(5, 25, n_halos)
conc_halopop = np.repeat(conc, npts_per_halo)
x, y, z = random_nfw_ellipsoid(conc_halopop, b=2, c=3)
x = x.reshape((n_halos, npts_per_halo))
y = y.reshape((n_halos, npts_per_halo))
z = z.reshape((n_halos, npts_per_halo))
_____no_output_____
BSD-3-Clause
notebooks/demo_ellipsoidal_nfw.ipynb
aphearin/ellipsoidal_nfw
This is Task 2 of the GRIP internship: To Explore Supervised Machine Learning
# Importing all the libraries required for the code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Loading Data from the given URL
url = "http://bit.ly/w-data"
data = pd.read_csv(url)
print("shape of dataset: {}".format(data.shape))
print(data.head(5))

# Plotting the distribution of score using matplotlib
plt.figure(figsize=(10, 6), dpi=100)
plt.title("Distribution of Score")
plt.xlabel("Hours")
plt.ylabel("Scores")
plt.scatter(data.Hours, data.Scores, color="b", marker="*")
plt.show()
_____no_output_____
MIT
#Task_3/task_3.ipynb
ViKrAm-Bais/sparks_foundation_grip
Preparing the data for training

Dividing the data into attributes (Inputs) and labels (Outputs)
# Dividing the data into attributes (Inputs) and labels (Outputs)
x = data.iloc[:, :-1].values
y = data.iloc[:, 1].values
_____no_output_____
MIT
#Task_3/task_3.ipynb
ViKrAm-Bais/sparks_foundation_grip
Splitting the dataset into training and testing data
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)
print("training dataset shape: {}".format(x_train.shape))
print("testing dataset shape: {}".format(x_test.shape))
training dataset shape: (20, 1) testing dataset shape: (5, 1)
MIT
#Task_3/task_3.ipynb
ViKrAm-Bais/sparks_foundation_grip
Training the model
# Importing library for linear regression
from sklearn.linear_model import LinearRegression

model = LinearRegression()
model.fit(x_train, y_train)
_____no_output_____
MIT
#Task_3/task_3.ipynb
ViKrAm-Bais/sparks_foundation_grip
Plotting the linear regression line with training data
# Defining the equation of the line
print("coefficient: {}, intercept: {}".format(model.coef_, model.intercept_))
line = model.coef_ * x + model.intercept_

# Plotting the line with the data
plt.figure(figsize=(10, 6), dpi=100)
# Plotting training data
plt.scatter(x_train, y_train, color="c", marker="*")
# Plotting testing data
plt.scatter(x_test, y_test, color="m", marker="+")
plt.plot(x, line)
plt.title("Distribution of Score and Regression line")
plt.xlabel("Hours")
plt.ylabel("Scores")
plt.show()
coefficient: [9.91065648], intercept: 2.018160041434662
MIT
#Task_3/task_3.ipynb
ViKrAm-Bais/sparks_foundation_grip
Results
# Getting predictions for the test data
y_predicted = model.predict(x_test)

# Comparing Actual vs Predicted
df = pd.DataFrame({'Actual': y_test, 'Predicted': y_predicted})
print(df)
Actual Predicted 0 20 16.884145 1 27 33.732261 2 69 75.357018 3 30 26.794801 4 62 60.491033
MIT
#Task_3/task_3.ipynb
ViKrAm-Bais/sparks_foundation_grip
Predicted score if a student studies for 9.25 hrs in a day
s_hours = 9.25
s_score = model.predict([[s_hours]])
print("predicted score if a student study for {} hrs in a day is {}".format(s_hours, s_score[0]))
predicted score if a student study for 9.25 hrs in a day is 93.69173248737539
MIT
#Task_3/task_3.ipynb
ViKrAm-Bais/sparks_foundation_grip
Calculating error for the model
from sklearn import metrics

print('Mean Absolute Error: {}'.format(metrics.mean_absolute_error(y_test, y_predicted)))
accuracy = 100 * model.score(x_test, y_test)
print("Accuracy(%): ", accuracy)
_____no_output_____
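For reference, a possible extension of the error report (a sketch using the same `metrics` module and the `y_test`/`y_predicted` arrays from above) is the root-mean-squared error, which penalizes large misses more heavily than MAE does:

# Sketch: RMSE alongside the MAE computed above.
rmse = np.sqrt(metrics.mean_squared_error(y_test, y_predicted))
print('Root Mean Squared Error: {}'.format(rmse))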
MIT
#Task_3/task_3.ipynb
ViKrAm-Bais/sparks_foundation_grip
Project 2: Spotify

Table of Contents

Background Knowledge: Topic
1. The Data Science Life Cycle
   a. Formulating a question or problem
   b. Acquiring and cleaning data
   c. Conducting exploratory data analysis
   d. Using prediction and inference to draw conclusions

Background Knowledge

If you listen to music, chances are you use Spotify, Apple Music, or another similar streaming service. This new era of the music industry curates playlists, recommends new artists, and is based on the number of streams more than the number of albums sold. The way these streaming services do this is (you guessed it) data! Spotify, like many other companies, hires many full-time data scientists to analyze all the incoming user data and use it to make predictions and recommendations for users. If you're interested, feel free to check out [Spotify's Engineering Page](https://engineering.atspotify.com/) for more information! Image Reference

The Data Science Life Cycle

Formulating a Question or Problem

It is important to ask questions that will be informative and can be answered using the data. There are many different questions we could ask about music data. For example, there are many artists who want to find out how to get their music on Spotify's Discover Weekly playlist in order to gain exposure. Similarly, users love to see their *Spotify Wrapped* listening reports at the end of each year.

Question: Recall the questions you developed with your group on Tuesday. Write down that question below, and try to add on to it with the context from the articles from Wednesday. Think about what data you would need to answer your question. You can review the articles on the bCourses page under Module 4.3.

**Original Question(s):** *here*

**Updated Question(s):** *here*

**Data you would need:** *here*

Acquiring and Cleaning Data

We'll be looking at song data from Spotify. You can find the raw data [here](https://github.com/rfordatascience/tidytuesday/tree/master/data/2020/2020-01-21). We've cleaned up the datasets a bit, and we will be investigating the popularity and the qualities of songs from this dataset. The following table, `spotify`, contains a list of tracks identified by their unique song ID along with attributes about that track. Here are the descriptions of the columns for your reference. (We will not be using all of these fields):

|Variable Name | Description |
|--------------|------------|
|`track_id` | Song unique ID |
|`track_name` | Song Name |
|`track_artist`| Song Artist |
|`track_popularity` | Song Popularity (0-100) where higher is better |
|`track_album_id`| Album unique ID |
|`track_album_name` | Song album name |
|`track_album_release_date`| Date when album released |
|`playlist_name`| Name of playlist |
|`playlist_id`| Playlist ID |
|`playlist_genre`| Playlist genre |
|`playlist_subgenre`| Playlist subgenre |
|`danceability`| Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable. |
|`energy`| Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy. |
|`key`| The estimated overall key of the track. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. If no key was detected, the value is -1. |
|`loudness`| The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 dB. |
|`mode`| Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0. |
|`speechiness`| Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audio book, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks. |
|`acousticness`| A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic. |
|`instrumentalness`| Predicts whether a track contains no vocals. "Ooh" and "aah" sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly "vocal". The closer the instrumentalness value is to 1.0, the greater likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0. |
|`liveness`| Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides strong likelihood that the track is live. |
|`valence`| A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry). |
|`tempo`| The overall estimated tempo of a track in beats per minute (BPM). In musical terminology, tempo is the speed or pace of a given piece and derives directly from the average beat duration. |
|`duration_ms`| Duration of song in milliseconds |
|`creation_year`| Year when album was released |
spotify = Table.read_table('data/spotify.csv')
spotify.show(10)
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Question: It's important to evaluate our data source. What do you know about the source? What motivations do they have for collecting this data? What data is missing?

*Insert answer here*

Question: Do you see any missing (nan) values? Why might they be there?

*Insert answer here*

Question: We want to learn more about the dataset. First, how many total rows are in this table? What does each row represent?
total_rows = spotify.num_rows
total_rows
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
*Insert answer here*

Conducting Exploratory Data Analysis

Visualizations help us to understand what the dataset is telling us. We will be using bar charts, scatter plots, and line plots to try to answer questions like the following:

> What audio features make a song popular and which artists have these songs? How have features changed over time?

Part 1: We'll start by looking at the length of songs using the `duration_ms` column. Right now, the `duration` array contains the length of each song in milliseconds. However, that's not a common measurement when describing the length of a song - often, we use minutes and seconds. Using array arithmetic, we can find the length of each song in seconds and in minutes. There are 1000 milliseconds in a second, and 60 seconds in a minute. First, we will convert milliseconds to seconds.
# Access the duration column as an array.
duration = spotify.column("duration_ms")
duration

# Divide the milliseconds by 1000.
duration_seconds = duration / 1000
duration_seconds

# Now convert duration_seconds to minutes.
duration_minutes = duration_seconds / 60
duration_minutes
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Question: How would we find the average duration (in minutes) of the songs in this dataset?
avg_song_length_mins = np.mean(duration_minutes)
avg_song_length_mins
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Now, we can add in the duration for each song (in minutes) by adding a column to our `spotify` table called `duration_min`. Run the following cell to do so.
# This cell will add the duration-in-minutes column we just created to our dataset.
spotify = spotify.with_columns('duration_min', duration_minutes)
spotify
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Artist Comparison

Let's see if we can find any meaningful difference in the average length of song for different artists. Note: Now that we have the duration for each song, you can compare average song length between two artists. Below is an example!
sam_smith = spotify.where("track_artist", are.equal_to("Sam Smith"))
sam_smith

sam_smith_mean = sam_smith.column("duration_min").mean()
sam_smith_mean

# In this cell, choose an artist you want to look at.
artist_name = spotify.where("track_artist", "Kanye West").column("duration_min").mean()
artist_name

# In this cell, choose another artist you want to compare it to.
artist_name_2 = spotify.where("track_artist", "Justin Bieber").column("duration_min").mean()
artist_name_2
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
This exercise was just one example of how you can play around with data and answer questions.

Top Genres and Artists

In this section, we are interested in the categorical information in our dataset, such as the playlist each song comes from or the genre. There are almost 33,000 songs in our dataset, so let's do some investigating. What are the most popular genres? We can figure this out by grouping by the playlist genre.

Question: How can we group our data by unique genres?
genre_counts = spotify.group('playlist_genre')
genre_counts
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Question: In our dataset, it looks like the most popular genre is EDM. Make a barchart below to show how the other genres compare.
genre_counts.barh('playlist_genre', 'count')
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Notice that it was difficult to analyze the above bar chart because the data wasn't sorted first. Let's sort our data and make a new bar chart so that it is much easier to make comparisons.
genre_counts_sorted = genre_counts.sort('count', descending = True)
genre_counts_sorted

genre_counts_sorted.barh('playlist_genre', 'count')
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Question: Was this what you expected? Which genre did you think would be the most popular? *Insert answer here.* Question: Let's take a look at all the artists in the dataset. We can take a look at the top 25 artists based on the number of songs they have in our dataset. We'll follow a similar method as we did when grouping by genre above. First, we will group our data by artist and sort by count.
# Here, we will group and sort in the same line.
artists_grouped = spotify.group('track_artist').sort('count', descending=True)
artists_grouped

top_artists = artists_grouped.take(np.arange(0, 25))
top_artists

top_artists.barh('track_artist', 'count')
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Question: What do you notice about the top 25 artists in our dataset? *insert answer here* Playlist Popularity In our dataset, each song is listed as belonging to a particular playlist, and each song is given a "popularity score", called the `track_popularity`. Using the `track_popularity`, we can calculate an *aggregate popularity* for each playlist, which is just the sum of all the popularity scores for the songs on the playlist.In order to create this aggregate popularity score, we need to group our data by playlist, and sum all of the popularity scores. First, we will create a subset of our `spotify` table using the `select` method. This lets us create a table with only the relevant columns we want. In this case, we only care about the name of the playlist and the popularity of each track. Keep in mind that each row still represents one track, even though we no longer have the track title in our table.
spotify_subset = spotify.select(['playlist_name', 'track_popularity'])
spotify_subset
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Note: By grouping, we can get the number of songs from each playlist.
playlists = spotify_subset.group('playlist_name')
playlists
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Question: We can use the group method again, this time passing in a second argument collect, which says that we want to take the sum rather than the count when grouping. This results in a table with the total aggregate popularity of each playlist.
# Run this cell.
total_playlist_popularity = spotify_subset.group('playlist_name', collect = sum)
total_playlist_popularity
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Similar to when we found duration in minutes, we can once again use the `column` method to access just the `track_popularity sum` column, and add it to our playlists table using the `with_column` method.
agg_popularity = total_playlist_popularity.column('track_popularity sum')
playlists = playlists.with_column('aggregate_popularity', agg_popularity)
playlists
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Question: Do you think that the most popular playlist would be the one with the highest aggregate_popularity score, or the one with the highest number of songs? We can sort our playlists table and compare the outputs.
playlists.sort('count', descending=True)
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Question: Now sort by aggregate popularity.
playlists.sort('aggregate_popularity', descending=True)
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Comparing these two outputs shows us that the "most popular playlist" depends on how we judge popularity. If we have a playlist that has only a few songs, but each of those songs is really popular, should that playlist be higher on the popularity rankings? By way of calculation, playlists with more songs will have a higher aggregate popularity, since more popularity values are being added together. We want a metric that will let us judge the actual quality and popularity of a playlist, not just how many songs it has.

In order to take into account the number of songs on each playlist, we can calculate the "average popularity" of each song on the playlist, or the proportion of aggregate popularity that each song takes up. We can do this by dividing `aggregate_popularity` by `count`. Remember, since the columns are just arrays, we can use array arithmetic to calculate these values.
# Run this cell to get the average.
avg_popularity = playlists.column('aggregate_popularity') / playlists.column('count')

# Now add it to the playlists table.
playlists = playlists.with_column('average_popularity', avg_popularity)
playlists
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Let's see if our "most popular playlist" changes when we judge popularity by the average popularity of the songs on a playlist.
playlists.sort('average_popularity', descending=True)
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Looking at the table above, we notice that 8/10 of the top 10 most popular playlists by the `average_popularity` metric are playlists with fewer than 100 songs. Just because a playlist has a lot of songs, or a high aggregate popularity, doesn't mean that the average popularity of a song on that playlist is high. Our new metric of `average_popularity` lets us rank playlists where the size of a playlist has no effect on its overall score. We can visualize the top 25 playlists by average popularity in a bar chart.
top_25_playlists = playlists.sort('average_popularity', descending=True).take(np.arange(25))
top_25_playlists.barh('playlist_name', 'average_popularity')
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Creating a new metric like `average_popularity` helps us more accurately and fairly measure the popularity of a playlist. We saw before when looking at the top 25 artists that they were all male. Now looking at the top playlists, we see that the current landscape of popular playlists and music may have an effect on the artists that are popular. For example, the RapCaviar is the second most popular playlist, and generally there tends to be fewer female rap artists than male. This shows that the current landscape of popular music can affect the types of artists topping the charts. Using prediction and inference to draw conclusions Now that we have some experience making these visualizations, let's go back to the visualizations others are working on to analyze Spotify data using more complex techniques.[Streaming Dashboard](https://public.tableau.com/profile/vaibhavi.gaekwad!/vizhome/Spotify_15858686831320/Dashboard1)[Audio Analysis Visualizer](https://developer.spotify.com/community/showcase/spotify-audio-analysis/) Music and culture are very intertwined so it's interesting to look at when songs are released and what is popular during that time. In this last exercise, you will be looking at the popularity of artists and tracks based on the dates you choose.Let's look back at the first five rows of our `spotify` table once more.
spotify.show(5)
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Question: Fill in the following cell the data according to the creation_year you choose.
# Fill in the year as an integer.
by_year = spotify.where("creation_year", are.equal_to(2018))
by_year
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Based on the dataset you have now, use previous techniques to find the most popular song during that year. First group by what you want to look at, for example, artist/playlist/track.
your_grouped = by_year.group("playlist_name")
pop_track = your_grouped.sort("count", descending = True)
pop_track

pop_track.take(np.arange(25)).barh("playlist_name", "count")
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
Question: Finally, use this cell if you want to look at the popularity of a track released on a specific date. It's very similar to the process above.
by_date = spotify.where("track_album_release_date", are.equal_to("2019-06-14"))
your_grouped = by_date.group("track_artist")
pop_track = your_grouped.sort("count", descending = True)
pop_track.take(np.arange(10)).barh("track_artist", "count")
_____no_output_____
BSD-3-Clause
Project_2/Spotify/Spotify_Solutions.ipynb
ds-modules/BUDS-SU21-Dev
[01/02/22] LTH on a Data Diet -- 2 Pass Initial Results
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pathlib import Path
import seaborn as sns

plt.style.use('default')
sns.set_theme(
    style='ticks',
    font_scale=1.2,
    rc={
        'axes.linewidth': '0.8',
        'axes.grid': True,
        'figure.constrained_layout.use': True,
        'grid.linewidth': '0.8',
        'legend.edgecolor': '1.0',
        'legend.fontsize': 'small',
        'legend.title_fontsize': 'small',
        'xtick.major.width': '0.8',
        'ytick.major.width': '0.8'
    },
)
_____no_output_____
MIT
nbs/22_01_02__LTH_Data_Diet_2_Pass_Initial.ipynb
mansheej/open_lth
Figure 0016
exp_meta_paths = [
    Path(f'/home/mansheej/open_lth_data/lottery_b279562b990bac9b852b17b287fca1ef/'),
    Path(f'/home/mansheej/open_lth_data/lottery_78a119e24960764e0de0964887d2597f/'),
    Path(f'/home/mansheej/open_lth_data/lottery_2cb77ad7e940a06d07b04a4b63fd718d/'),
    Path(f'/home/mansheej/open_lth_data/lottery_511428cbe43275064244db39edd0f60f/'),
]
exp_paths = [[emp / f'replicate_{i}' for i in range(1, 5)] for emp in exp_meta_paths]

plt.figure(figsize=(8.4, 4.8))
ls = []
for i, eps in enumerate(exp_paths):
    num_levels = 15
    acc_run_level = []
    for p in eps:
        acc_level = []
        for l in range(num_levels + 1):
            df = pd.read_csv(p / f'level_{l}/main/logger', header=None)
            acc_level.append(df[2].iloc[-2])
        acc_run_level.append(acc_level)
    acc_run_level = np.array(acc_run_level)
    x = np.arange(16)
    ys = acc_run_level
    y_mean, y_std = ys.mean(0), ys.std(0)
    c = f'C{i}'
    l = plt.plot(x, y_mean, c=c, alpha=0.9, linewidth=2)
    ls.append(l[0])
    plt.fill_between(x, y_mean + y_std, y_mean - y_std, color=c, alpha=0.2)
plt.legend(
    ls,
    [
        'Pre-train 1 Pass -> All 50000 Examples',
        'Pre-train 2 Passes -> All 50000 Examples',
        'Pre-train 2 Passes -> 32000 Smallest Scores at Epoch 3',
        'Pre-train 2 Passes -> 12800 Smallest Scores at Epoch 3',
    ],
)
plt.xlim(0, 15)
plt.ylim(0.815, 0.925)
plt.xticks(np.arange(0, 16, 2), [f'{f*100:.1f}' for f in 0.8**np.arange(0, 16, 2)])
plt.xlabel('% Weights Remaining')
plt.ylabel('Test Accuracy')
plt.title('CIFAR10 ResNet20: Pre-train LR = 0.4')
sns.despine()
plt.savefig('/home/mansheej/open_lth/figs/0016.svg')
plt.show()
_____no_output_____
MIT
nbs/22_01_02__LTH_Data_Diet_2_Pass_Initial.ipynb
mansheej/open_lth
Naive Bayes and Logistic Regression

In this tutorial, we'll explore training and evaluation of Naive Bayes and Logistic Regression classifiers. To start, we import the standard BIDMach class definitions.
import $exec.^.lib.bidmach_notebook_init
1 CUDA device found, CUDA version 8.0
BSD-3-Clause-No-Nuclear-License-2014
tutorials/NBandLR.ipynb
oeclint/BIDMach
Now we load some training and test data, and some category labels. The data come from a news collection from Reuters, and is a "classic" test set for classification. Each article belongs to one or more of 103 categories. The articles are represented as Bag-of-Words (BoW) column vectors. For a data matrix A, element A(i,j) holds the count of word i in document j. The category matrices have 103 rows, and a category matrix C has a one in position C(i,j) if document j is tagged with category i, or zero otherwise. To reduce the computing time and memory footprint, the training data have been sampled. The full collection has about 700k documents. Our training set has 60k. Since the document matrices contain counts of words, we use a min function to limit the count to "1", i.e. because we need binary features for naive Bayes.
val dict = "../data/rcv1/"
val traindata = loadSMat(dict+"docs.smat.lz4")
val traincats = loadFMat(dict+"cats.fmat.lz4")
val testdata = loadSMat(dict+"testdocs.smat.lz4")
val testcats = loadFMat(dict+"testcats.fmat.lz4")
min(traindata, 1, traindata)  // the first "traindata" argument is the input, the other is output
min(testdata, 1, testdata)
_____no_output_____
BSD-3-Clause-No-Nuclear-License-2014
tutorials/NBandLR.ipynb
oeclint/BIDMach
Get the word and document counts from the data. This turns out to be equivalent to a matrix multiply. For a data matrix A and category matrix C, we want all (cat, word) pairs (i,j) such that C(i,k) and A(j,k) are both 1 - this means that document k contains word j, and is also tagged with category i. Summing over all documents gives us$${\rm wordcatCounts(i,j)} = \sum_{k=1}^N C(i,k) A(j,k) = C * A^T$$Because we are doing independent binary classifiers for each class, we need to construct the counts for words not in the class (negwcounts).Finally, we add a smoothing count 0.5 to counts that could be zero.
val truecounts = traincats *^ traindata
val wcounts = truecounts + 0.5
val negwcounts = sum(truecounts) - truecounts + 0.5
val dcounts = sum(traincats,2)
_____no_output_____
BSD-3-Clause-No-Nuclear-License-2014
tutorials/NBandLR.ipynb
oeclint/BIDMach
Now compute the probabilities:
* pwordcat = probability that a word is in a cat, given the cat.
* pwordncat = probability of a word, given the complement of the cat.
* pcat = probability that doc is in a given cat.
* spcat = sum of pcat probabilities (> 1 because docs can be in multiple cats)
val pwordcat = wcounts / sum(wcounts,2)         // Normalize the rows to sum to 1.
val pwordncat = negwcounts / sum(negwcounts,2)  // Each row represents word probabilities conditioned on one cat.
val pcat = dcounts / traindata.ncols
val spcat = sum(pcat)
_____no_output_____
BSD-3-Clause-No-Nuclear-License-2014
tutorials/NBandLR.ipynb
oeclint/BIDMach
Now take the logs of those probabilities. Here we're using the formula presented here to match Naive Bayes to Logistic Regression for independent data.For each word, we compute the log of the ratio of the complementary word probability over the in-class word probability. For each category, we compute the log of the ratio of the complementary category probability over the current category probability.lpwordcat(j,i) represents $\log\left(\frac{{\rm Pr}(X_i|\neg c_j)}{{\rm Pr}(X_i|c_j)}\right)$while lpcat(j) represents $\log\left(\frac{{\rm Pr}(\neg c)}{{\rm Pr}(c)}\right)$
val lpwordcat = ln(pwordncat/pwordcat)  // ln is log to the base e (natural log)
val lpcat = ln((spcat-pcat)/pcat)
_____no_output_____
BSD-3-Clause-No-Nuclear-License-2014
tutorials/NBandLR.ipynb
oeclint/BIDMach
Here's where we apply Naive Bayes. The formula we're using is borrowed from here.

$${\rm Pr}(c|X_1,\ldots,X_k) = \frac{1}{1 + \frac{{\rm Pr}(\neg c)}{{\rm Pr}(c)}\prod_{i=1}^k\frac{{\rm Pr}(X_i|\neg c)}{{\rm Pr}(X_i|c)}}$$

and we can rewrite

$$\frac{{\rm Pr}(\neg c)}{{\rm Pr}(c)}\prod_{i=1}^k\frac{{\rm Pr}(X_i|\neg c)}{{\rm Pr}(X_i|c)}$$

as

$$\exp\left(\log\left(\frac{{\rm Pr}(\neg c)}{{\rm Pr}(c)}\right) + \sum_{i=1}^k\log\left(\frac{{\rm Pr}(X_i|\neg c)}{{\rm Pr}(X_i|c)}\right)\right) = \exp({\rm lpcat(j)} + {\rm lpwordcat(j,?)} * X)$$

for class number j and an input column $X$. This follows because an input column $X$ is a sparse vector with ones in the positions of the input features. The product ${\rm lpwordcat(j,?)} * X$ picks out the features occurring in the input document and adds the corresponding logs from lpwordcat.

Finally, we take the exponential above and fold it into the formula $P(c_j|X_1,\ldots,X_k) = 1/(1+\exp(\cdots))$. This gives us a matrix of predictions. preds(i,j) = prediction of membership in category i for test document j.
val logodds = lpwordcat * testdata + lpcat
val preds = 1 / (1 + exp(logodds))
_____no_output_____
BSD-3-Clause-No-Nuclear-License-2014
tutorials/NBandLR.ipynb
oeclint/BIDMach
To measure the accuracy of the predictions above, we can compute the probability that the classifier outputs the right label. We used this formula in class for the expected accuracy for logistic regression. The "dot arrow" operator takes dot product along rows:
val acc = ((preds ∙→ testcats) + ((1-preds) ∙→ (1-testcats)))/preds.ncols
acc.t
_____no_output_____
BSD-3-Clause-No-Nuclear-License-2014
tutorials/NBandLR.ipynb
oeclint/BIDMach
Raw accuracy is not a good measure in most cases. When there are few positives (instances in the class vs. its complement), accuracy simply drives down false-positive rate at the expense of false-negative rate. In the worst case, the learner may always predict "no" and still achieve high accuracy. ROC curves and ROC Area Under the Curve (AUC) are much better. Here we compute the ROC curves from the predictions above. We need:* scores - the predicted quality from the formula above.* good - 1 for positive instances, 0 for negative instances.* bad - complement of good. * npoints (100) - specifies the number of X-axis points for the ROC plot. itest specifies which of the categories to plot for. We chose itest=6 because that category has one of the highest positive rates, and gives the most stable accuracy plots.
val itest = 6 val scores = preds(itest,?) val good = testcats(itest,?) val bad = 1-testcats(itest,?) val rr =roc(scores,good,bad,100)
_____no_output_____
BSD-3-Clause-No-Nuclear-License-2014
tutorials/NBandLR.ipynb
oeclint/BIDMach
> TODO 1: In the cell below, write an expression to derive the ROC Area under the curve (AUC) given the curve rr. rr gives the ROC curve y-coordinates at 100 evenly-spaced X-values from 0 to 1.0.
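Since rr holds the ROC y-coordinates at evenly spaced x-values on [0, 1], one way to approximate the area under the curve is simply to average those heights (a Riemann-sum view). A NumPy sketch with a stand-in curve, meant to illustrate the idea rather than to hand you the answer:

import numpy as np

rr = np.linspace(0.0, 1.0, 100) ** 0.5   # stand-in ROC y-values (assumption, for illustration only)
auc_approx = rr.mean()                   # average height at evenly spaced x-values approximates the area
# np.trapz(rr, dx=1.0 / (len(rr) - 1)) would give a slightly more careful trapezoidal estimate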
// auc =
_____no_output_____
BSD-3-Clause-No-Nuclear-License-2014
tutorials/NBandLR.ipynb
oeclint/BIDMach
> TODO 2: In the cell below, write the value of AUC returned by the expression above. Logistic Regression Now let's train a logistic classifier on the same data. BIDMach has an umbrella classifier called GLM for Generalized Linear Model. GLM includes linear regression, logistic regression (with log accuracy or direct accuracy optimization), and SVM. The learner function accepts these arguments:
* traindata: the training data in the same format as for Naive Bayes
* traincats: the training category labels
* testdata: the test input data
* predcats: a container for the predictions generated by the model
* modeltype (GLM.maxp in the cell below): an integer that specifies the type of model (0=linear, 1=logistic log accuracy, 2=logistic accuracy, 3=SVM).

Note that the cell below passes only the training data, labels and model type to GLM.learner; test-set predictions are produced afterwards with GLM.predictor. We'll construct the learner and then look at its options:
val predcats = zeros(testcats.nrows, testcats.ncols) val (mm,mopts) = GLM.learner(traindata, traincats, GLM.maxp) mopts.what
Option Name Type Value =========== ==== ===== addConstFeat boolean false aopts Opts null autoReset boolean true batchSize int 10000 checkPointFile String null checkPointInterval float 0.0 clipByValue float -1.0 cumScore int 0 debug int 0 debugCPUmem boolean false debugMem boolean false dim int 256 doAllReduce boolean false doubleScore boolean false doVariance boolean false epsilon float 1.0E-5 evalStep int 11 featThreshold Mat null featType int 1 gsq_decay float -1.0 hashBound1 int 1000000 hashBound2 int 1000000 hashFeatures int 0 initsumsq float 1.0E-5 iweight FMat null l2reg FMat null langevin float 0.0 lim float 0.0 links IMat 2,2,2,2,2,2,2,2,2,2,... logDataSink DataSink null logfile String log.txt logFuncs Function2[] null lr_policy Function3 null lrate FMat 1 mask FMat null matrixOfScores boolean false max_grad_norm float -1.0 mixinInterval int 1 naturalLambda float 0.0 nesterov_vel_decay FMat null nNatural int 1 npasses int 2 nzPerColumn int 0 pauseAt long -1 pexp FMat 0 pstep float 0.01 putBack int -1 r1nmats int 1 reg1weight FMat 1.0000e-07 resFile String null rmask FMat null sample float 1.0 sizeMargin float 3.0 startBlock int 8000 targets FMat null targmap FMat null texp FMat 0.50000 trace int 0 updateAll boolean false useCache boolean true useDouble boolean false useGPU boolean true useGPUcache boolean true vel_decay FMat null vexp FMat 0.50000 waitsteps int 3
BSD-3-Clause-No-Nuclear-License-2014
tutorials/NBandLR.ipynb
oeclint/BIDMach
The most important options are:* lrate: the learning rate* batchSize: the minibatch size* npasses: the number of passes over the datasetWe'll use the following parameters for this training run.
mopts.lrate=1.0 mopts.batchSize=1000 mopts.npasses=2 mm.train val (nn, nopts) = GLM.predictor(mm.model, testdata) nn.predict val predcats = FMat(nn.preds(0)) val lacc = (predcats ∙→ testcats + (1-predcats) ∙→ (1-testcats))/preds.ncols lacc.t mean(lacc)
_____no_output_____
BSD-3-Clause-No-Nuclear-License-2014
tutorials/NBandLR.ipynb
oeclint/BIDMach
Since we have the accuracy scores for both Naive Bayes and Logistic regression, we can plot both of them on the same axes. Naive Bayes is red, Logistic regression is blue. The x-axis is the category number from 0 to 102. The y-axis is the absolute accuracy of the predictor for that category.
val axaxis = row(0 until 103) plot(axaxis, acc, axaxis, lacc)
_____no_output_____
BSD-3-Clause-No-Nuclear-License-2014
tutorials/NBandLR.ipynb
oeclint/BIDMach
> TODO 3: With the full training set (700k training documents), Logistic Regression is noticeably more accurate than Naive Bayes in every category. What do you observe in the plot above? Why do you think this is? Next we'll compute the ROC plot and ROC area (AUC) for Logistic regression for category itest.
val lscores = predcats(itest,?) val lrr =roc(lscores,good,bad,100) val auc = mean(lrr) // Fill in using the formula you used before
_____no_output_____
BSD-3-Clause-No-Nuclear-License-2014
tutorials/NBandLR.ipynb
oeclint/BIDMach
We computed the ROC curve for Naive Bayes earlier, so now we can plot them on the same axes. Naive Bayes is once again in red, Logistic regression in blue.
val rocxaxis = row(0 until 101) plot(rocxaxis, rr, rocxaxis, lrr)
_____no_output_____
BSD-3-Clause-No-Nuclear-License-2014
tutorials/NBandLR.ipynb
oeclint/BIDMach
I'll be demonstrating just the classification problem; you can build a regression model following a very similar process
# data prep from previous module

# imports needed by this and the following cells
import pandas as pd
import numpy as np

ci_train=pd.read_csv('census_income.csv')

# if you have test data, you can combine it as shown in the earlier modules
ci_train.head()

pd.crosstab(ci_train['education'],ci_train['education.num'])

ci_train.drop(['education'],axis=1,inplace=True)

ci_train['Y'].value_counts().index

ci_train['Y']=(ci_train['Y']==' >50K').astype(int)

cat_cols=ci_train.select_dtypes(['object']).columns
cat_cols

ci_train.shape

# create 0/1 dummy columns for the frequent categories of each categorical column
for col in cat_cols:
    freqs=ci_train[col].value_counts()
    k=freqs.index[freqs>500][:-1]
    for cat in k:
        name=col+'_'+cat
        ci_train[name]=(ci_train[col]==cat).astype(int)
    del ci_train[col]
    print(col)

ci_train.shape

ci_train.isnull().sum()

x_train=ci_train.drop(['Y'],axis=1)
y_train=ci_train['Y']
_____no_output_____
Apache-2.0
Census_income.ipynb
umairnsr87/deploying-ml-model-with-django
Hyper Parameters For Decision Trees
* criterion : there are two options available, "entropy" and "gini". These are the homogeneity measures that we discussed. By default "gini" is used.
* max_depth : the maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain fewer than min_samples_split samples. Ignored if ``max_leaf_nodes`` is not None. We'll find the optimal tree size through cross validation.
* min_samples_split : the minimum number of samples required to split an internal node. Defaults to two; it is a good idea to keep it slightly higher in order to reduce overfitting of the data. Recommended values are between 5 and 10.
* min_samples_leaf : the minimum number of samples required to be at a leaf node. This defaults to 1. If this number is higher and a split would result in a leaf node having fewer samples than specified, then that split is cancelled.
* max_leaf_nodes : this parameter controls the size of the tree; we'll be finding the optimal value of this through cross validation.
* class_weight : this defaults to None, in which case each class is given equal weightage. If the goal of the problem is good classification rather than raw accuracy, you should set this to "balanced", in which case class weights are assigned inversely proportional to class frequencies in the input data.
* random_state : this is used to reproduce random results.

A short hand-configured example follows to make these knobs concrete; the randomized search afterwards explores the same settings automatically.
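This is an illustrative sketch only; the specific values are assumptions, not tuned choices:

from sklearn.tree import DecisionTreeClassifier

# Illustrative settings only -- the cross-validated search below picks the real values
example_tree = DecisionTreeClassifier(
    criterion='entropy',       # homogeneity measure: 'gini' (default) or 'entropy'
    max_depth=10,              # cap on tree depth; None lets nodes expand until leaves are pure
    min_samples_split=10,      # a node needs at least this many samples before it can be split
    min_samples_leaf=5,        # reject any split that would leave a leaf with fewer samples than this
    class_weight='balanced',   # weight classes inversely to their frequencies
    random_state=42)           # fix the seed so results can be reproduced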
from sklearn.model_selection import RandomizedSearchCV params={ 'class_weight':[None,'balanced'], 'criterion':['entropy','gini'], 'max_depth':[None,5,10,15,20,30,50,70], 'min_samples_leaf':[1,2,5,10,15,20], 'min_samples_split':[2,5,10,15,20] } 2*2*8*6*5 from sklearn.tree import DecisionTreeClassifier clf=DecisionTreeClassifier() random_search=RandomizedSearchCV(clf,cv=10, param_distributions=params, scoring='roc_auc', n_iter=10 ) x_train["capital.gain"]=x_train["capital.gain"].fillna(0) x_train["capital.loss"]=x_train["capital.loss"].fillna(0) x_train["hours.per.week"]=x_train["hours.per.week"].fillna(40) x_train.isna().sum() random_search.fit(x_train,y_train)
_____no_output_____
Apache-2.0
Census_income.ipynb
umairnsr87/deploying-ml-model-with-django
Printing the tree model is a little tricky in Python. We'll have to output our tree to a .dot file using the graphviz format. From there, the graphviz.Source function can render the tree for display (a hedged sketch of that step follows the cell below). Here is how:
random_search.best_estimator_

# utility function to report the best parameter combinations found by the search
def report(results, n_top=3):
    for i in range(1, n_top + 1):
        candidates = np.flatnonzero(results['rank_test_score'] == i)
        for candidate in candidates:
            print("Model with rank: {0}".format(i))
            print("Mean validation score: {0:.3f} (std: {1:.5f})".format(
                  results['mean_test_score'][candidate],
                  results['std_test_score'][candidate]))
            print("Parameters: {0}".format(results['params'][candidate]))
            print("")

report(random_search.cv_results_,5)

dtree=random_search.best_estimator_
dtree.fit(x_train,y_train)

# persist the fitted tree with pickle
import pickle
filename = 'census_income'
outfile = open(filename,'wb')
pickle.dump(dtree,outfile)
outfile.close()

# reload it and check that predictions still work
filename = 'census_income'
infile = open(filename,'rb')
new_census_model = pickle.load(infile)
infile.close()

predict=new_census_model.predict(x_train)

from sklearn.metrics import accuracy_score
accuracy_score(predict,y_train)

# export the tree to a .dot file (sklearn's tree module was not imported above, so import it here)
from sklearn import tree

dotfile = open("mytree.dot", 'w')
tree.export_graphviz(dtree,out_file=dotfile,
                     feature_names=x_train.columns,
                     class_names=["0","1"],
                     proportion=True)
dotfile.close()
_____no_output_____
Apache-2.0
Census_income.ipynb
umairnsr87/deploying-ml-model-with-django
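As promised above, if the graphviz Python package is installed (an assumption; it is not imported elsewhere in this notebook), the .dot file written by the cell above can be rendered inline with graphviz.Source, roughly like this:

import graphviz

with open("mytree.dot") as dotfile:
    dot_source = dotfile.read()
graphviz.Source(dot_source)   # as the last expression in a notebook cell, this renders the tree inline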
Open the mytree.dot file in a simple text editor and copy and paste its contents here to visualise your tree: http://webgraphviz.com

Additional Hyper parameters for RandomForests
* n_estimators : number of trees in the forest. Defaults to 10; a good starting point is 100. It's one of the hyper parameters we'll tune: we'll see how to search through a multidimensional hyper parameter space in order to find an optimal combination through randomised grid search.
* max_features : number of features being considered for rule selection at each split. Look at the documentation for the defaults.
* bootstrap : boolean value; whether bootstrap samples are used when building trees.
from sklearn.ensemble import RandomForestClassifier clf = RandomForestClassifier() # this here is the base classifier we are going to try # we will be supplying different parameter ranges to our randomSearchCV which in turn # will pass it on to this classifier # Utility function to report best scores. This simply accepts grid scores from # our randomSearchCV/GridSearchCV and picks and gives top few combination according to # their scores # RandomSearchCV/GridSearchCV accept parameters values as dictionaries. # In example given below we have constructed dictionary for #different parameter values that we want to # try for randomForest model param_dist = {"n_estimators":[100,200,300,500,700,1000], "max_features": [5,10,20,25,30,35], "bootstrap": [True, False], 'class_weight':[None,'balanced'], 'criterion':['entropy','gini'], 'max_depth':[None,5,10,15,20,30,50,70], 'min_samples_leaf':[1,2,5,10,15,20], 'min_samples_split':[2,5,10,15,20] } x_train.shape 960*6*6*2 # run randomized search n_iter_search = 10 # n_iter parameter of RandomizedSeacrhCV controls, how many # parameter combination will be tried; out of all possible given values random_search = RandomizedSearchCV(clf, param_distributions=param_dist, n_iter=n_iter_search,scoring='roc_auc',cv=5) random_search.fit(x_train, y_train) random_search.best_estimator_
_____no_output_____
Apache-2.0
Census_income.ipynb
umairnsr87/deploying-ml-model-with-django
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini', max_depth=50, max_features=10, max_leaf_nodes=None, min_impurity_split=1e-07, min_samples_leaf=10, min_samples_split=20, min_weight_fraction_leaf=0.0, n_estimators=300, n_jobs=1, oob_score=False, random_state=None, verbose=0, warm_start=False) **Note: This is a result from one of the runs, you can very well get different results from a different run. Your results need not match with this.**
report(random_search.cv_results_,5) # select the best values from results above, they will vary slightly with each run rf=RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini', max_depth=50, max_features=10, max_leaf_nodes=None, min_impurity_split=1e-07, min_samples_leaf=10, min_samples_split=20, min_weight_fraction_leaf=0.0, n_estimators=300, n_jobs=1, oob_score=False, random_state=None, verbose=0, warm_start=False) rf.fit(x_train,y_train)
_____no_output_____
Apache-2.0
Census_income.ipynb
umairnsr87/deploying-ml-model-with-django
Feature Importance
feat_imp_df=pd.DataFrame({'features':x_train.columns,'importance':rf.feature_importances_}) feat_imp_df.sort_values('importance',ascending=False)
_____no_output_____
Apache-2.0
Census_income.ipynb
umairnsr87/deploying-ml-model-with-django
Partial Dependence Plot
var_name='education.num' preds=rf.predict_proba(x_train)[:,1] # part_dep_data var_data=pd.DataFrame({'var':x_train[var_name],'response':preds}) import seaborn as sns sns.lmplot(x='var',y='response',data=var_data,fit_reg=False) import statsmodels.api as sm smooth_data=sm.nonparametric.lowess(var_data['response'],var_data['var']) # smooth_data df=pd.DataFrame({'response':smooth_data[:,1],var_name:smooth_data[:,0]}) sns.lmplot(x=var_name,y='response',data=df,fit_reg=False)
_____no_output_____
Apache-2.0
Census_income.ipynb
umairnsr87/deploying-ml-model-with-django
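Recent scikit-learn releases also ship a built-in partial dependence helper. If such a release is available (an assumption; this notebook does not pin a version), the manual lowess-based plot above could be cross-checked against something like:

# Requires scikit-learn >= 1.0 (assumption); older releases exposed plot_partial_dependence instead
from sklearn.inspection import PartialDependenceDisplay

PartialDependenceDisplay.from_estimator(rf, x_train, features=['education.num'])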
Data Cleaning
# Checking for consistent column names
df.columns

# Checking the datatypes
df.dtypes

# Checking for missing values (NaN)
df.isnull().sum()

# Checking the date column
df["DATE"]

df.AUTHOR

# Convert the author name to first name and last name
#df[["FIRSTNAME","LASTNAME"]] = df['AUTHOR'].str.split(expand=True)
_____no_output_____
Unlicense
Youtube_Comments-checkpoint.ipynb
Vineeta12345/spam-detection
Working With Text Content
df_data = df[["CONTENT","CLASS"]] df_data.columns df_x = df_data['CONTENT'] df_y = df_data['CLASS']
_____no_output_____
Unlicense
Youtube_Comments-checkpoint.ipynb
Vineeta12345/spam-detection
Feature Extraction From Text CountVectorizer TfidfVectorizer
from sklearn.feature_extraction.text import CountVectorizer

# Small example of how CountVectorizer turns text into token counts
cv = CountVectorizer()
ex = cv.fit_transform(["Great song but check this out","What is this song?"])
ex.toarray()
cv.get_feature_names()

# Extract features from the comments with CountVectorizer
corpus = df_x
cv = CountVectorizer()
X = cv.fit_transform(corpus) # Fit the data
X.toarray()

# get the feature names
cv.get_feature_names()
_____no_output_____
Unlicense
Youtube_Comments-checkpoint.ipynb
Vineeta12345/spam-detection
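The feature-extraction heading above also names TfidfVectorizer. As a hedged alternative to the raw counts, the same corpus could be vectorized with tf-idf weights (whether this actually improves the classifier is an empirical question for this dataset):

from sklearn.feature_extraction.text import TfidfVectorizer

tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(corpus)   # same corpus as above, weighted by term frequency - inverse document frequency
X_tfidf.shape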
Model Building
from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, df_y, test_size=0.33, random_state=42) X_train # Naive Bayes Classifier from sklearn.naive_bayes import MultinomialNB clf = MultinomialNB() clf.fit(X_train,y_train) clf.score(X_test,y_test) # Accuracy of our Model print("Accuracy of Model",clf.score(X_test,y_test)*100,"%") ## Predicting with our model clf.predict(X_test) # Sample Prediciton comment = ["Check this out"] vect = cv.transform(comment).toarray() clf.predict(vect) class_dict = {'ham':0,'spam':1} class_dict.values() if clf.predict(vect) == 1: print("Spam") else: print("Ham") # Sample Prediciton 2 comment1 = ["Great song Friend"] vect = cv.transform(comment1).toarray() clf.predict(vect)
_____no_output_____
Unlicense
Youtube_Comments-checkpoint.ipynb
Vineeta12345/spam-detection
Save The Model
import pickle naivebayesML = open("YtbSpam_model.pkl","wb") pickle.dump(clf,naivebayesML) naivebayesML.close() # Load the model ytb_model = open("YtbSpam_model.pkl","rb") new_model = pickle.load(ytb_model) new_model # Sample Prediciton 3 comment2 = ["Hey Music Fans I really appreciate all of you,but see this song too"] vect = cv.transform(comment2).toarray() new_model.predict(vect) if new_model.predict(vect) == 1: print("Spam") else: print("Ham")
Spam
Unlicense
Youtube_Comments-checkpoint.ipynb
Vineeta12345/spam-detection
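As an aside on the pickle-based persistence above: scikit-learn's documentation also suggests joblib for saving fitted models. If joblib is installed (an assumption), the equivalent would look roughly like this:

from joblib import dump, load

dump(clf, "YtbSpam_model.joblib")          # save the fitted classifier
restored = load("YtbSpam_model.joblib")    # reload it later
restored.predict(cv.transform(["Check this out"]).toarray())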
Self-Driving Car Engineer Nanodegree Project: **Finding Lane Lines on the Road** ***In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/322/view) for this project.---Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**--- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**--- Your output should look something like this (above) after detecting line segments using the helper functions below Your goal is to connect/average/extrapolate line segments to get output like this **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** Import Packages
#importing some useful packages import matplotlib.pyplot as plt import matplotlib.image as mpimg import numpy as np import cv2 %matplotlib inline
_____no_output_____
MIT
P1.ipynb
RedaMokarrab/Nano_Degree_Self_Driving
Read in an Image
#reading in an image image = mpimg.imread('test_images/solidWhiteRight.jpg') #printing out some stats and plotting print('This image is:', type(image), 'with dimensions:', image.shape) plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)
MIT
P1.ipynb
RedaMokarrab/Nano_Degree_Self_Driving
Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**`cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images `cv2.cvtColor()` to grayscale or change color `cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson!
import math def grayscale(img): """Applies the Grayscale transform This will return an image with only one color channel but NOTE: to see the returned image as grayscale (assuming your grayscaled image is called 'gray') you should call plt.imshow(gray, cmap='gray')""" return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) # Or use BGR2GRAY if you read an image with cv2.imread() # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) def canny(img, low_threshold, high_threshold): """Applies the Canny transform""" return cv2.Canny(img, low_threshold, high_threshold) def gaussian_blur(img, kernel_size): """Applies a Gaussian Noise kernel""" return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0) def region_of_interest(img, vertices): """ Applies an image mask. Only keeps the region of the image defined by the polygon formed from `vertices`. The rest of the image is set to black. `vertices` should be a numpy array of integer points. """ #defining a blank mask to start with mask = np.zeros_like(img) #defining a 3 channel or 1 channel color to fill the mask with depending on the input image if len(img.shape) > 2: channel_count = img.shape[2] # i.e. 3 or 4 depending on your image ignore_mask_color = (255,) * channel_count else: ignore_mask_color = 255 #filling pixels inside the polygon defined by "vertices" with the fill color cv2.fillPoly(mask, vertices, ignore_mask_color) #returning the image only where mask pixels are nonzero masked_image = cv2.bitwise_and(img, mask) return masked_image def draw_lines(img, lines, color=[255, 0, 0], thickness=8): #create variables to hold the x, y and slopes for all the lines, left_x=[] left_y=[] left_slopes=[] right_x=[] right_y=[] right_slopes=[] right_limits=[5,0.5] left_limits=[-5,-0.5] #variables to lane points left_lane=[0,0,0,0]#x1,y1,x2,y2 right_lane=[0,0,0,0] #get image size to use the left and right corners for starting the line imshape = img.shape y_size = img.shape[0] #loop over all the lines to extract the ones in interest ( filter out the ones that are not expected) for line in lines: for x1,y1,x2,y2 in line: [slope, intercept] = np.polyfit((x1,x2), (y1,y2), 1) #filter only on the slopes that make sense to a lane if((slope<left_limits[1]) and (slope>left_limits[0])): # if line slope < 0 then it belongs to left lane (y=mx+b) left_x+=[x1,x2] left_y+=[y1,y2] left_slopes+=[slope] elif((slope<right_limits[0]) and (slope>right_limits[1])): # if line slope > 0 then it belongs to right lane (y=mx+b) where m is - right_x+=[x1,x2] right_y+=[y1,y2] right_slopes+=[slope] #average each line points to get the line equation which best describes the lane left_x_mean= np.mean(left_x) left_y_mean= np.mean(left_y) left_slope_mean=np.mean(left_slopes) left_intercept_mean =left_y_mean - (left_slope_mean * left_x_mean) right_x_mean= np.mean(right_x) right_y_mean= np.mean(right_y) right_slope_mean=np.mean(right_slopes) right_intercept_mean =right_y_mean - (right_slope_mean * right_x_mean) #get start and end of each line to draw the left lane and right lane using the equations above #only process incase size is > 0 (to fix challenge error ) if((np.size(left_y))>0) : left_lane[0]=int((np.min(left_y)-left_intercept_mean)/left_slope_mean)# x=(y-b)/m left_lane[2]=int((y_size-left_intercept_mean)/left_slope_mean)# left_lane[1]=int(np.min(left_y)) left_lane[3]=y_size #got errors seems that function only takes int cv2.line(img, (left_lane[0],left_lane[1] ), (left_lane[2],left_lane[3]), color, thickness) if((np.size(right_y))>0): 
right_lane[0]=int((np.min(right_y)-right_intercept_mean)/right_slope_mean)# x=(y-b)/m right_lane[2]=int((y_size-right_intercept_mean)/right_slope_mean)# right_lane[1]=int(np.min(right_y)) right_lane[3]=y_size #got errors seems that function only takes int cv2.line(img, (right_lane[0],right_lane[1]), (right_lane[2],right_lane[3]), color, thickness) def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap): """ `img` should be the output of a Canny transform. Returns an image with hough lines drawn. """ lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap) line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8) draw_lines(line_img, lines) return line_img # Python 3 has support for cool math symbols. def weighted_img(img, initial_img, α=0.8, β=1., γ=0.): """ `img` is the output of the hough_lines(), An image with lines drawn on it. Should be a blank image (all black) with lines drawn on it. `initial_img` should be the image before any processing. The result image is computed as follows: initial_img * α + img * β + γ NOTE: initial_img and img must be the same shape! """ return cv2.addWeighted(initial_img, α, img, β, γ) def convert_hls(image): return cv2.cvtColor(image, cv2.COLOR_RGB2HLS) def select_color(image ): # find colors that are in provided range and highlight it in red white_lo= np.array([100,100,200]) white_hi= np.array([255,255,255]) yellow_lo_RGB= np.array([225,180,0]) yellow_hi_RGB= np.array([255,255,170]) yellow_lo_HLS= np.array([20,120,80]) yellow_hi_HLS= np.array([45,200,255]) rgb_image = np.copy(image) hls_image = convert_hls(image) #plt.figure() #plt.imshow(rgb_image) #plt.figure() #plt.imshow(hls_image) mask_1=cv2.inRange(rgb_image,white_lo,white_hi) #filter on rgb white mask_2=cv2.inRange(hls_image,yellow_lo_RGB,yellow_hi_RGB) #filter on rgb yellow mask_3=cv2.inRange(hls_image,yellow_lo_HLS,yellow_hi_HLS) #filter on hls yellow mask = mask_1+mask_2+mask_3 #plt.figure() #plt.imshow(mask) result = cv2.bitwise_and(image,image, mask= mask) #plt.figure() #plt.imshow(result) return result
_____no_output_____
MIT
P1.ipynb
RedaMokarrab/Nano_Degree_Self_Driving
Test ImagesBuild your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.**
import os os.listdir("test_images/")
_____no_output_____
MIT
P1.ipynb
RedaMokarrab/Nano_Degree_Self_Driving
Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
# TODO: Build your pipeline that will draw lane lines on the test_images # then save them to the test_images_output directory. #output all processed images for documentation images=os.listdir("test_images/") for filename in images: #first read image and change to gray scale image = mpimg.imread("test_images/"+filename) #color select for lane to improve performance but didn't work #highlight white and yellow colors for better late detection highlighted_color=select_color(image) #change to gray scale image_gray = cv2.cvtColor(highlighted_color,cv2.COLOR_RGB2GRAY) cv2.imwrite("test_images_output/"+filename[:-4]+"_gray.jpg",image_gray) #Apply guassian filter image_blur_gray= gaussian_blur(image_gray, 5) cv2.imwrite("test_images_output/"+filename[:-4]+"_blur.jpg",image_blur_gray) # Using canny edges ( used threshoulds achieved from the exercise ) image_edge = canny(image_blur_gray, 100, 200) cv2.imwrite("test_images_output/"+filename[:-4]+"_edge.jpg",image_edge) # Add filter region (added vehicle hood area) imshape = image.shape vertices = np.array([[(200,imshape[0]-50),(420, 330), (580, 330), (imshape[1]-200,imshape[0]-50)]], dtype=np.int32) image_masked_edges = region_of_interest(image_edge,vertices) cv2.imwrite("test_images_output/"+filename[:-4]+"_masked_edge.jpg",image_masked_edges) # Define the Hough transform parameters # Make a blank the same size as our image to draw on rho = 1 # distance resolution in pixels of the Hough grid theta = np.pi/180 # angular resolution in radians of the Hough grid threshold = 40 # minimum number of votes (intersections in Hough grid cell) min_line_length = 20 #minimum number of pixels making up a line max_line_gap = 100 # maximum gap in pixels between connectable line image_hough_lines = hough_lines(image_masked_edges, rho, theta, threshold, min_line_length, max_line_gap) image_hough_lines = cv2.cvtColor(image_hough_lines, cv2.COLOR_RGB2BGR) cv2.imwrite("test_images_output/"+filename[:-4]+"_Hough.jpg",image_hough_lines) #create image with overlay lines overlayed_image = weighted_img(image_hough_lines,image) cv2.imwrite("test_images_output/"+filename[:-4]+"_Final_overlay.jpg",overlayed_image)
C:\Users\Redaaaaaa\Anaconda3\lib\site-packages\numpy\core\fromnumeric.py:3373: RuntimeWarning: Mean of empty slice. out=out, **kwargs) C:\Users\Redaaaaaa\Anaconda3\lib\site-packages\numpy\core\_methods.py:170: RuntimeWarning: invalid value encountered in double_scalars ret = ret.dtype.type(ret / rcount)
MIT
P1.ipynb
RedaMokarrab/Nano_Degree_Self_Driving
Test on VideosYou know what's cooler than drawing lanes over images? Drawing lanes over video!We can test our solution on two provided videos:`solidWhiteRight.mp4``solidYellowLeft.mp4`**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.****If you get an error that looks like this:**```NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download()```**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
# Import everything needed to edit/save/watch video clips from moviepy.editor import VideoFileClip from IPython.display import HTML def process_image(image): # NOTE: The output you return should be a color image (3 channel) for processing video below # TODO: put your pipeline here, #color select for lane to improve performance but didn't work #highlight white and yellow colors for better late detection highlighted_color=select_color(image) #Change image to Gray image_gray = cv2.cvtColor(highlighted_color,cv2.COLOR_RGB2GRAY) #Apply guassian filter image_blur_gray= gaussian_blur(image_gray, 5) # Using canny edges ( used threshoulds achieved from the exercise ) image_edge = canny(image_blur_gray, 100, 200) # Add filter region imshape = image.shape vertices = np.array([[(200,imshape[0]-50),(420, 330), (580, 330), (imshape[1]-200,imshape[0]-50)]], dtype=np.int32) image_masked_edges = region_of_interest(image_edge,vertices) # Define the Hough transform parameters # Make a blank the same size as our image to draw on rho = 1 # distance resolution in pixels of the Hough grid theta = np.pi/180 # angular resolution in radians of the Hough grid threshold = 40 # minimum number of votes (intersections in Hough grid cell) min_line_length = 20 #minimum number of pixels making up a line max_line_gap = 100 # maximum gap in pixels between connectable line image_hough_lines = hough_lines(image_masked_edges, rho, theta, threshold, min_line_length, max_line_gap) #create image with overlay lines overlayed_image = weighted_img(image_hough_lines,image) # you should return the final output (image where lines are drawn on lanes) return overlayed_image
_____no_output_____
MIT
P1.ipynb
RedaMokarrab/Nano_Degree_Self_Driving
Let's try the one with the solid white lane on the right first ...
white_output = 'test_videos_output/solidWhiteRight.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5) clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4") white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!! %time white_clip.write_videofile(white_output, audio=False)
t: 0%| | 0/221 [00:00<?, ?it/s, now=None]
MIT
P1.ipynb
RedaMokarrab/Nano_Degree_Self_Driving
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(white_output))
_____no_output_____
MIT
P1.ipynb
RedaMokarrab/Nano_Degree_Self_Driving
Improve the draw_lines() function**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".****Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky!
yellow_output = 'test_videos_output/solidYellowLeft.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5) clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4') yellow_clip = clip2.fl_image(process_image) %time yellow_clip.write_videofile(yellow_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(yellow_output))
_____no_output_____
MIT
P1.ipynb
RedaMokarrab/Nano_Degree_Self_Driving
Writeup and SubmissionIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. Optional ChallengeTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
challenge_output = 'test_videos_output/challenge.mp4' ## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video ## To do so add .subclip(start_second,end_second) to the end of the line below ## Where start_second and end_second are integer values representing the start and end of the subclip ## You may also uncomment the following line for a subclip of the first 5 seconds ##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5) clip3 = VideoFileClip('test_videos/challenge.mp4') challenge_clip = clip3.fl_image(process_image) %time challenge_clip.write_videofile(challenge_output, audio=False) HTML(""" <video width="960" height="540" controls> <source src="{0}"> </video> """.format(challenge_output))
_____no_output_____
MIT
P1.ipynb
RedaMokarrab/Nano_Degree_Self_Driving
Data Science Unit 1 Sprint Challenge 1 Loading, cleaning, visualizing, and analyzing data

In this sprint challenge you will look at a dataset of the survival of patients who underwent surgery for breast cancer. http://archive.ics.uci.edu/ml/datasets/Haberman%27s+Survival

Data Set Information: The dataset contains cases from a study that was conducted between 1958 and 1970 at the University of Chicago's Billings Hospital on the survival of patients who had undergone surgery for breast cancer.

Attribute Information:
1. Age of patient at time of operation (numerical)
2. Patient's year of operation (year - 1900, numerical)
3. Number of positive axillary nodes detected (numerical)
4. Survival status (class attribute) -- 1 = the patient survived 5 years or longer -- 2 = the patient died within 5 years

Sprint challenges are evaluated based on satisfactory completion of each part. It is suggested you work through it in order, getting each aspect reasonably working, before trying to deeply explore, iterate, or refine any given step. Once you get to the end, if you want to go back and improve things, go for it!

Part 1 - Load and validate the data
- Load the data as a `pandas` data frame.
- Validate that it has the appropriate number of observations (you can check the raw file, and also read the dataset description from UCI).
- Validate that you have no missing values.
- Add informative names to the features.
- The survival variable is encoded as 1 for surviving >5 years and 2 for not - change this to be 0 for not surviving and 1 for surviving >5 years (0/1 is a more traditional encoding of binary variables).

At the end, print the first five rows of the dataset to demonstrate the above.
import pandas as pd breast = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/haberman/haberman.data',names=['age','year_operation','nodes','survived']) print(breast.shape) print(breast.isna().sum()) labels = {'survived': {2:0}} breast.replace(labels, inplace=True) breast.survived.value_counts() print(breast.head())
age year_operation nodes survived 0 30 64 1 1 1 30 62 3 1 2 30 65 0 1 3 31 59 2 1 4 31 65 4 1
MIT
Boris_Krant_DS_Unit_1_Sprint_Challenge_1.ipynb
bkrant/DS-Unit-1-Sprint-1-Dealing-With-Data
Part 2 - Examine the distribution and relationships of the features

Explore the data - create at least *2* tables (can be summary statistics or crosstabulations) and *2* plots illustrating the nature of the data. This is open-ended, so to remind - first *complete* this task as a baseline, then go on to the remaining sections, and *then* as time allows revisit and explore further.

Hint - you may need to bin some variables depending on your chosen tables/plots.
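The hint above mentions binning; a hedged sketch using pandas' cut (the bin edges and labels are assumptions, adjust them to whatever makes the table most readable):

import pandas as pd

age_group = pd.cut(breast['age'], bins=[29, 40, 50, 60, 84],
                   labels=['30-40', '41-50', '51-60', '61+'])
pd.crosstab(age_group, breast['survived'], normalize='index')   # survival rate within each age band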
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np

# Tables: correlation matrix and summary statistics
print(breast.corr())
print(breast.describe())

# Plots
sns.heatmap(breast.corr());

sns.pairplot(breast);

g = sns.FacetGrid(breast, row="survived", margin_titles=True)
bins = np.linspace(0, breast.age.max())
g.map(plt.hist, "age", color="steelblue", bins=bins);

breast.boxplot();
_____no_output_____
MIT
Boris_Krant_DS_Unit_1_Sprint_Challenge_1.ipynb
bkrant/DS-Unit-1-Sprint-1-Dealing-With-Data
HVAC with Amazon SageMaker RL --- Introduction

HVAC stands for Heating, Ventilation and Air Conditioning and is responsible for keeping us warm and comfortable indoors. HVAC takes up a whopping 50% of the energy in a building and accounts for 40% of energy use in the US [1, 2]. Several control system optimizations have been proposed to reduce energy usage while ensuring thermal comfort.

Modern buildings collect data about the weather, occupancy and equipment use. All of this can be used to optimize HVAC energy usage. Reinforcement Learning (RL) is a good fit because it can learn how to interact with the environment and identify strategies to limit wasted energy. Several recent research efforts have shown that RL can reduce HVAC energy consumption by 15-20% [3, 4].

Because training an RL algorithm on a real HVAC system can take a long time to converge and can lead to hazardous settings while the agent explores its state space, we turn to a simulator to train the agent. [EnergyPlus](https://energyplus.net/) is an open source, state of the art HVAC simulator from the US Department of Energy. We use a simple example with this simulator to showcase how we can train an RL model easily with Amazon SageMaker RL.

1. Objective: Control the data center HVAC system to reduce energy consumption while ensuring the room temperature stays within specified limits.
2. Environment: We have a small single-room data center that the HVAC system is cooling to ensure the compute equipment works properly. We will train our RL agent to control this HVAC system for one day subject to weather conditions in San Francisco. The agent takes actions every 5 minutes for a 24 hour period. Hence, the episode is a fixed 120 steps.
3. State: The outdoor temperature, outdoor humidity and indoor room temperature.
4. Action: The agent can set the heating and cooling setpoints. The cooling setpoint tells the HVAC system that it should start cooling the room if the room temperature goes above this setpoint. Likewise, the HVAC system starts heating if the room temperature goes below the heating setpoint.
5. Reward: The reward has two components which are added together with coefficients: 1. It is proportional to the energy consumed by the HVAC system. 2. It gets a large penalty when the room temperature exceeds pre-specified lower or upper limits (as defined in `data_center_env.py`).

References
1. [sciencedirect.com](https://www.sciencedirect.com/science/article/pii/S0378778807001016)
2. [environment.gov.au](https://www.environment.gov.au/system/files/energy/files/hvac-factsheet-energy-breakdown.pdf)
3. Wei, Tianshu, Yanzhi Wang, and Qi Zhu. "Deep reinforcement learning for building hvac control." In Proceedings of the 54th Annual Design Automation Conference 2017, p. 22. ACM, 2017.
4. Zhang, Zhiang, and Khee Poh Lam. "Practical implementation and evaluation of deep reinforcement learning control for a radiant heating system." In Proceedings of the 5th Conference on Systems for Built Environments, pp. 148-157. ACM, 2018.

Pre-requisites

Imports

To get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations.
import sagemaker import boto3 import sys import os import glob import re import subprocess import numpy as np from IPython.display import HTML import time from time import gmtime, strftime sys.path.append("common") from misc import get_execution_role, wait_for_s3_object from docker_utils import build_and_push_docker_image from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
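To make the reward structure described in the introduction concrete, here is a hedged sketch. It is an illustration only, not the actual code in `data_center_env.py`; the coefficients, limits and names below are assumptions:

def sketch_reward(energy_kwh, room_temp_c,
                  temp_low=15.0, temp_high=30.0,
                  energy_coeff=-1.0, penalty_coeff=-10.0):
    """Illustrative reward: pay for energy used, pay a large extra penalty outside the comfort band."""
    penalty = 0.0
    if room_temp_c < temp_low or room_temp_c > temp_high:
        penalty = penalty_coeff * max(temp_low - room_temp_c, room_temp_c - temp_high)
    return energy_coeff * energy_kwh + penalty

print(sketch_reward(energy_kwh=12.0, room_temp_c=31.5))   # 1.5 degrees over the limit -> heavily penalised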
Setup S3 bucketCreate a reference to the default S3 bucket that will be used for model outputs.
sage_session = sagemaker.session.Session() s3_bucket = sage_session.default_bucket() s3_output_path = 's3://{}/'.format(s3_bucket) print("S3 bucket path: {}".format(s3_output_path))
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
Define Variables We define a job below that's used to identify our jobs.
# create unique job name job_name_prefix = 'rl-hvac'
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
Configure settings

You can run your RL training jobs either in 'local' mode (the training runs in Docker containers on this notebook instance) or in 'SageMaker' mode (the job runs on separate SageMaker training instances). 'local' mode uses the SageMaker Python SDK to run your code in Docker containers locally. It can speed up iterative testing and debugging while using the same familiar Python SDK interface. Just set `local_mode = True`. When you're ready, move to 'SageMaker' mode to scale things up.
# run local (on this machine)? # or on sagemaker training instances? local_mode = False if local_mode: instance_type = 'local' else: # choose a larger instance to avoid running out of memory instance_type = "ml.m4.4xlarge"
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
Create an IAM role

Either get the execution role when running from a SageMaker notebook instance, `role = sagemaker.get_execution_role()`, or, when running from a local notebook instance, use the utils method `role = get_execution_role()` to create an execution role.
try: role = sagemaker.get_execution_role() except: role = get_execution_role() print("Using IAM role arn: {}".format(role))
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
Install docker for `local` modeIn order to work in `local` mode, you need to have docker installed. When running from your local machine, please make sure that you have docker or docker-compose (for local CPU machines) and nvidia-docker (for local GPU machines) installed. Alternatively, when running from a SageMaker notebook instance, you can simply run the following script to install dependencies.Note, you can only run a single local notebook at one time.
# Only run from SageMaker notebook instance if local_mode: !/bin/bash ./common/setup.sh
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples