For a regression task, load the dataset.
from sklearn.datasets import fetch_california_housing
import pandas as pd

cal_housing = fetch_california_housing()
print(cal_housing.DESCR)
X = cal_housing.data
y = cal_housing.target
cal_features = cal_housing.feature_names
df = pd.concat((pd.DataFrame(X, columns=cal_features), pd.DataFrame({'MedianHouseVal': y})), axis=1)
df.head()
Visualizing a Decision Tree. You will need to install the `pydotplus` library.
#!pip install pydotplus
import pydotplus
from IPython.display import Image
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, export_graphviz

# Create dataset
X_train, X_test, y_train, y_test = train_test_split(df[cal_features], y, test_size=0.2)
dt_reg = DecisionTreeRegressor(max_depth=3)
dt_reg.fit(X_train, y_train)
dot_data = export_graphviz(dt_reg, out_file="ca_housing.dot", feature_names=cal_features, filled=True, rounded=True, special_characters=True, leaves_parallel=False)
graph = pydotplus.graphviz.graph_from_dot_file("ca_housing.dot")
Image(graph.create_png())
Make a sample prediction.
X_test[cal_features].iloc[[0]].transpose()
dt_reg.predict(X_test[cal_features].iloc[[0]])
The root node is the mean of the labels from the training data.
y_train.mean()
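As a quick sanity check, sklearn stores each node's value on the fitted tree's `tree_` attribute, and node 0 is the root; a minimal sketch assuming the `dt_reg` fitted above:

import numpy as np
# the root node's stored value is the mean target of all training samples
root_value = dt_reg.tree_.value[0][0][0]
assert np.isclose(root_value, y_train.mean())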
Train a simple Random Forest
from sklearn.ensemble import RandomForestRegressor
import numpy as np
from treeinterpreter import treeinterpreter

rf_reg = RandomForestRegressor()
rf_reg.fit(X_train, y_train)
print(f'Instance 11 prediction: {rf_reg.predict(X_test.iloc[[11]])}')
print(f'Instance 17 prediction: {rf_reg.predict(X_test.iloc[[17]])}')

idx = 11
prediction, bias, contributions = treeinterpreter.predict(rf_reg, X_test.iloc[[idx]].values)
print(f'prediction: {prediction}')
print(f'bias: {bias}')
print(f'contributions: {contributions}')

for idx in [11, 17]:
    print(f'Instance: {idx}')
    prediction, bias, contributions = treeinterpreter.predict(rf_reg, X_test.iloc[[idx]].values)
    print(f'Bias term (training set mean): {bias}')
    print(f'Feature contributions:')
    for contribution, feature in sorted(zip(contributions[0], cal_features), key=lambda x: -abs(x[0])):
        print(feature, round(contribution, 2))
    print('-'*20)

idx = 17
prediction, bias, contributions = treeinterpreter.predict(rf_reg, X_test.iloc[[idx]].values)
print(f'prediction: {prediction[0]}')
print(f'bias + contributions: {bias + np.sum(contributions)}')
prediction: [4.8203671] bias + contributions: [4.8203671]
In fact, we can check that this holds for all elements of the test set:
predictions, biases, contributions = treeinterpreter.predict(rf_reg, X_test.values)
assert(np.allclose(np.squeeze(predictions), biases + np.sum(contributions, axis=1)))
assert(np.allclose(rf_reg.predict(X_test), biases + np.sum(contributions, axis=1)))
Comparing Contributions across data slices
from treeinterpreter import treeinterpreter as ti

X1_test = X_test[:X_test.shape[0]//2]
X2_test = X_test[X_test.shape[0]//2:]
predictions1, biases1, contributions1 = ti.predict(rf_reg, X1_test.values)
predictions2, biases2, contributions2 = ti.predict(rf_reg, X2_test.values)
total_contribs1 = np.mean(contributions1, axis=0)
total_contribs2 = np.mean(contributions2, axis=0)
print(f'Total contributions from X1_test: {total_contribs1}')
print(f'Total contributions from X2_test: {total_contribs2}')
print(f'Sum of feature contributions differences: {np.sum(total_contribs1 - total_contribs2)}')
print(f'Difference between the average predictions: {np.mean(predictions1) - np.mean(predictions2)}')
TreeExplainer with SHAP
from sklearn.model_selection import train_test_split
import xgboost as xgb
import shap

# print the JS visualization code to the notebook
shap.initjs()

# the target (median house value) is continuous, so use the XGBoost regressor
xgb_reg = xgb.XGBRegressor(max_depth=3, n_estimators=300, learning_rate=0.05)
xgb_reg.fit(X_train, y_train)
model_rmse_error = np.sqrt(np.mean((xgb_reg.predict(X_test) - y_test)**2))
print(f'Root mean squared error of the XGBoost model: {model_rmse_error}')

explainer = shap.TreeExplainer(xgb_reg)
shap_values = explainer.shap_values(X_train)
# for a single-output regressor, expected_value is a scalar and shap_values has shape (n_samples, n_features)
shap.force_plot(explainer.expected_value, shap_values[0,:], X_train.iloc[0,:])
shap.force_plot(explainer.expected_value, shap_values[:1000,:], X_train.iloc[:1000,:])
Bayesian Ridge Regression Part 2: Multiple Features
import numpy as np import matplotlib.pyplot as plt import pandas as pd import warnings warnings.filterwarnings("ignore") # yahoo finance is used to fetch data import yfinance as yf yf.pdr_override() # input symbol = 'AMD' start = '2014-01-01' end = '2018-08-27' # Read data dataset = yf.download(symbol,start,end) # View Columns dataset.head() dataset['Increase_Decrease'] = np.where(dataset['Volume'].shift(-1) > dataset['Volume'],1,0) dataset['Buy_Sell_on_Open'] = np.where(dataset['Open'].shift(-1) > dataset['Open'],1,0) dataset['Buy_Sell'] = np.where(dataset['Adj Close'].shift(-1) > dataset['Adj Close'],1,0) dataset['Returns'] = dataset['Adj Close'].pct_change() dataset = dataset.dropna() dataset.head() dataset.shape X = np.asanyarray(dataset[['Open','High','Low', 'Volume', 'Increase_Decrease', 'Buy_Sell_on_Open', 'Buy_Sell', 'Returns']]) y = np.asanyarray(dataset[['Adj Close']]) from sklearn.linear_model import BayesianRidge, LinearRegression # Fit the Bayesian Ridge Regression and an OLS for comparison model = BayesianRidge(compute_score=True) model.fit(X, y) model.coef_ model.scores_ from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0) model = BayesianRidge(compute_score=True) model.fit(X_train, y_train) model.coef_ model.scores_ y_pred = model.predict(X_test) from sklearn.metrics import mean_squared_error print('The rmse of prediction is:', mean_squared_error(y_test, y_pred) ** 0.5) print('Bayesian Ridge Regression Score:', model.score(X_test, y_test))
Bayesian Ridge Regression Score: 0.9996452147933678
Application of Linear Algebra in Data Science. Here is the Python code to calculate and plot the MSE.
import matplotlib.pyplot as plt

x = list(range(1,6))  # data points
y = [1,1,2,2,4]  # original values
y_bar = [0.6,1.29,1.99,2.69,3.4]  # predicted values
summation = 0
n = len(y)
for i in range(0, n):
    # finding the difference between observed and predicted value
    difference = y[i] - y_bar[i]
    squared_difference = difference**2  # taking the square of the difference
    summation = summation + squared_difference  # taking a sum of all the differences
MSE = summation/n  # get the average of all
print("The Mean Square Error is: ", MSE)

# Plot relationship
plt.scatter(x, y, color='#06AED5')
plt.plot(x, y_bar, color='#1D3557', linewidth=2)
plt.xlabel('Data Points', fontsize=12)
plt.ylabel('Output', fontsize=12)
plt.title("MSE")
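For comparison, the same computation collapses to one line with NumPy arrays; a small sketch using the same y and y_bar as above:

import numpy as np
y = np.array([1, 1, 2, 2, 4])
y_bar = np.array([0.6, 1.29, 1.99, 2.69, 3.4])
MSE = np.mean((y - y_bar) ** 2)  # mean of the squared residuals
print("The Mean Square Error is: ", MSE)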
Exercises: Electric Machinery Fundamentals, Chapter 2, Problem 2-15
%pylab notebook
Populating the interactive namespace from numpy and matplotlib
Description: An autotransformer is used to connect a 12.6-kV distribution line to a 13.8-kV distribution line. It must be capable of handling 2000 kVA. There are three phases, connected Y-Y with their neutrals solidly grounded.
Vl = 12.6e3   # [V]
Vh = 13.8e3   # [V]
Sio = 2000e3  # [VA]
(a) What must the $N_C / N_{SE}$ turns ratio be to accomplish this connection? (b) How much apparent power must the windings of each autotransformer handle? (c) What is the power advantage of this autotransformer system? (d) If one of the autotransformers were reconnected as an ordinary transformer, what would its ratings be? SOLUTION (a) The transformer is connected Y-Y, so the primary and secondary phase voltages are the line voltages divided by $\sqrt{3}$. The turns ratio of each autotransformer is given by:$$\frac{V_H}{V_L} = \frac{N_C + N_{SE}}{N_C}$$
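Rearranging for the ratio asked for in part (a), spelling out the step behind the next cell: $$\frac{N_{SE}}{N_C} = \frac{V_H}{V_L} - 1 \qquad\Rightarrow\qquad \frac{N_C}{N_{SE}} = \frac{1}{a - 1} = \frac{1}{13.8/12.6 - 1} \approx 10.5$$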
a = (Vh/sqrt(3)) / (Vl/sqrt(3))
n_a = 1 / (a-1)  # n_a = Nc/Nse
print('''
Nc/Nse = {:.1f}
=============
'''.format(n_a))
Nc/Nse = 10.5 =============
(b) The power advantage of this autotransformer is:$$\frac{S_{IO}}{S_W} = \frac{N_C + N_{SE}}{N_{SE}}$$
n_b = (n_a + 1) / 1  # n_b = Sio/Sw = (Nc + Nse)/Nse
print('Sio/Sw = {:.1f}'.format(n_b))
Sio/Sw = 11.5
Since 1/3 of the total power is associated with each phase, **the windings in each autotransformer must handle:**
Sw = Sio / (3*n_b)
print('''
Sw = {:.1f} kVA
==============
'''.format(Sw/1000))
Sw = 58.0 kVA ==============
(c) As determined in (b), the power advantage of this autotransformer system is:
print(''' Sio/Sw = {:.1f} ============= '''.format(n_b))
Sio/Sw = 11.5 =============
(d) The voltages across each phase of the autotransformer are:
Vh_p = Vh / sqrt(3)
Vl_p = Vl / sqrt(3)
print('''
Vh_p = {:.0f} V
Vl_p = {:.0f} V
'''.format(Vh_p, Vl_p))
Vh_p = 7967 V Vl_p = 7275 V
The voltage across the common winding ( $N_C$ ) is:
Vnc = Vl_p
print('Vnc = {:.0f} V'.format(Vnc))
Vnc = 7275 V
and the voltage across the series winding ( $N_{SE}$ ) is:
Vnse = Vh_p - Vl_p
print('Vnse = {:.0f} V'.format(Vnse))
Vnse = 693 V
Therefore, a single phase of the autotransformer connected as an ordinary transformer would be rated at:
print(''' Vnc/Vnse = {:.0f}/{:.0f} Sw = {:.1f} kVA =================== ============= '''.format(Vnc, Vnse, Sw/1000))
Vnc/Vnse = 7275/693 Sw = 58.0 kVA =================== =============
Analyze A/B Test Results You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way, assure that your code passes the project [RUBRIC](https://review.udacity.com/!/projects/37e27304-ad47-4eb0-a1ab-8c12f60e43d0/rubric). **Please save regularly.** This project will assure you have mastered the subjects covered in the statistics lessons. The hope is to have this project be as comprehensive of these topics as possible. Good luck! Table of Contents - [Introduction](#intro) - [Part I - Probability](#probability) - [Part II - A/B Test](#ab_test) - [Part III - Regression](#regression) Introduction A/B tests are very commonly performed by data analysts and data scientists. It is important that you get some practice working with the difficulties of these tests. For this project, you will be working to understand the results of an A/B test run by an e-commerce website. Your goal is to work through this notebook to help the company understand if they should implement the new page, keep the old page, or perhaps run the experiment longer to make their decision. **As you work through this notebook, follow along in the classroom and answer the corresponding quiz questions associated with each question.** The labels for each classroom concept are provided for each question. This will assure you are on the right track as you work through the project, and you can feel more confident in your final submission meeting the criteria. As a final check, assure you meet all the criteria on the [RUBRIC](https://review.udacity.com/!/projects/37e27304-ad47-4eb0-a1ab-8c12f60e43d0/rubric). Part I - Probability To get started, let's import our libraries.
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
%matplotlib inline
# We are setting the seed to assure you get the same answers on quizzes as we set up
random.seed(42)
`1.` Now, read in the `ab_data.csv` data. Store it in `df`. **Use your dataframe to answer the questions in Quiz 1 of the classroom.**a. Read in the dataset and take a look at the top few rows here:
df = pd.read_csv('ab_data.csv') df.head()
b. Use the cell below to find the number of rows in the dataset.
df.shape[0]
c. The number of unique users in the dataset.
df.nunique()[0]
d. The proportion of users converted.
df['converted'].sum() / df.shape[0]
e. The number of times the `new_page` and `treatment` don't match.
df[((df['group'] == 'treatment') & (df['landing_page'] != 'new_page')) | ((df['group'] != 'treatment') & (df['landing_page'] == 'new_page'))].shape[0]
f. Do any of the rows have missing values?
df.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 294478 entries, 0 to 294477 Data columns (total 5 columns): user_id 294478 non-null int64 timestamp 294478 non-null object group 294478 non-null object landing_page 294478 non-null object converted 294478 non-null int64 dtypes: int64(2), object(3) memory usage: 11.2+ MB
`2.` For the rows where **treatment** does not match with **new_page** or **control** does not match with **old_page**, we cannot be sure if this row truly received the new or old page. Use **Quiz 2** in the classroom to figure out how we should handle these rows. a. Now use the answer to the quiz to create a new dataset that meets the specifications from the quiz. Store your new dataframe in **df2**.
df2 = df[(((df['group'] == 'treatment') & (df['landing_page'] == 'new_page')) | ((df['group'] == 'control') & (df['landing_page'] == 'old_page')))]
df2.head()
# Double check all of the correct rows were removed - this should be 0
df2[((df2['group'] == 'treatment') == (df2['landing_page'] == 'new_page')) == False].shape[0]
`3.` Use **df2** and the cells below to answer questions for **Quiz3** in the classroom. a. How many unique **user_id**s are in **df2**?
df2.nunique()[0]
b. There is one **user_id** repeated in **df2**. What is it?
uid = df2[df2['user_id'].duplicated() == True].index[0] uid
c. What is the row information for the repeat **user_id**?
df2.loc[uid]
d. Remove **one** of the rows with a duplicate **user_id**, but keep your dataframe as **df2**.
df2.drop(2893, inplace=True) df2.shape[0]
/opt/conda/lib/python3.6/site-packages/pandas/core/frame.py:3697: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy errors=errors)
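The SettingWithCopyWarning above appears because df2 was created as a slice of df. A common, warning-free alternative (a sketch of the same two steps, optional here since the drop still worked) is to take an explicit copy before mutating:

df2 = df[((df['group'] == 'treatment') & (df['landing_page'] == 'new_page')) | ((df['group'] == 'control') & (df['landing_page'] == 'old_page'))].copy()  # explicit copy detaches df2 from df
df2.drop(2893, inplace=True)  # same removal as above, now without the warning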
`4.` Use **df2** in the cells below to answer the quiz questions related to **Quiz 4** in the classroom.a. What is the probability of an individual converting regardless of the page they receive?
df2[df2['converted'] == 1].shape[0] / df2.shape[0]
b. Given that an individual was in the `control` group, what is the probability they converted?
df2[(df2['converted'] == 1) & ((df2['group'] == 'control'))].shape[0] / df2[(df2['group'] == 'control')].shape[0]
c. Given that an individual was in the `treatment` group, what is the probability they converted?
df2[(df2['converted'] == 1) & ((df2['group'] == 'treatment'))].shape[0] / df2[(df2['group'] == 'treatment')].shape[0]
d. What is the probability that an individual received the new page?
df2[df2['landing_page'] == 'new_page'].shape[0] / df2.shape[0]
e. Consider your results from parts (a) through (d) above, and explain below whether you think there is sufficient evidence to conclude that the new treatment page leads to more conversions. **The probability of converting for an individual who received the control page is slightly higher than for one who received the treatment page, so control-page viewers are marginally more likely to convert. There is therefore not sufficient evidence that the new treatment page leads to more conversions.** Part II - A/B Test Notice that because of the time stamp associated with each event, you could technically run a hypothesis test continuously as each observation was observed. However, then the hard question is: do you stop as soon as one page is considered significantly better than another, or does it need to happen consistently for a certain amount of time? How long do you run to render a decision that neither page is better than another? These questions are the difficult parts associated with A/B tests in general. `1.` For now, consider you need to make the decision just based on all the data provided. If you want to assume that the old page is better unless the new page proves to be definitely better at a Type I error rate of 5%, what should your null and alternative hypotheses be? You can state your hypothesis in terms of words or in terms of **$p_{old}$** and **$p_{new}$**, which are the converted rates for the old and new pages. **$H_0: p_{new} - p_{old} \leq 0$ (the new page is no better than the old page); $H_1: p_{new} - p_{old} > 0$ (the new page converts better), tested at $\alpha = 0.05$.** `2.` Assume under the null hypothesis, $p_{new}$ and $p_{old}$ both have "true" success rates equal to the **converted** success rate regardless of page - that is, $p_{new}$ and $p_{old}$ are equal. Furthermore, assume they are equal to the **converted** rate in **ab_data.csv** regardless of the page. Use a sample size for each page equal to the ones in **ab_data.csv**. Build the sampling distribution for the difference in **converted** between the two pages over 10,000 iterations of calculating an estimate from the null. Use the cells below to provide the necessary parts of this simulation. If this doesn't make complete sense right now, don't worry - you are going to work through the problems below to complete this problem. You can use **Quiz 5** in the classroom to make sure you are on the right track.
df2.head()
a. What is the **conversion rate** for $p_{new}$ under the null?
p_new = df2[(df2['converted'] == 1)].shape[0] / df2.shape[0] p_new
b. What is the **conversion rate** for $p_{old}$ under the null?
p_old = df2[(df2['converted'] == 1)].shape[0] / df2.shape[0] p_old
c. What is $n_{new}$, the number of individuals in the treatment group?
n_new = df2[(df2['landing_page'] == 'new_page') & (df2['group'] == 'treatment')].shape[0] n_new
d. What is $n_{old}$, the number of individuals in the control group?
n_old = df2[(df2['landing_page'] == 'old_page') & (df2['group'] == 'control')].shape[0] n_old
e. Simulate $n_{new}$ transactions with a conversion rate of $p_{new}$ under the null. Store these $n_{new}$ 1's and 0's in **new_page_converted**.
new_page_converted = np.random.choice([1,0],n_new, p=(p_new,1-p_new)) new_page_converted.mean()
f. Simulate $n_{old}$ transactions with a conversion rate of $p_{old}$ under the null. Store these $n_{old}$ 1's and 0's in **old_page_converted**.
old_page_converted = np.random.choice([1,0],n_old, p=(p_old,1-p_old)) old_page_converted.mean()
g. Find $p_{new}$ - $p_{old}$ for your simulated values from part (e) and (f).
# p_new - p_old new_page_converted.mean() - old_page_converted.mean()
h. Create 10,000 $p_{new}$ - $p_{old}$ values using the same simulation process you used in parts (a) through (g) above. Store all 10,000 values in a NumPy array called **p_diffs**.
p_diffs = []
for _ in range(10000):
    new_page_converted = np.random.choice([0, 1], size=n_new, p=[1-p_new, p_new], replace=True).sum()
    old_page_converted = np.random.choice([0, 1], size=n_old, p=[1-p_old, p_old], replace=True).sum()
    diff = new_page_converted/n_new - old_page_converted/n_old
    p_diffs.append(diff)
p_diffs = np.array(p_diffs)
p_diffs
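Because each simulated difference only depends on the two conversion counts, the 10,000-iteration loop can also be vectorized with binomial draws; a sketch assuming p_new, p_old, n_new, and n_old from above:

# one binomial draw per simulated experiment, 10,000 experiments at once
new_converted = np.random.binomial(n_new, p_new, 10000)
old_converted = np.random.binomial(n_old, p_old, 10000)
p_diffs_vec = new_converted / n_new - old_converted / n_old  # equivalent to the loop's p_diffs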
i. Plot a histogram of the **p_diffs**. Does this plot look like what you expected? Use the matching problem in the classroom to assure you fully understand what was computed here.
plt.hist(p_diffs); plt.plot();
j. What proportion of the **p_diffs** are greater than the actual difference observed in **ab_data.csv**?
# proportion of simulated differences greater than the actual observed difference
obs_diff = df2[df2['group'] == 'treatment']['converted'].mean() - df2[df2['group'] == 'control']['converted'].mean()
prop = (p_diffs > obs_diff).mean()
prop
k. Please explain using the vocabulary you've learned in this course what you just computed in part **j.** What is this value called in scientific studies? What does this value mean in terms of whether or not there is a difference between the new and old pages? **The proportion computed in part j. is the p-value: the probability of observing a difference in conversion rates at least as extreme as the observed one, assuming the null hypothesis is true. Since the p-value is large (well above the 0.05 threshold), we fail to reject the null - the difference between the new and old pages is not significant.** l. We could also use a built-in to achieve similar results. Though using the built-in might be easier to code, the above portions are a walkthrough of the ideas that are critical to correctly thinking about statistical significance. Fill in the below to calculate the number of conversions for each page, as well as the number of individuals who received each page. Let `n_old` and `n_new` refer to the number of rows associated with the old page and new pages, respectively.
import statsmodels.api as sm

convert_old = df2[df2['landing_page'] == 'old_page']['converted'].sum()
convert_new = df2[df2['landing_page'] == 'new_page']['converted'].sum()
n_old = df2[df2['landing_page'] == 'old_page'].shape[0]
n_new = df2[df2['landing_page'] == 'new_page'].shape[0]
convert_old, convert_new, n_old, n_new
m. Now use `stats.proportions_ztest` to compute your test statistic and p-value. [Here](http://knowledgetack.com/python/statsmodels/proportions_ztest/) is a helpful link on using the built in.
from statsmodels.stats.proportion import proportions_ztest

# one-sided test of whether the new page's conversion rate exceeds the old page's
stat, pval = proportions_ztest([convert_new, convert_old], [n_new, n_old], alternative='larger')
stat, pval
n. What do the z-score and p-value you computed in the previous question mean for the conversion rates of the old and new pages? Do they agree with the findings in parts **j.** and **k.**? **The z-score measures how many standard errors the observed difference in conversion rates lies from the value expected under the null, and the p-value gives the probability of a result at least that extreme under the null. With a large p-value we again fail to reject the null, which agrees with the findings in parts j. and k.** Part III - A regression approach `1.` In this final part, you will see that the result you achieved in the A/B test in Part II above can also be achieved by performing regression. a. Since each row is either a conversion or no conversion, what type of regression should you be performing in this case? **Logistic regression, since the response variable is binary.**
df2.head()
b. The goal is to use **statsmodels** to fit the regression model you specified in part **a.** to see if there is a significant difference in conversion based on which page a customer receives. However, you first need to create in df2 a column for the intercept, and create a dummy variable column for which page each user received. Add an **intercept** column, as well as an **ab_page** column, which is 1 when an individual receives the **treatment** and 0 if **control**.
import statsmodels.api as sm df2[['control','ab_page']] = pd.get_dummies(df2['group']) df2.drop(['control','group'],axis=1, inplace=True) df2.head()
/opt/conda/lib/python3.6/site-packages/pandas/core/frame.py:3140: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy self[k1] = value[k2] /opt/conda/lib/python3.6/site-packages/pandas/core/frame.py:3697: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy errors=errors)
c. Use **statsmodels** to instantiate your regression model on the two columns you created in part b., then fit the model using the two columns you created in part **b.** to predict whether or not an individual converts.
df2['intercept'] = 1
logit_mod = sm.Logit(df2['converted'], df2[['intercept','ab_page']])
results = logit_mod.fit()
# interpret the ab_page coefficient (-0.0150) as an odds ratio, and its reciprocal
np.exp(-0.0150)
1/np.exp(-0.0150)
d. Provide the summary of your model below, and use it as necessary to answer the following questions.
results.summary()
e. What is the p-value associated with **ab_page**? Why does it differ from the value you found in **Part II**? **Hint**: What are the null and alternative hypotheses associated with your regression model, and how do they compare to the null and alternative hypotheses in **Part II**? **P-value = 0.190. It differs from Part II because the regression tests a two-sided alternative ($p_{new} \neq p_{old}$), whereas the test in Part II was one-sided ($p_{new} > p_{old}$).** f. Now, you are considering other things that might influence whether or not an individual converts. Discuss why it is a good idea to consider other factors to add into your regression model. Are there any disadvantages to adding additional terms into your regression model? **Yes, it is good to check additional fields that may influence conversion. A disadvantage is that the model may not be as easy to interpret as in the previous case.** g. Now along with testing if the conversion rate changes for different pages, also add an effect based on which country a user lives in. You will need to read in the **countries.csv** dataset and merge together your datasets on the appropriate rows. [Here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html) are the docs for joining tables. Does it appear that country had an impact on conversion? Don't forget to create dummy variables for these country columns - **Hint: You will need two columns for the three dummy variables.** Provide the statistical output as well as a written response to answer this question.
df_countries = pd.read_csv('countries.csv')
df_countries.head()
df_merged = pd.merge(df2, df_countries, left_on='user_id', right_on='user_id')
df_merged.head()
# pd.get_dummies returns the columns in alphabetical order: CA, UK, US
df_merged[['CA','UK','US']] = pd.get_dummies(df_merged['country'])
df_merged.drop(['country','CA'], axis=1, inplace=True)  # CA is the baseline
df_merged.head()
df_merged['intercept'] = 1
logit_mod = sm.Logit(df_merged['converted'], df_merged[['intercept','US','UK']])
results = logit_mod.fit()
results.summary()
Optimization terminated successfully. Current function value: 0.366116 Iterations 6
**With CA as the baseline, the US and UK coefficients describe how the odds of converting shift for users in those countries relative to Canada; neither coefficient is statistically significant, so country does not appear to have a meaningful impact on conversion.** h. Though you have now looked at the individual factors of country and page on conversion, we would now like to look at an interaction between page and country to see if there are significant effects on conversion. Create the necessary additional columns, and fit the new model. Provide the summary results, and your conclusions based on the results.
# build interaction terms between page and country (CA remains the baseline)
df_merged['US_ab_page'] = df_merged['US'] * df_merged['ab_page']
df_merged['UK_ab_page'] = df_merged['UK'] * df_merged['ab_page']
df_merged['intercept'] = 1
logit_mod = sm.Logit(df_merged['converted'], df_merged[['intercept','ab_page','US','UK','US_ab_page','UK_ab_page']])
results = logit_mod.fit()
results.summary()
**The interaction terms (US × ab_page and UK × ab_page) capture whether the effect of the new page differs by country. If their coefficients are not statistically significant, the page effect does not vary meaningfully across countries, which reinforces the earlier conclusion that there is no evidence the new page improves conversion.** Finishing Up > Congratulations! You have reached the end of the A/B Test Results project! You should be very proud of all you have accomplished! > **Tip**: Once you are satisfied with your work here, check over your report to make sure that it satisfies all the areas of the rubric (found on the project submission page at the end of the lesson). You should also probably remove all of the "Tips" like this one so that the presentation is as polished as possible. Directions to Submit > Before you submit your project, you need to create a .html or .pdf version of this notebook in the workspace here. To do that, run the code cell below. If it worked correctly, you should get a return code of 0, and you should see the generated .html file in the workspace directory (click on the orange Jupyter icon in the upper left). > Alternatively, you can download this report as .html via the **File** > **Download as** submenu, and then manually upload it into the workspace directory by clicking on the orange Jupyter icon in the upper left, then using the Upload button. > Once you've done this, you can submit your project by clicking on the "Submit Project" button in the lower right here. This will create and submit a zip file with this .ipynb doc and the .html or .pdf version you created. Congratulations!
from subprocess import call call(['python', '-m', 'nbconvert', 'Analyze_ab_test_results_notebook.ipynb'])
Eurocode 8 - Chapter 3 - seismic_action raw functions
from streng.codes.eurocodes.ec8.raw.ch3.seismic_action import spectra
spectra αg
print(spectra.αg.__doc__) αg = spectra.αg(αgR=0.24, γI=1.20) print(f'αg = {αg}g')
αg = 0.288g
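Written out, the function simply scales the reference peak ground acceleration by the importance factor: $$\alpha_g = \gamma_I \, \alpha_{gR} = 1.20 \times 0.24 = 0.288$$ (in units of $g$), matching the printed value.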
S
print(spectra.S.__doc__) S = spectra.S(ground_type='B', spectrum_type=1) print(f'S = {S}')
S = 1.2
TB
print(spectra.TB.__doc__) TB = spectra.TB(ground_type='B', spectrum_type=1) print(f'TB = {TB}')
TB = 0.15
TC
print(spectra.TC.__doc__) TC = spectra.TC(ground_type='B', spectrum_type=1) print(f'TC = {TC}')
TC = 0.5
TD
print(spectra.TD.__doc__) TD = spectra.TD(ground_type='B', spectrum_type=1) print(f'TD = {TD}')
TD = 2.0
Se
print(spectra.Se.__doc__) Se = spectra.Se(T=0.50, αg = 0.24, S=1.20, TB=0.15, TC=0.50, TD=2.0, η=1.0) print(f'Se = {Se}g')
Se = 0.72g
SDe
print(spectra.SDe.__doc__) Sde = spectra.SDe(T=0.5, Se=0.72*9.81) print(f'Sde = {Sde:.3f}m')
Sde = 0.045m
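Both results can be verified by hand. In the plateau range $T_B \le T \le T_C$ the EC8 elastic spectrum reduces to $S_e(T) = a_g \, S \, \eta \cdot 2.5 = 0.24 \times 1.20 \times 1.0 \times 2.5 = 0.72g$, and the elastic displacement spectrum follows from $$S_{De}(T) = S_e(T) \left[\frac{T}{2\pi}\right]^2 = 0.72 \times 9.81 \times \left(\frac{0.5}{2\pi}\right)^2 \approx 0.045\ \mathrm{m},$$ consistent with the two outputs above.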
dg
print(spectra.dg.__doc__) dg = spectra.dg(αg=0.24, S=1.20, TC=0.50, TD=2.0) print(f'dg = {dg:.4f}g')
dg = 0.0072g
Sd
print(spectra.Sd.__doc__) Sd = spectra.Sd(T=0.50, αg = 0.24, S=1.20, TB=0.15, TC=0.50, TD=2.0, q=3.9, β=0.20) print(f'Sd = {Sd:.3f}g')
Sd = 0.185g
η
print(spectra.η.__doc__) η_5 = spectra.η(5) print(f'η(5%) = {η_5:.2f}') η_7 = spectra.η(7) print(f'η(7%) = {η_7:.2f}')
η(5%) = 1.00 η(7%) = 0.91
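The damping correction factor computed here follows the EC8 expression $$\eta = \sqrt{\frac{10}{5 + \xi}} \ge 0.55,$$ so $\xi = 5\%$ gives $\eta = 1.00$ (the reference value) and $\xi = 7\%$ gives $\eta = \sqrt{10/12} \approx 0.91$, matching the printed values.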
Standalone Convergence Checker for the numerical vKdV solver. Copied from the standalone convergence checker for the numerical KdV solver, just with bathymetry added. Does not save or require any input data.
import xarray as xr from iwaves.kdv.kdvimex import KdVImEx#from_netcdf from iwaves.kdv.vkdv import vKdV from iwaves.kdv.solve import solve_kdv #from iwaves.utils.plot import vKdV_plot import iwaves.utils.initial_conditions as ics import numpy as np from scipy.interpolate import PchipInterpolator as pchip import matplotlib.pyplot as plt %matplotlib inline from matplotlib import rcParams # Set font sizes rcParams['font.family'] = 'sans-serif' rcParams['font.sans-serif'] = ['Bitstream Vera Sans'] rcParams['font.serif'] = ['Bitstream Vera Sans'] rcParams["font.size"] = "14" rcParams['axes.labelsize']='large' # CONSTANTS FOR WHOLE NOTEBOOK d = 252.5 L_d = 3.0e5 Nz = 100 # Functions def run_kdv(args): """ Main function for generating different soliton scenarios """ rho_params, bathy_params, a0, L_d, mode, nu_H, dx, runtime, dt, Lw = args #################################################### # Inputs mode = 0 Nz = 100 ntout = 1800.0 z = np.linspace(0, -d, Nz) dz = np.abs(z[1]-z[0]) x = np.arange(-2*dx,L_d+dx,dx) h = ics.depth_tanh2(bathy_params, x) kdvargs = dict(\ verbose=False,\ a0=a0,\ Lw=Lw,\ mode=mode, dt=dt,\ nu_H=nu_H,\ ekdv=False,\ wavefunc=ics.eta_fullsine,\ #L_d = L_d, x=x,\ Nsubset=10, ) ### # THIS WAS COPIED FROM THE KdV VERSION. IT INITIALISES EACH vKdV 3 TIMES - QUITE SLOW. ### ii=0 #rhoz = single_tanh_rho( # z, pp['rho0'][ii], pp['drho1'][ii], pp['z1'][ii], pp['h1'][ii]) rhoz = ics.rho_double_tanh_rayson(rho_params,z) ###### ## Call the vKdV run function mykdv, Bda, density = solve_kdv(rhoz, z, runtime,\ solver='vkdv', h=h, ntout=ntout, outfile=None, **kdvargs) print('Done with dx={} and dt={}'.format(dx, dt)) return mykdv, Bda dx = 10 x = np.arange(-2*dx,L_d+dx,dx) bathy_params = [L_d*0.6, 50000, d+50, d-50] h = ics.depth_tanh2(bathy_params, x) plt.figure(figsize=(9,5)) plt.plot(x, h, 'k') plt.ylabel('h (m)') plt.xlabel('x (m)') plt.title('vKdV bathy') #betas = [1023.7, 1.12, 105, 52, 155, 43] # ~April 5 #betas = [1023.5, 1.22, 67, 55, 157, 52] # ~March 1 betas_w = [1023.8229810318612, 0.9865506702797462, 143.5428700089361, 46.1265812512485, 136.66278860120943, 41.57014327398592] # 15 July 2016 betas_s =[1023.6834358117951, 1.2249066117658955, 156.78804559089772, 53.66835548728355, 73.14183287436342, 40.21031777315428] # 1st April 2017 a0 = 20. mode =0 nu_H = 0 runtime = 1.25*86400. # Going to make Lw an input for the vKdV as it will really speed things up. dx = 100 dt = 10 z = np.linspace(0, -d, Nz) rhoz_w = ics.rho_double_tanh_rayson(betas_w, z) rhoz_s = ics.rho_double_tanh_rayson(betas_s, z) Lw_w = ics.get_Lw(rhoz_w, z, z0=max(h), mode=0) Lw_s = ics.get_Lw(rhoz_s, z, z0=max(h), mode=0) print(Lw_w) print(Lw_s) dxs =[1600,800,400,200,100,75,50,37.5,25] dxs =[800,400,200,100,75,50,35] dt = 8. 
all_kdv_dx_w = [] all_kdv_dx_s = [] for dx in dxs: print(' ') print('Running dx={}'.format(dx)) print(' ') mykdv, B = run_kdv( (betas_w, bathy_params, a0, L_d, mode, nu_H, dx, runtime, dt, Lw_w)) all_kdv_dx_w.append(mykdv) mykdv, B = run_kdv( (betas_s, bathy_params, a0, L_d, mode, nu_H, dx, runtime, dt, Lw_s)) all_kdv_dx_s.append(mykdv) print(' ') print('Completed dx={}'.format(dx)) print(' ') plt.figure(figsize=(9,5)) for mykdv in all_kdv_dx_s: plt.plot(mykdv.x, mykdv.B, label=mykdv.dx_s) # plt.xlim((162200, 163600)) plt.legend() plt.show() plt.figure(figsize=(9,5)) for mykdv in all_kdv_dx_s: plt.plot(mykdv.x, mykdv.B, label=mykdv.dx_s) # plt.xlim((162200, 163600)) plt.ylim((-65, 40)) plt.xlim((165000, 185000)) plt.legend() plt.figure(figsize=(9,5)) for mykdv in all_kdv_dx_w: plt.plot(mykdv.x, mykdv.B, label=mykdv.dx_s) plt.legend() plt.show() plt.figure(figsize=(9,5)) for mykdv in all_kdv_dx_w: plt.plot(mykdv.x, mykdv.B, label=mykdv.dx_s) plt.legend() plt.ylim((-40, 10)) plt.xlim((135000, 170000)) # Compute the errors X = np.arange(0,L_d, 10.) nx = X.shape[0] ndx = len(dxs) solns = np.zeros((ndx, nx)) for ii, mykdv in enumerate(all_kdv_dx_w): Fx = pchip(mykdv.x, mykdv.B) solns[ii,:] = Fx(X) # Compute the error between each solution #err = np.diff(solns, axis=0) err = solns - solns[-1,:] err_rms_w = np.linalg.norm(err, ord=2, axis=1) # L2-norm #err_rms_w = np.sqrt(np.mean(err**2,axis=1)) solns = np.zeros((ndx, nx)) for ii, mykdv in enumerate(all_kdv_dx_s): Fx = pchip(mykdv.x, mykdv.B) solns[ii,:] = Fx(X) # Compute the error between each solution #err = np.diff(solns, axis=0) err = solns - solns[-1,:] err_rms_s = np.linalg.norm(err, ord=2, axis=1) # L2-norm #err_rms_s = np.sqrt(np.mean(err**2,axis=1)) plt.figure(figsize=(9,8)) plt.loglog(dxs[:-1],err_rms_s[:-1],'ko') plt.loglog(dxs[:-1],err_rms_w[:-1],'s', color='0.5') plt.xlim(2e1,2e3) plt.ylim(1e1,2e3) plt.grid(b=True) x0 = np.array([50,100.]) plt.plot(x0, 100/x0[0]**2*x0**2, 'k--') plt.plot(x0, 100/x0[0]**1*x0**1, 'k:') plt.ylabel('L2-norm Error [m]') plt.xlabel('$\Delta x$ [m]') alpha_s = -2*all_kdv_dx_s[0].c1*all_kdv_dx_s[0].r10 beta_s = -1*all_kdv_dx_s[0].r01 alpha_w = -2*all_kdv_dx_w[0].c1*all_kdv_dx_w[0].r10 beta_w = -1*all_kdv_dx_w[0].r01 plt.legend((r'$\alpha$ = (%3.4f,%3.4f), $\beta$ = (%3.4f,%3.4f)'%(min(alpha_s), max(alpha_s), min(beta_s), max(beta_s)), r'$\alpha$ = (%3.4f,%3.4f), $\beta$ = (%3.4f,%3.4f)'%(min(alpha_w), max(alpha_w), min(beta_w), max(beta_w))), loc='lower right') # Delta t comparison dts = [20,10.,5,2.5,1.25,0.6,0.3] dx = 50. all_kdv_dt_w = [] all_kdv_dt_s = [] for dt in dts: print(' ') print('Running dt={}'.format(dt)) print(' ') mykdv, B = run_kdv( (betas_w, bathy_params, a0, L_d, mode, nu_H, dx, runtime, dt, Lw_w)) all_kdv_dt_w.append(mykdv) mykdv, B = run_kdv( (betas_s, bathy_params, a0, L_d, mode, nu_H, dx, runtime, dt, Lw_s)) all_kdv_dt_s.append(mykdv) print(' ') print('Completed dt={}'.format(dt)) print(' ') plt.figure(figsize=(9,5)) for mykdv in all_kdv_dt_s: plt.plot(mykdv.x, mykdv.B, label=mykdv.dt_s) plt.legend() plt.show() plt.figure(figsize=(9,5)) for mykdv in all_kdv_dt_s: plt.plot(mykdv.x, mykdv.B, label=mykdv.dt_s) plt.legend() plt.ylim((-50, 30)) plt.xlim((195000, 210000)) plt.figure(figsize=(9,5)) for mykdv in all_kdv_dt_w: plt.plot(mykdv.x, mykdv.B, label=mykdv.dt_s) plt.legend() plt.show() plt.figure(figsize=(9,5)) for mykdv in all_kdv_dt_w: plt.plot(mykdv.x, mykdv.B, label=mykdv.dt_s) plt.legend() plt.ylim((-30, 1)) plt.xlim((175000, 205000)) # Compute the errors X = np.arange(0,L_d, 10.) 
nx = X.shape[0] ndx = len(dts) solns = np.zeros((ndx, nx)) for ii, mykdv in enumerate(all_kdv_dt_w): print(ii) Fx = pchip(mykdv.x, mykdv.B) solns[ii,:] = Fx(X) # Compute the error between each solution #err = np.diff(solns, axis=0) err = solns - solns[-1,:] err_rms_w_t = np.linalg.norm(err, ord=2, axis=1) # L2-norm #err_rms_w = np.sqrt(np.mean(err**2,axis=1)) solns = np.zeros((ndx, nx)) for ii, mykdv in enumerate(all_kdv_dt_s): print(ii) Fx = pchip(mykdv.x, mykdv.B) solns[ii,:] = Fx(X) # Compute the error between each solution #err = np.diff(solns, axis=0) err = solns - solns[-1,:] err_rms_s_t = np.linalg.norm(err, ord=2, axis=1) # L2-norm #err_rms_s = np.sqrt(np.mean(err**2,axis=1)) plt.figure(figsize=(12,8)) ax=plt.subplot(121) plt.loglog(dxs[:-1],err_rms_s[:-1],'ko', markersize=6) plt.loglog(dxs[:-1],err_rms_w[:-1],'s', color='0.5', markersize=4) plt.xlim(2e1,2e3) plt.ylim(1e0,2e3) plt.grid(b=True) x0 = np.array([50,100.]) plt.plot(x0, 100/x0[0]**2*x0**2, 'k--') plt.plot(x0, 100/x0[0]**1*x0**1, 'k:') plt.ylabel('L2-norm Error [m]') plt.xlabel('$\Delta x$ [m]') alpha_s = -2*all_kdv_dx_s[0].c1*all_kdv_dx_s[0].r10 beta_s = -1*all_kdv_dx_s[0].r01 alpha_w = -2*all_kdv_dx_w[0].c1*all_kdv_dx_w[0].r10 beta_w = -1*all_kdv_dx_w[0].r01 plt.legend((r'$\alpha$ = (%3.3f, %3.3f), $\beta$ = (%3.0f, %3.0f)'%(min(alpha_s), max(alpha_s), min(beta_s), max(beta_s)), r'$\alpha$ = (%3.3f, %3.3f), $\beta$ = (%3.0f, %3.0f)'%(min(alpha_w), max(alpha_w), min(beta_w), max(beta_w))), loc='lower right') plt.text(0.05,0.95,'(a)',transform=ax.transAxes) ax=plt.subplot(122) plt.loglog(dts[:-1],err_rms_s_t[:-1],'kd', markersize=6) plt.loglog(dts[:-1],err_rms_w_t[:-1],'s', color='0.5', markersize=4) plt.xlim(0,0.5e2) plt.ylim(1e-2,1e3) plt.grid(b=True) x0 = np.array([5,20]) plt.plot(x0, 10/x0[0]**2*x0**2, 'k--') plt.plot(x0, 10/x0[0]**1*x0**1, 'k:') #plt.ylabel('L2-norm Error [m]') plt.xlabel('$\Delta t$ [s]') plt.text(0.05,0.95,'(b)',transform=ax.transAxes) alpha_s = -2*all_kdv_dt_s[0].c1*all_kdv_dt_s[0].r10 beta_s = -1*all_kdv_dt_s[0].r01 alpha_w = -2*all_kdv_dt_w[0].c1*all_kdv_dt_w[0].r10 beta_w = -1*all_kdv_dt_w[0].r01 plt.legend((r'$\alpha$ = (%3.3f, %3.3f), $\beta$ = (%3.0f, %3.0f)'%(min(alpha_s), max(alpha_s), min(beta_s), max(beta_s)), r'$\alpha$ = (%3.3f, %3.3f), $\beta$ = (%3.0f, %3.0f)'%(min(alpha_w), max(alpha_w), min(beta_w), max(beta_w))), loc='lower right') # plt.savefig('../FIGURES/vkdv_convergence_dxdt.png',dpi=150) # plt.savefig('../FIGURES/vkdv_convergence_dxdt.pdf',dpi=150)
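A convenient follow-up to the error curves above is an explicit estimate of the observed order of accuracy. A small helper sketch, assuming the dxs, dts, and err_rms_* arrays computed above (the function name is mine, not from the notebook):

import numpy as np

def observed_order(steps, errors):
    # p is estimated from successive pairs: p ≈ log(e_i/e_{i+1}) / log(h_i/h_{i+1})
    steps = np.asarray(steps, dtype=float)
    errors = np.asarray(errors, dtype=float)
    return np.log(errors[:-1] / errors[1:]) / np.log(steps[:-1] / steps[1:])

# e.g. observed_order(dxs[:-1], err_rms_w[:-1]); values near 2 would match the dashed second-order guide line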
Random Forest Classification Random Forest The fundamental idea behind a random forest is to combine many decision trees into a single model. Individually, predictions made by decision trees (or humans) may not be accurate, but combined together, the predictions will be closer to the mark on average. Pros - can handle large datasets - can handle missing values - less influenced by outliers in the data - no assumptions about underlying distributions in the data - can implicitly handle collinearity among highly similar features - works well with categorical and numerical features, mixing different value ranges Cons - its robustness makes it a more complex model that is tougher to analyze in detail - not the best choice for determining feature/target relationships and effects when working with highly similar features Model Set Up Steps - load the data - determine the regression or classification target - inspect, clean, and organize the data - check for and handle outliers - encode the data if necessary - set the features and target - train/test split the data - scale the data if necessary - build the model, fit it on the data, run the model - run metrics, analyze and view the results, adjust parameters, and repeat until satisfied... Random Forest Classification 1 dependent variable (binary), 1+ independent variables (interval, ratio, or categorical) ![photo](https://upload.wikimedia.org/wikipedia/commons/7/76/Random_forest_diagram_complete.png) - classification predictor - generates reasonable predictions across a wide range of data while requiring little configuration Import + Inspect
### imports ###
import pandas as pd
import numpy as np
import sklearn

df = pd.read_csv('https://raw.githubusercontent.com/CVanchieri/CS_Notes/main/Classification_Notes/bill_authentication.csv')  # read in the file
print('data frame shape:', df.shape)  # show the data frame shape
df.head()  # show the data frame

### inspecting the data ###
print('--- INSPECTING THE DATA --- ')
print('--- columns --- ')
print(df.columns)
print('--- types --- ')
print(df.dtypes)
print('--- NA counts --- ')
print(df.isna().sum())
# print('--- object descriptions --- ')
# print(df.describe(include=object))
print('--- numericals descriptions --- ')
df.describe()

### view basic feature correlations ###
print('--- feature correlations ---')
df.corr()

### view basic feature correlations in a heatmap ###
import seaborn as sns
import matplotlib.pyplot as plt
f, ax = plt.subplots(1, 1, figsize=(10, 7))
print('--- feature correlations heatmap ---')
sns.heatmap(df.corr(), cmap='Wistia', annot=True)

### view bar plots for each feature vs. target ###
import matplotlib.pyplot as plt
target_ = 'Class'  # set the target
features_ = df.iloc[:, 0:4]  # set the features
print('--- bar plots ---')
for feature in features_:
    f, ax = plt.subplots(1, 1, figsize=(10, 7))
    ax = plt.gca()
    ax.bar(df[target_], df[feature])
    ax.set_xlabel(target_)
    ax.set_ylabel(feature)
    ax.set_title(f'''{target_} vs {feature}''')
    plt.show()
--- bar plots ---
Encode + Clean + Organize
### encoding not necessary with this example, all are numericals ###

### check for outliers in the data ###
import matplotlib.pyplot as plt
# view each feature in a boxplot
for column in df:
    plt.figure()  # plot figure
    f, ax = plt.subplots(1, 1, figsize=(10, 7))
    df.boxplot([column])  # set data

### function to find outliers in the data ###
def outlier_zscore(data):
    global outliers, zscore
    outliers = []
    zscore = []
    threshold = 3.5  # set threshold
    mean = np.mean(data)
    std = np.std(data)
    for i in data:
        z_score = (i - mean)/std  # calculate the z_score
        zscore.append(z_score)  # append the score to the zscore
        if np.abs(z_score) > threshold:
            outliers.append(i)  # append the value to the outliers
    print(outliers)
    return len(outliers), outliers

### run each feature 'wanted' through the function ###
print('--- possible outliers --- ')
Variance_outliers_number, Variance_outliers = outlier_zscore(df.Variance)
Skewness_outliers_number, Skewness_outliers = outlier_zscore(df.Skewness)
Curtosis_outliers_number, Curtosis_outliers = outlier_zscore(df.Curtosis)
Entropy_outliers_number, Entropy_outliers = outlier_zscore(df.Entropy)
Class_outliers_number, Class_outliers = outlier_zscore(df.Class)

### removal of outliers per feature ###
for num, i in enumerate(df['Curtosis']):  # capping the outliers of 'Curtosis'
    if i in Curtosis_outliers:
        df['Curtosis'][num] = 13.5  # 3.5 under the lowest outlier
for num, i in enumerate(df['Entropy']):  # capping the outliers of 'Entropy'
    if i in Entropy_outliers:
        df['Entropy'][num] = -5.5  # 3.5 under the lowest outlier
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:4: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy after removing the cwd from sys.path. /usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:7: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy import sys
Random Forest Classification - GridSearch CV - RandomSearch CV
### copy the data frame ### df1 = df.copy() ### split the data into features & target sets ### X = df1.iloc[:, 0:4].values # set the features y = df1.iloc[:, 4].values # set the target print('--- data shapes --- ') print('X shape:', X.shape) print('y shape:', y.shape) ### set the train test split parameters ### from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) # split 80/20 ### feature scaling ### from sklearn.preprocessing import StandardScaler sc = StandardScaler() # initiate the scalar X_train = sc.fit_transform(X_train) # fit transform the data with scalar X_test = sc.transform(X_test) # fit transform the data with scalar ### random forest classifier ### from sklearn.ensemble import RandomForestClassifier from sklearn import metrics model = RandomForestClassifier() model.fit(X_train, y_train) y_pred = model.predict(X_test) #### create data frame of predictions and results ### y_pred_df = pd.DataFrame(y_pred, columns=["Predicted_Values" ]) y_test_df = pd.DataFrame(np.array(y_test), columns=["Real_Values"]) df_final = pd.concat([y_test_df , y_pred_df] , axis=1) print('--- real values vs predicted values ---') print(df_final.head()) ### get the model metrics ### print('--- model metrics ---') print('mean absolute error:', metrics.mean_absolute_error(y_test, y_pred)) # mae print('mean squared error:', metrics.mean_squared_error(y_test, y_pred)) # mse print('root mean squared error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) # rmse score = metrics.r2_score(y_test , y_pred) # get the r2 score print("r2 score = {}".format(score)) # show the r2 score print('model score=', model.score(X_train, y_train)) # show the model score print("model accuracy= {}%".format(score * 100)) # show the model accuracy print('--- confusion matrix ---') print(metrics.confusion_matrix(y_test,y_pred)) # confusion matrix print('--- classification report ---') print(metrics.classification_report(y_test,y_pred)) # classificatin report print('model accuracy score=', metrics.accuracy_score(y_test, y_pred)) # model accuracy ### visualize the model prediction accuracy ### import seaborn as sns import matplotlib.pyplot as plt ### configure the plot ### print('--- distplot accuracy --- ') f, ax = plt.subplots(1, 1, figsize = (10, 7)) ax1 = sns.distplot(y_test, hist=False, color="b", label="Actual Values") sns.distplot(y_pred, hist=False, color="r", label="Predicted Values" , axlabel='Charges', ax=ax1) plt.legend()
--- distplot accuracy ---
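One quick diagnostic the notebook does not show is the forest's impurity-based feature importances; a minimal sketch assuming the fitted `model` and `df1` from above:

import pandas as pd
# rank the four input features by their impurity-based importance in the fitted forest
importances = pd.Series(model.feature_importances_, index=df1.columns[0:4])
print(importances.sort_values(ascending=False))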
GridSearch CV
### copy the data frame ### df2 = df.copy() ### split the data into features & target sets ### # for single regression select 1 feature X = df2.iloc[:, 0:4].values # set the features y = df2.iloc[:, 4].values # set the target print('--- data shapes --- ') print('X shape:', X.shape) print('y shape:', y.shape) ### set the train test split parameters ### from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) # split 80/20 ### feature scaling ### from sklearn.preprocessing import StandardScaler sc = StandardScaler() # initiate the scalar X_train = sc.fit_transform(X_train) # fit transform the data with scalar X_test = sc.transform(X_test) # fit transform the data with scalar ### random forest classifier + gridsearch CV model ### from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import GridSearchCV model1 = RandomForestClassifier() param_grid = { # create the param grid 'n_estimators': [20, 100, 200], 'max_features': ['auto', 'sqrt', 'log2'], 'max_leaf_nodes' : [2, 6, 10], 'max_depth' : [5, 15, 25], 'min_samples_split' : [2, 10, 15], # 'bootstrap': [True, False], # 'ccp_alpha': [0.0, 0.25, 0.50], # 'criterion': 'mse', # 'max_samples': [2, 10, 15], # 'min_impurity_decrease': [0.0, 0.25, 0.50], # 'min_impurity_split': [2, 10, 15], # 'min_samples_leaf': [1, 5, 10], # 'min_weight_fraction_leaf': [0.0, 0.25, 0.50], # 'n_jobs': [1, 2, 5], # 'oob_score': [True, False], # 'random_state': [0, 2, 4], # 'verbose': [1], # 'warm_start': [True, False] } CV_rfc = GridSearchCV(estimator=model1, param_grid=param_grid, cv=3) print('--- model runtime --- ') %time CV_rfc.fit(X_train, y_train) print('--- best params --- ') CV_rfc.best_params_ ### random forest classifier + grid best params ### from sklearn.ensemble import RandomForestClassifier from sklearn import metrics model1 = RandomForestClassifier( max_depth= 25, max_features= 'log2', max_leaf_nodes= 10, min_samples_split= 2, n_estimators= 20 ) print('--- model runtime --- ') %time model1.fit(X_train, y_train) y_pred = model1.predict(X_test) #### create data frame of predictions and results ### y_pred_df = pd.DataFrame(y_pred, columns=["Predicted_Values" ]) y_test_df = pd.DataFrame(np.array(y_test), columns=["Real_Values"]) df_final = pd.concat([y_test_df , y_pred_df] , axis=1) print('--- real values vs predicted values ---') print(df_final.head()) ### get the model1 metrics ### print('--- model metrics ---') print('mean absolute error:', metrics.mean_absolute_error(y_test, y_pred)) # mae print('mean squared error:', metrics.mean_squared_error(y_test, y_pred)) # mse print('root mean squared error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred))) # rmse score = metrics.r2_score(y_test , y_pred) # get the r2 score print("r2 score = {}".format(score)) # show the r2 score print('model score=', model1.score(X_train, y_train)) # show the model score print("model accuracy= {}%".format(score * 100)) # show the model accuracy print('--- confusion matrix ---') print(metrics.confusion_matrix(y_test,y_pred)) # confusion matrix print('--- classification report ---') print(metrics.classification_report(y_test,y_pred)) # classificatin report print('model1 accuracy score=', metrics.accuracy_score(y_test, y_pred)) # model accuracy ### visualize the model prediction accuracy ### import seaborn as sns import matplotlib.pyplot as plt ### configure the plot ### print('--- distplot accuracy --- ') f, ax = plt.subplots(1, 1, figsize = (10, 7)) ax1 = 
sns.distplot(y_test, hist=False, color="b", label="Actual Values") sns.distplot(y_pred, hist=False, color="r", label="Predicted Values" , axlabel='Charges', ax=ax1) plt.legend()
--- distplot accuracy ---
RandomSearch CV
### copy the data frame ###
df3 = df.copy()

### split the data into features & target sets ###
# select the first four columns as features and the fifth column as the target
X = df3.iloc[:, 0:4].values  # set the features
y = df3.iloc[:, 4].values    # set the target
print('--- data shapes --- ')
print('X shape:', X.shape)  # show the shape
print('y shape:', y.shape)  # show the shape

### set the train test split parameters ###
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)  # split 80/20

### feature scaling ###
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()                 # initialize the scaler
X_train = sc.fit_transform(X_train)   # fit the scaler on the training data and transform it
X_test = sc.transform(X_test)         # transform the test data with the fitted scaler

### random forest classifier + randomized search CV model ###
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
model2 = RandomForestClassifier()
param_grid = {  # create the param grid
    'n_estimators': [20, 100, 200],
    'max_features': ['auto', 'sqrt', 'log2'],
    'max_leaf_nodes': [2, 6, 10],
    'max_depth': [5, 15, 25],
    'min_samples_split': [2, 10, 15],
    # 'bootstrap': [True, False],
    # 'ccp_alpha': [0.0, 0.25, 0.50],
    # 'criterion': 'mse',
    # 'max_samples': [2, 10, 15],
    # 'min_impurity_decrease': [0.0, 0.25, 0.50],
    # 'min_impurity_split': [2, 10, 15],
    # 'min_samples_leaf': [1, 5, 10],
    # 'min_weight_fraction_leaf': [0.0, 0.25, 0.50],
    # 'n_jobs': [1, 2, 5],
    # 'oob_score': [True, False],
    # 'random_state': [0, 2, 4],
    # 'verbose': [1],
    # 'warm_start': [True, False]
}
CV_rfc = RandomizedSearchCV(model2, param_grid, cv=3)
%time CV_rfc.fit(X_train, y_train)
CV_rfc.best_params_

### random forest classifier + best params from the random search ###
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
model2 = RandomForestClassifier(
    max_depth=15,
    max_features='auto',
    max_leaf_nodes=10,
    min_samples_split=15,
    n_estimators=20
)
print('--- model runtime --- ')
%time model2.fit(X_train, y_train)
y_pred = model2.predict(X_test)

#### create data frame of predictions and results ###
y_pred_df = pd.DataFrame(y_pred, columns=["Predicted_Values"])
y_test_df = pd.DataFrame(np.array(y_test), columns=["Real_Values"])
df_final = pd.concat([y_test_df, y_pred_df], axis=1)
print('--- real values vs predicted values ---')
print(df_final.head())

### get the model2 metrics ###
print('--- model metrics ---')
print('mean absolute error:', metrics.mean_absolute_error(y_test, y_pred))              # MAE
print('mean squared error:', metrics.mean_squared_error(y_test, y_pred))                # MSE
print('root mean squared error:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))  # RMSE
score = metrics.r2_score(y_test, y_pred)             # get the r2 score
print("r2 score = {}".format(score))                 # show the r2 score
print('model score=', model2.score(X_train, y_train))  # show the model score
print("model accuracy= {}%".format(score * 100))     # show the model accuracy
print('--- confusion matrix ---')
print(metrics.confusion_matrix(y_test, y_pred))      # confusion matrix
print('--- classification report ---')
print(metrics.classification_report(y_test, y_pred))  # classification report
print('model2 accuracy score=', metrics.accuracy_score(y_test, y_pred))  # model accuracy

### visualize the model prediction accuracy ###
import seaborn as sns
import matplotlib.pyplot as plt

### configure the plot ###
print('--- distplot accuracy --- ')
f, ax = plt.subplots(1, 1, figsize=(10, 7))
ax1 = sns.distplot(y_test, hist=False, color="b", label="Actual Values")
sns.distplot(y_pred, hist=False, color="r", label="Predicted Values", axlabel='Charges', ax=ax1)
plt.legend()
--- distplot accuracy ---
MIT
Classification_Notes/SKlearn_RandomForest_Classification.ipynb
CVanchieri/CS_Notes
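A follow-up note on the search above: with a small discrete grid, `RandomizedSearchCV` samples only 10 candidate combinations by default (`n_iter=10`). If you want the random search to draw from ranges instead of fixed lists, scipy distributions can be passed as the parameter values. A minimal sketch (the ranges here are illustrative, not tuned):

```python
from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

param_dists = {
    'n_estimators': randint(20, 200),       # sampled uniformly from [20, 200)
    'max_depth': randint(5, 25),
    'min_samples_split': randint(2, 15),
}
search = RandomizedSearchCV(RandomForestClassifier(), param_dists,
                            n_iter=20, cv=3, random_state=0)
# search.fit(X_train, y_train); search.best_params_
```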
Setup keras-retinanet
!git clone https://github.com/fizyr/keras-retinanet.git
%cd keras-retinanet/
!pip install .
!python setup.py build_ext --inplace
Cloning into 'keras-retinanet'... remote: Enumerating objects: 4712, done. remote: Total 4712 (delta 0), reused 0 (delta 0), pack-reused 4712 Receiving objects: 100% (4712/4712), 14.43 MiB | 36.84 MiB/s, done. Resolving deltas: 100% (3128/3128), done. /content/keras-retinanet Processing /content/keras-retinanet Requirement already satisfied: keras in /usr/local/lib/python3.6/dist-packages (from keras-retinanet==0.5.0) (2.2.4) Collecting keras-resnet (from keras-retinanet==0.5.0) Downloading https://files.pythonhosted.org/packages/76/d4/a35cbd07381139dda4db42c81b88c59254faac026109022727b45b31bcad/keras-resnet-0.2.0.tar.gz Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from keras-retinanet==0.5.0) (1.12.0) Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from keras-retinanet==0.5.0) (1.3.0) Requirement already satisfied: cython in /usr/local/lib/python3.6/dist-packages (from keras-retinanet==0.5.0) (0.29.9) Requirement already satisfied: Pillow in /usr/local/lib/python3.6/dist-packages (from keras-retinanet==0.5.0) (4.3.0) Requirement already satisfied: opencv-python in /usr/local/lib/python3.6/dist-packages (from keras-retinanet==0.5.0) (3.4.5.20) Requirement already satisfied: progressbar2 in /usr/local/lib/python3.6/dist-packages (from keras-retinanet==0.5.0) (3.38.0) Requirement already satisfied: keras-preprocessing>=1.0.5 in /usr/local/lib/python3.6/dist-packages (from keras->keras-retinanet==0.5.0) (1.0.9) Requirement already satisfied: keras-applications>=1.0.6 in /usr/local/lib/python3.6/dist-packages (from keras->keras-retinanet==0.5.0) (1.0.7) Requirement already satisfied: numpy>=1.9.1 in /usr/local/lib/python3.6/dist-packages (from keras->keras-retinanet==0.5.0) (1.16.4) Requirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from keras->keras-retinanet==0.5.0) (2.8.0) Requirement already satisfied: pyyaml in /usr/local/lib/python3.6/dist-packages (from keras->keras-retinanet==0.5.0) (3.13) Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from Pillow->keras-retinanet==0.5.0) (0.46) Requirement already satisfied: python-utils>=2.3.0 in /usr/local/lib/python3.6/dist-packages (from progressbar2->keras-retinanet==0.5.0) (2.3.0) Building wheels for collected packages: keras-retinanet, keras-resnet Building wheel for keras-retinanet (setup.py) ... [?25l[?25hdone Stored in directory: /root/.cache/pip/wheels/b2/9f/57/cb0305f6f5a41fc3c11ad67b8cedfbe9127775b563337827ba Building wheel for keras-resnet (setup.py) ... [?25l[?25hdone Stored in directory: /root/.cache/pip/wheels/5f/09/a5/497a30fd9ad9964e98a1254d1e164bcd1b8a5eda36197ecb3c Successfully built keras-retinanet keras-resnet Installing collected packages: keras-resnet, keras-retinanet Successfully installed keras-resnet-0.2.0 keras-retinanet-0.5.0 running build_ext cythoning keras_retinanet/utils/compute_overlap.pyx to keras_retinanet/utils/compute_overlap.c /usr/local/lib/python3.6/dist-packages/Cython/Compiler/Main.py:367: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! 
File: /content/keras-retinanet/keras_retinanet/utils/compute_overlap.pyx tree = Parsing.p_module(s, pxd, full_module_name) building 'keras_retinanet.utils.compute_overlap' extension creating build creating build/temp.linux-x86_64-3.6 creating build/temp.linux-x86_64-3.6/keras_retinanet creating build/temp.linux-x86_64-3.6/keras_retinanet/utils x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.6m -I/usr/local/lib/python3.6/dist-packages/numpy/core/include -c keras_retinanet/utils/compute_overlap.c -o build/temp.linux-x86_64-3.6/keras_retinanet/utils/compute_overlap.o In file included from /usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/ndarraytypes.h:1822:0, from /usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/ndarrayobject.h:12, from /usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/arrayobject.h:4, from keras_retinanet/utils/compute_overlap.c:598: /usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp] #warning "Using deprecated NumPy API, disable it with " \ ^~~~~~~ creating build/lib.linux-x86_64-3.6 creating build/lib.linux-x86_64-3.6/keras_retinanet creating build/lib.linux-x86_64-3.6/keras_retinanet/utils x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/keras_retinanet/utils/compute_overlap.o -o build/lib.linux-x86_64-3.6/keras_retinanet/utils/compute_overlap.cpython-36m-x86_64-linux-gnu.so copying build/lib.linux-x86_64-3.6/keras_retinanet/utils/compute_overlap.cpython-36m-x86_64-linux-gnu.so -> keras_retinanet/utils
Apache-2.0
RetinaNet_Video_Object_Detection.ipynb
thingumajig/colab-experiments
download model
#!curl -LJO --output snapshots/pretrained.h5 https://github.com/fizyr/keras-retinanet/releases/download/0.5.0/resnet50_coco_best_v2.1.0.h5
import urllib.request

PRETRAINED_MODEL = './snapshots/_pretrained_model.h5'
URL_MODEL = 'https://github.com/fizyr/keras-retinanet/releases/download/0.5.0/resnet50_coco_best_v2.1.0.h5'
urllib.request.urlretrieve(URL_MODEL, PRETRAINED_MODEL)
_____no_output_____
Apache-2.0
RetinaNet_Video_Object_Detection.ipynb
thingumajig/colab-experiments
inference modules
!pwd
# import os, sys
# sys.path.insert(0, 'keras-retinanet')

# show images inline
%matplotlib inline

# automatically reload modules when they have changed
%load_ext autoreload
%autoreload 2

import os
# os.environ['CUDA_VISIBLE_DEVICES'] = '0'

# import keras
import keras
from keras_retinanet import models
from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image
from keras_retinanet.utils.visualization import draw_box, draw_caption
from keras_retinanet.utils.colors import label_color

# import miscellaneous modules
import matplotlib.pyplot as plt
import cv2
import numpy as np
import time

# set tf backend to allow memory to grow, instead of claiming everything
import tensorflow as tf

def get_session():
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    return tf.Session(config=config)

# use this environment flag to change which GPU to use
# os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# set the modified tf session as backend in keras
keras.backend.tensorflow_backend.set_session(get_session())
/content/keras-retinanet
Apache-2.0
RetinaNet_Video_Object_Detection.ipynb
thingumajig/colab-experiments
load model
# %cd keras-retinanet/

model_path = os.path.join('snapshots', sorted(os.listdir('snapshots'), reverse=True)[0])
print(model_path)
print(os.path.isfile(model_path))

# load retinanet model
model = models.load_model(model_path, backbone_name='resnet50')
# model = models.convert_model(model)

# load label to names mapping for visualization purposes
labels_to_names = {
    0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus',
    6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant',
    11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat',
    16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear',
    22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag',
    27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard',
    32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove',
    36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle',
    40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl',
    46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli',
    51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake', 56: 'chair',
    57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table', 61: 'toilet',
    62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard',
    67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink',
    72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors',
    77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'
}
snapshots/_pretrained_model.h5 True WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Colocations handled automatically by placer.
Apache-2.0
RetinaNet_Video_Object_Detection.ipynb
thingumajig/colab-experiments
detect objects
def img_inference(img_path, threshold_score=0.8):
    image = read_image_bgr(img_path)

    # copy to draw on
    draw = image.copy()
    draw = cv2.cvtColor(draw, cv2.COLOR_BGR2RGB)

    # preprocess image for network
    image = preprocess_image(image)
    image, scale = resize_image(image)

    # process image
    start = time.time()
    boxes, scores, labels = model.predict_on_batch(np.expand_dims(image, axis=0))
    print("processing time: ", time.time() - start)

    # correct for image scale
    boxes /= scale

    # visualize detections
    for box, score, label in zip(boxes[0], scores[0], labels[0]):
        # scores are sorted so we can break
        if score < threshold_score:
            break

        color = label_color(label)
        b = box.astype(int)
        draw_box(draw, b, color=color)
        caption = "{} {:.3f}".format(labels_to_names[label], score)
        draw_caption(draw, b, caption)

    plt.figure(figsize=(10, 10))
    plt.axis('off')
    plt.imshow(draw)
    plt.show()

img_inference('examples/000000008021.jpg')

from tensorflow.python.client import device_lib

def get_available_gpus():
    local_device_protos = device_lib.list_local_devices()
    return [x.physical_device_desc for x in local_device_protos if x.device_type == 'GPU']

GPU = get_available_gpus()[-1][0:-1]
print(GPU)

import glob

def create_video(img_path, name='processed', img_ext='*.jpg', image_size=(1280, 720)):
    _name = name + '.mp4'
    # _cap = VideoCapture(0)
    _fourcc = cv2.VideoWriter_fourcc(*'MP4V')
    _out = cv2.VideoWriter(_name, _fourcc, 15.0, image_size)
    # out = cv2.VideoWriter('project.avi', cv2.VideoWriter_fourcc(*'DIVX'), 15, size)
    for filename in sorted(glob.glob(os.path.join(img_path, img_ext))):
        print(filename)
        img = cv2.imread(filename)
        _out.write(img)
        del img
    _out.release()

import unicodedata
import string

valid_filename_chars = f"-_.() {string.ascii_letters}{string.digits}"
char_limit = 255

def clean_filename(filename, whitelist=valid_filename_chars, replace=' '):
    # replace spaces
    for r in replace:
        filename = filename.replace(r, '_')

    # keep only valid ascii chars
    cleaned_filename = unicodedata.normalize('NFKD', filename).encode('ASCII', 'ignore').decode()

    # keep only whitelisted chars
    cleaned_filename = ''.join(c for c in cleaned_filename if c in whitelist)
    if len(cleaned_filename) > char_limit:
        print(f"Warning, filename truncated because it was over {char_limit}. Filenames may no longer be unique")
    return cleaned_filename[:char_limit]

import colorsys
import random
from tqdm import tqdm

N = len(labels_to_names)
HSV_tuples = [(x * 1.0 / N, 0.5, 0.5) for x in range(N)]
RGB_tuples = list(map(lambda x: tuple(255 * np.array(colorsys.hsv_to_rgb(*x))), HSV_tuples))
random.shuffle(RGB_tuples)

def object_detect_video(video_path, out_temp_dir='tmp', video_name='processed', threshold=0.6):
    cap = cv2.VideoCapture(video_path)

    if not os.path.exists(out_temp_dir):
        os.makedirs(out_temp_dir)

    tq = tqdm(total=1, unit="frame(s)")
    counter = 0
    sum_time = 0
    video_out = None
    while True:
        ret, draw = cap.read()
        if not ret:
            break

        bgr = cv2.cvtColor(draw, cv2.COLOR_RGB2BGR)

        # preprocess image for network
        image = preprocess_image(bgr)
        image, scale = resize_image(image)

        if counter == 0:
            height, width, channels = draw.shape
            # print(f'Shape: {width}X{height}')
            _name = video_name + '.mp4'
            _fourcc = cv2.VideoWriter_fourcc(*'MP4V')
            video_out = cv2.VideoWriter(_name, _fourcc, 20.0, (width, height))

        # process image
        start = time.time()
        boxes, scores, labels = model.predict_on_batch(np.expand_dims(image, axis=0))
        t = time.time() - start
        # print(f"frame:{counter} processing time: {t}")
        tq.total += 1
        # fancy way to give info without forcing a refresh
        tq.set_postfix(dir=f'frame {counter} time {sum_time}', refresh=False)
        tq.update(0)  # may trigger a refresh

        # correct for image scale
        boxes /= scale

        # visualize detections
        # draw_detections(image, boxes, scores, labels, color=None, label_to_name=None, score_threshold=0.5)
        for box, score, label in zip(boxes[0], scores[0], labels[0]):
            if score < threshold:
                continue

            color = label_color(label)
            b = box.astype(int)
            draw_box(draw, b, color=color)
            caption = f"{labels_to_names[label]} {score:.3f}"
            draw_caption(draw, b, caption)

        if sum_time > 0:
            cv2.putText(draw, "Processing time %.2fs (%.1ffps) AVG %.2fs (%.1ffps)" % (t, 1.0 / t, sum_time / counter, counter / sum_time),
                        (10, 70), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 7)
            cv2.putText(draw, "Processing time %.2fs (%.1ffps) AVG %.2fs (%.1ffps)" % (t, 1.0 / t, sum_time / counter, counter / sum_time),
                        (10, 70), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 3)

        # cv2.imwrite(os.path.join(out_temp_dir, f'img{counter:08d}.jpg'), draw)
        video_out.write(draw)

        counter = counter + 1
        sum_time += t

    cap.release()
    video_out.release()
    cv2.destroyAllWindows()
    tq.set_postfix(dir=video_path)
    tq.close()

from google.colab import files

uploaded = files.upload()

for fn in uploaded.keys():
    print(f'User uploaded file "{fn}" with length {len(uploaded[fn])} bytes')
    fn0 = clean_filename(fn)
    # with open(fn0, "wb") as df:
    #     df.write(uploaded[fn])
    #     df.close()

    object_detect_video(fn, f'{fn0}_tmp', video_name=f'{os.path.basename(fn0)}_processed', threshold=0.5)
    # create_video(f'{fn0}_tmp')
    files.download(f'{os.path.basename(fn0)}_processed.mp4')

# object_detect_video('Canada vs. Finland - Gold Medal Game - Game Highlights - IIHFWorlds 2019.mp4', 'video_tmp', video_name='processed2')
# sorted(glob.glob('/content/keras-retinanet/video_tmp/*.jpg'))
# create_video('/content/keras-retinanet/video_tmp')
_____no_output_____
Apache-2.0
RetinaNet_Video_Object_Detection.ipynb
thingumajig/colab-experiments
Project: Part of Speech Tagging with Hidden Markov Models
---

Introduction

Part of speech tagging is the process of determining the syntactic category of a word from the words in its surrounding context. It is often used to help disambiguate natural language phrases because it can be done quickly with high accuracy. Tagging can be used for many NLP tasks like determining correct pronunciation during speech synthesis (for example, _dis_-count as a noun vs dis-_count_ as a verb), for information retrieval, and for word sense disambiguation.

In this notebook, we'll use the [Pomegranate](http://pomegranate.readthedocs.io/) library to build a hidden Markov model for part of speech tagging using a "universal" tagset. Hidden Markov models have been able to achieve [>96% tag accuracy with larger tagsets on realistic text corpora](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf). Hidden Markov models have also been used for speech recognition and speech generation, machine translation, gene recognition for bioinformatics, human gesture recognition for computer vision, and more.

![](_post-hmm.png)

The Road Ahead

We will complete this project in 3 steps mentioned below. The section on Step 4 includes references & resources you can use to further explore HMM taggers.

- [Step 1](#Step-1:-Read-and-preprocess-the-dataset): Review the provided interface to load and access the text corpus
- [Step 2](#Step-2:-Build-a-Most-Frequent-Class-tagger): Build a Most Frequent Class tagger to use as a baseline
- [Step 3](#Step-3:-Build-an-HMM-tagger): Build an HMM Part of Speech tagger and compare to the MFC baseline
- [Step 4](#Step-4:-[Optional]-Improving-model-performance): (Optional) Improve the HMM tagger

**Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers, tests
%autoreload 1

# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np

from IPython.core.display import HTML
from itertools import chain
from collections import Counter, defaultdict
from helpers import show_model, Dataset
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Step 1: Read and preprocess the dataset
---

We'll start by reading in a text corpus and splitting it into a training and testing dataset. The data set is a copy of the [Brown corpus](https://en.wikipedia.org/wiki/Brown_Corpus) (originally from the [NLTK](https://www.nltk.org/) library) that has already been pre-processed to only include the [universal tagset](https://arxiv.org/pdf/1104.2086.pdf). We should get slightly higher accuracy using this simplified tagset than the same model would achieve on a larger tagset like the full [Penn treebank tagset](https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html).

The `Dataset` class provided in helpers.py will read and parse the corpus. You can generate your own datasets compatible with the reader by writing them to the following format. The dataset is stored in plaintext as a collection of words and corresponding tags. Each sentence starts with a unique identifier on the first line, followed by one tab-separated word/tag pair on each following line. Sentences are separated by a single blank line.

Example from the Brown corpus:

```
b100-38532
Perhaps	ADV
it	PRON
was	VERB
right	ADJ
;	.
;	.

b100-35577
...
```
data = Dataset("tags-universal.txt", "brown-universal.txt", train_test_split=0.8)

print("There are {} sentences in the corpus.".format(len(data)))
print("There are {} sentences in the training set.".format(len(data.training_set)))
print("There are {} sentences in the testing set.".format(len(data.testing_set)))

assert len(data) == len(data.training_set) + len(data.testing_set), \
       "The number of sentences in the training set + testing set should sum to the number of sentences in the corpus"
There are 57340 sentences in the corpus. There are 45872 sentences in the training set. There are 11468 sentences in the testing set.
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
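As a concrete illustration of the corpus format described above, here is a minimal, hypothetical reader sketch, independent of the provided `Dataset` class, that parses the id / tab-separated word-tag / blank-line layout (the `read_corpus` helper name is ours, not part of the project code):

```python
from collections import namedtuple

Sentence = namedtuple("Sentence", "words tags")

def read_corpus(path):
    """Parse sentences keyed by id from the plaintext word/tag format."""
    sentences = {}
    with open(path, "r") as f:
        for block in f.read().strip().split("\n\n"):  # sentences are blank-line separated
            lines = block.split("\n")
            key = lines[0]                            # first line is the sentence id
            pairs = [line.split("\t") for line in lines[1:]]  # tab-separated word/tag pairs
            words, tags = zip(*pairs)
            sentences[key] = Sentence(words, tags)
    return sentences

# e.g. read_corpus("brown-universal.txt")["b100-38532"].tags -> ('ADV', 'PRON', ...)
```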
The Dataset Interface

We can access (mostly) immutable references to the dataset through a simple interface provided through the `Dataset` class, which represents an iterable collection of sentences along with easy access to partitions of the data for training & testing. Review the reference below to make sure you understand the interface before moving on to the next step.

```
Dataset-only Attributes:
    training_set - reference to a Subset object containing the samples for training
    testing_set - reference to a Subset object containing the samples for testing

Dataset & Subset Attributes:
    sentences - a dictionary with an entry {sentence_key: Sentence()} for each sentence in the corpus
    keys - an immutable ordered (not sorted) collection of the sentence_keys for the corpus
    vocab - an immutable collection of the unique words in the corpus
    tagset - an immutable collection of the unique tags in the corpus
    X - returns an array of words grouped by sentences ((w11, w12, w13, ...), (w21, w22, w23, ...), ...)
    Y - returns an array of tags grouped by sentences ((t11, t12, t13, ...), (t21, t22, t23, ...), ...)
    N - returns the number of distinct samples (individual words or tags) in the dataset

Methods:
    stream() - returns a flat iterable over all (word, tag) pairs across all sentences in the corpus
    __iter__() - returns an iterable over the data as (sentence_key, Sentence()) pairs
    __len__() - returns the number of sentences in the dataset
```

For example, consider a Subset, `subset`, of the sentences `{"s0": Sentence(("See", "Spot", "run"), ("VERB", "NOUN", "VERB")), "s1": Sentence(("Spot", "ran"), ("NOUN", "VERB"))}`. The subset will have these attributes:

```
subset.keys == {"s1", "s0"}                             # unordered
subset.vocab == {"See", "run", "ran", "Spot"}           # unordered
subset.tagset == {"VERB", "NOUN"}                       # unordered
subset.X == (("Spot", "ran"), ("See", "Spot", "run"))   # order matches .keys
subset.Y == (("NOUN", "VERB"), ("VERB", "NOUN", "VERB"))  # order matches .keys
subset.N == 7       # there are a total of seven observations over all sentences
len(subset) == 2    # because there are two sentences
```

**Note:** The `Dataset` class is _convenient_, but it is **not** efficient. It is not suitable for huge datasets because it stores multiple redundant copies of the same data.

Sentences

`Dataset.sentences` is a dictionary of all sentences in the training corpus, each keyed to a unique sentence identifier. Each `Sentence` is itself an object with two attributes: a tuple of the words in the sentence named `words` and a tuple of the tag corresponding to each word named `tags`.
key = 'b100-38532'
print("Sentence: {}".format(key))
print("words:\n\t{!s}".format(data.sentences[key].words))
print("tags:\n\t{!s}".format(data.sentences[key].tags))
Sentence: b100-38532 words: ('Perhaps', 'it', 'was', 'right', ';', ';') tags: ('ADV', 'PRON', 'VERB', 'ADJ', '.', '.')
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
**Note:** The underlying iterable sequence is **unordered** over the sentences in the corpus; it is not guaranteed to return the sentences in a consistent order between calls. Use `Dataset.stream()`, `Dataset.keys`, `Dataset.X`, or `Dataset.Y` attributes if you need ordered access to the data.

Counting Unique Elements

We can access the list of unique words (the dataset vocabulary) via `Dataset.vocab` and the unique list of tags via `Dataset.tagset`.
print("There are a total of {} samples of {} unique words in the corpus." .format(data.N, len(data.vocab))) print("There are {} samples of {} unique words in the training set." .format(data.training_set.N, len(data.training_set.vocab))) print("There are {} samples of {} unique words in the testing set." .format(data.testing_set.N, len(data.testing_set.vocab))) print("There are {} words in the test set that are missing in the training set." .format(len(data.testing_set.vocab - data.training_set.vocab))) assert data.N == data.training_set.N + data.testing_set.N, \ "The number of training + test samples should sum to the total number of samples"
There are a total of 1161192 samples of 56057 unique words in the corpus. There are 928458 samples of 50536 unique words in the training set. There are 232734 samples of 25112 unique words in the testing set. There are 5521 words in the test set that are missing in the training set.
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Accessing word and tag Sequences

The `Dataset.X` and `Dataset.Y` attributes provide access to ordered collections of matching word and tag sequences for each sentence in the dataset.
# accessing words with Dataset.X and tags with Dataset.Y
for i in range(2):
    print("Sentence {}:".format(i + 1), data.X[i])
    print()
    print("Labels {}:".format(i + 1), data.Y[i])
    print()
Sentence 1: ('Mr.', 'Podger', 'had', 'thanked', 'him', 'gravely', ',', 'and', 'now', 'he', 'made', 'use', 'of', 'the', 'advice', '.') Labels 1: ('NOUN', 'NOUN', 'VERB', 'VERB', 'PRON', 'ADV', '.', 'CONJ', 'ADV', 'PRON', 'VERB', 'NOUN', 'ADP', 'DET', 'NOUN', '.') Sentence 2: ('But', 'there', 'seemed', 'to', 'be', 'some', 'difference', 'of', 'opinion', 'as', 'to', 'how', 'far', 'the', 'board', 'should', 'go', ',', 'and', 'whose', 'advice', 'it', 'should', 'follow', '.') Labels 2: ('CONJ', 'PRT', 'VERB', 'PRT', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'ADP', 'ADV', 'ADV', 'DET', 'NOUN', 'VERB', 'VERB', '.', 'CONJ', 'DET', 'NOUN', 'PRON', 'VERB', 'VERB', '.')
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Accessing (word, tag) Samples

The `Dataset.stream()` method returns an iterator that chains together every pair of (word, tag) entries across all sentences in the entire corpus.
# use Dataset.stream() (word, tag) samples for the entire corpus
print("\nStream (word, tag) pairs:\n")
for i, pair in enumerate(data.stream()):
    print("\t", pair)
    if i > 5:
        break
Stream (word, tag) pairs: ('Mr.', 'NOUN') ('Podger', 'NOUN') ('had', 'VERB') ('thanked', 'VERB') ('him', 'PRON') ('gravely', 'ADV') (',', '.')
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
For both our baseline tagger and the HMM model we'll build, we need to estimate tag & word frequencies from the counts of observations in the training corpus. The next several cells implement functions to compute several sets of these frequency counts.

Step 2: Build a Most Frequent Class tagger
---

Perhaps the simplest tagger (and a good baseline for tagger performance) is to simply choose the tag most frequently assigned to each word. This "most frequent class" tagger inspects each observed word in the sequence and assigns it the label that was most often assigned to that word in the corpus. (A tiny toy illustration of this idea follows the next code cell.)

IMPLEMENTATION: Pair Counts

The function below computes the joint frequency counts for two input sequences.
def pair_counts(sequences_A, sequences_B):
    """Return a dictionary keyed to each unique value in the first sequence list
    that counts the number of occurrences of the corresponding value from the
    second sequences list.

    For example, if sequences_A is tags and sequences_B is the corresponding words,
    then if 1244 sequences contain the word "time" tagged as a NOUN, then you should
    return a dictionary such that pair_counts[NOUN][time] == 1244
    """
    pair_dict = {}
    for tag in sequences_A:        # initialize one inner dictionary per tag
        pair_dict[tag] = {}
    for word, tag in sequences_B:  # count each (word, tag) co-occurrence
        if word in pair_dict[tag]:
            pair_dict[tag][word] = pair_dict[tag][word] + 1
        else:
            pair_dict[tag][word] = 1
    return pair_dict

# Calculate C(t_i, w_i)
emission_counts = pair_counts(data.tagset, data.stream())

assert len(emission_counts) == 12, \
       "Uh oh. There should be 12 tags in your dictionary."
assert max(emission_counts["NOUN"], key=emission_counts["NOUN"].get) == 'time', \
       "Hmmm...'time' is expected to be the most common NOUN."
HTML('<div class="alert alert-block alert-success">Your emission counts look good!</div>')
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
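Here is the tiny toy illustration of the "most frequent class" idea promised above, using made-up counts and only the standard library (the names here are illustrative, not part of the project code):

```python
from collections import Counter, defaultdict

toy_pairs = [("time", "NOUN"), ("time", "NOUN"), ("time", "VERB"), ("run", "VERB")]

tag_counts = defaultdict(Counter)
for word, tag in toy_pairs:
    tag_counts[word][tag] += 1  # count how often each tag labels each word

mfc = {word: counts.most_common(1)[0][0] for word, counts in tag_counts.items()}
print(mfc)  # {'time': 'NOUN', 'run': 'VERB'}
```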
IMPLEMENTATION: Most Frequent Class Tagger

Use the `pair_counts()` function and the training dataset to find the most frequent class label for each word in the training data, and populate the `mfc_table` below. The table keys should be words, and the values should be the appropriate tag string.

The `MFCTagger` class is provided to mock the interface of Pomegranate HMM models so that they can be used interchangeably.
# Create a lookup table mfc_table where mfc_table[word] contains the tag label most frequently assigned to that word
from collections import namedtuple

FakeState = namedtuple("FakeState", "name")

class MFCTagger:
    # NOTE: You should not need to modify this class or any of its methods
    missing = FakeState(name="<MISSING>")

    def __init__(self, table):
        self.table = defaultdict(lambda: MFCTagger.missing)
        self.table.update({word: FakeState(name=tag) for word, tag in table.items()})

    def viterbi(self, seq):
        """This method simplifies predictions by matching the Pomegranate viterbi() interface"""
        return 0., list(enumerate(["<start>"] + [self.table[w] for w in seq] + ["<end>"]))

# calculate the frequency of each tag being assigned to each word (hint: similar, but not
# the same as the emission probabilities) and use it to fill the mfc_table
word_counts = pair_counts(data.tagset, data.training_set.stream())

def getMaxFreq(word, counts):
    """Return the tag that labels `word` most often in the counts table."""
    maxFreq = -1
    maxFreqTag = None
    for tag, words in counts.items():
        if word in words and words[word] > maxFreq:
            maxFreq = words[word]
            maxFreqTag = tag
    return maxFreqTag

def GetVocabFrequencies(vocab, counts):
    """Map each word in the vocabulary to its most frequent tag."""
    word_freq = {}
    for word in vocab:
        word_freq[word] = getMaxFreq(word, counts)
    return word_freq

mfc_table = GetVocabFrequencies(data.training_set.vocab, word_counts)

# DO NOT MODIFY BELOW THIS LINE
mfc_model = MFCTagger(mfc_table)  # Create a Most Frequent Class tagger instance

assert len(mfc_table) == len(data.training_set.vocab), ""
assert all(k in data.training_set.vocab for k in mfc_table.keys()), ""
assert sum(int(k not in mfc_table) for k in data.testing_set.vocab) == 5521, ""
HTML('<div class="alert alert-block alert-success">Your MFC tagger has all the correct words!</div>')
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Making Predictions with a Model

The helper functions provided below interface with Pomegranate network models & the mocked MFCTagger to take advantage of the [missing value](http://pomegranate.readthedocs.io/en/latest/nan.html) functionality in Pomegranate through a simple sequence decoding function. Run these functions, then run the next cell to see some of the predictions made by the MFC tagger.
def replace_unknown(sequence):
    """Return a copy of the input sequence where each unknown word is replaced
    by the literal string value 'nan'. Pomegranate will ignore these values
    during computation.
    """
    return [w if w in data.training_set.vocab else 'nan' for w in sequence]

def simplify_decoding(X, model):
    """X should be a 1-D sequence of observations for the model to predict"""
    _, state_path = model.viterbi(replace_unknown(X))
    return [state[1].name for state in state_path[1:-1]]  # do not show the start/end state predictions
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Example Decoding Sequences with MFC Tagger
for key in data.testing_set.keys[:3]:
    print("Sentence Key: {}\n".format(key))
    print("Predicted labels:\n-----------------")
    print(simplify_decoding(data.sentences[key].words, mfc_model))
    print()
    print("Actual labels:\n--------------")
    print(data.sentences[key].tags)
    print("\n")
Sentence Key: b100-28144 Predicted labels: ----------------- ['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.'] Actual labels: -------------- ('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.') Sentence Key: b100-23146 Predicted labels: ----------------- ['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.'] Actual labels: -------------- ('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.') Sentence Key: b100-35462 Predicted labels: ----------------- ['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', '<MISSING>', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADV', 'NOUN', '.'] Actual labels: -------------- ('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Evaluating Model Accuracy

The function below will evaluate the accuracy of the MFC tagger on the collection of all sentences from a text corpus.
def accuracy(X, Y, model):
    """Calculate the prediction accuracy by using the model to decode each sequence
    in the input X and comparing the prediction with the true labels in Y.

    The X should be an array whose first dimension is the number of sentences to test,
    and each element of the array should be an iterable of the words in the sequence.
    The arrays X and Y should have the exact same shape.

    X = [("See", "Spot", "run"), ("Run", "Spot", "run", "fast"), ...]
    Y = [(), (), ...]
    """
    correct = total_predictions = 0
    for observations, actual_tags in zip(X, Y):
        # The model.viterbi call in simplify_decoding will return None if the HMM
        # raises an error (for example, if a test sentence contains a word that
        # is out of vocabulary for the training set). Any exception counts the
        # full sentence as an error (which makes this a conservative estimate).
        try:
            most_likely_tags = simplify_decoding(observations, model)
            correct += sum(p == t for p, t in zip(most_likely_tags, actual_tags))
        except:
            pass
        total_predictions += len(observations)
    return correct / total_predictions
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Evaluate the accuracy of the MFC tagger

Run the next cell to evaluate the accuracy of the tagger on the training and test corpus.
mfc_training_acc = accuracy(data.training_set.X, data.training_set.Y, mfc_model)
print("training accuracy mfc_model: {:.2f}%".format(100 * mfc_training_acc))

mfc_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, mfc_model)
print("testing accuracy mfc_model: {:.2f}%".format(100 * mfc_testing_acc))

assert mfc_training_acc >= 0.955, "Uh oh. Your MFC accuracy on the training set doesn't look right."
assert mfc_testing_acc >= 0.925, "Uh oh. Your MFC accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your MFC tagger accuracy looks correct!</div>')
training accuracy mfc_model: 95.72% testing accuracy mfc_model: 93.00%
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Step 3: Build an HMM tagger
---

The HMM tagger has one hidden state for each possible tag, and is parameterized by two distributions: the emission probabilities giving the conditional probability of observing a given **word** from each hidden state, and the transition probabilities giving the conditional probability of moving between **tags** during the sequence.

We will also estimate the starting probability distribution (the probability of each **tag** being the first tag in a sequence), and the terminal probability distribution (the probability of each **tag** being the last tag in a sequence).

The maximum likelihood estimate of these distributions can be calculated from the frequency counts as described in the following sections, where you'll implement functions to count the frequencies and finally build the model. The HMM model will make predictions according to the formula:

$$t_i^n = \underset{t_i^n}{\mathrm{argmax}} \prod_{i=1}^n P(w_i|t_i) P(t_i|t_{i-1})$$

Refer to Speech & Language Processing [Chapter 10](https://web.stanford.edu/~jurafsky/slp3/10.pdf) for more information.

IMPLEMENTATION: Unigram Counts

Complete the function below to estimate the co-occurrence frequency of each symbol over all of the input sequences. The unigram probabilities in our HMM model are estimated from the formula below, where N is the total number of samples in the input. (You only need to compute the counts for now.)

$$P(tag_1) = \frac{C(tag_1)}{N}$$
def unigram_counts(sequences):
    """Return a dictionary keyed to each unique value in the input sequence list that
    counts the number of occurrences of the value in the sequences list. The sequences
    collection should be a 2-dimensional array.

    For example, if the tag NOUN appears 275558 times over all the input sequences,
    then you should return a dictionary such that your_unigram_counts[NOUN] == 275558.
    """
    unigram_counts = {}
    for sequence in sequences:
        for item in sequence:
            if item in unigram_counts:
                unigram_counts[item] = unigram_counts[item] + 1
            else:
                unigram_counts[item] = 1
    return unigram_counts

# call unigram_counts with a list of tag sequences from the training set
tag_unigrams = unigram_counts(data.training_set.Y)
print(tag_unigrams)

assert set(tag_unigrams.keys()) == data.training_set.tagset, \
       "Uh oh. It looks like your tag counts doesn't include all the tags!"
assert min(tag_unigrams, key=tag_unigrams.get) == 'X', \
       "Hmmm...'X' is expected to be the least common class"
assert max(tag_unigrams, key=tag_unigrams.get) == 'NOUN', \
       "Hmmm...'NOUN' is expected to be the most common class"
HTML('<div class="alert alert-block alert-success">Your tag unigrams look good!</div>')
{'ADV': 44877, 'NOUN': 220632, '.': 117757, 'VERB': 146161, 'ADP': 115808, 'ADJ': 66754, 'CONJ': 30537, 'DET': 109671, 'PRT': 23906, 'NUM': 11878, 'PRON': 39383, 'X': 1094}
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
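With `tag_unigrams` in hand, converting the counts to maximum likelihood probabilities is a one-liner; a minimal sketch (the `tag_probs` name is ours, and the division works because the tag counts sum to the total sample count `N`):

```python
# maximum likelihood unigram probabilities: P(tag) = C(tag) / N
tag_probs = {tag: count / data.training_set.N for tag, count in tag_unigrams.items()}
assert abs(sum(tag_probs.values()) - 1.0) < 1e-9  # a proper distribution sums to 1
```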
IMPLEMENTATION: Bigram Counts

Complete the function below to estimate the co-occurrence frequency of each pair of symbols in each of the input sequences. These counts are used in the HMM model to estimate the bigram probability of two tags from the frequency counts according to the formula:

$$P(tag_2|tag_1) = \frac{C(tag_1, tag_2)}{C(tag_1)}$$
def bigram_counts(sequences):
    """Return a dictionary keyed to each unique PAIR of values in the input sequences
    list that counts the number of occurrences of pair in the sequences list. The input
    should be a 2-dimensional array.

    For example, if the pair of tags (NOUN, VERB) appear 61582 times, then you should
    return a dictionary such that your_bigram_counts[(NOUN, VERB)] == 61582
    """
    bigram_counts = {}
    for sequence in sequences:
        for j in range(len(sequence) - 1):  # every adjacent pair of tags
            pair = (sequence[j], sequence[j + 1])
            if pair in bigram_counts:
                bigram_counts[pair] = bigram_counts[pair] + 1
            else:
                bigram_counts[pair] = 1
    return bigram_counts

# call bigram_counts with a list of tag sequences from the training set
tag_bigrams = bigram_counts(data.training_set.Y)

assert len(tag_bigrams) == 144, \
       "Uh oh. There should be 144 pairs of bigrams (12 tags x 12 tags)"
assert min(tag_bigrams, key=tag_bigrams.get) in [('X', 'NUM'), ('PRON', 'X')], \
       "Hmmm...The least common bigram should be one of ('X', 'NUM') or ('PRON', 'X')."
assert max(tag_bigrams, key=tag_bigrams.get) in [('DET', 'NOUN')], \
       "Hmmm...('DET', 'NOUN') is expected to be the most common bigram."
HTML('<div class="alert alert-block alert-success">Your tag bigrams look good!</div>')
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
IMPLEMENTATION: Sequence Starting Counts

Complete the code below to count how often each tag begins a sequence; these counts are used to estimate the probability of a sequence starting with each tag.
def starting_counts(sequences):
    """Return a dictionary keyed to each unique value in the input sequences list
    that counts the number of occurrences where that value is at the beginning of
    a sequence.

    For example, if 8093 sequences start with NOUN, then you should return a
    dictionary such that your_starting_counts[NOUN] == 8093
    """
    starting_counts = {}
    for sequence in sequences:
        first = sequence[0]
        if first in starting_counts:
            starting_counts[first] = starting_counts[first] + 1
        else:
            starting_counts[first] = 1
    return starting_counts

# Calculate the count of each tag starting a sequence
tag_starts = starting_counts(data.training_set.Y)

assert len(tag_starts) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_starts, key=tag_starts.get) == 'X', "Hmmm...'X' is expected to be the least common starting bigram."
assert max(tag_starts, key=tag_starts.get) == 'DET', "Hmmm...'DET' is expected to be the most common starting bigram."
HTML('<div class="alert alert-block alert-success">Your starting tag counts look good!</div>')
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
IMPLEMENTATION: Sequence Ending Counts

Complete the function below to count how often each tag ends a sequence; these counts are used to estimate the probability of a sequence ending with each tag.
def ending_counts(sequences):
    """Return a dictionary keyed to each unique value in the input sequences list
    that counts the number of occurrences where that value is at the end of a sequence.

    For example, if 18 sequences end with DET, then you should return a dictionary
    such that your_starting_counts[DET] == 18
    """
    ending_counts = {}
    for sequence in sequences:
        last = sequence[len(sequence) - 1]
        if last in ending_counts:
            ending_counts[last] = ending_counts[last] + 1
        else:
            ending_counts[last] = 1
    return ending_counts

# Calculate the count of each tag ending a sequence
tag_ends = ending_counts(data.training_set.Y)

assert len(tag_ends) == 12, "Uh oh. There should be 12 tags in your dictionary."
assert min(tag_ends, key=tag_ends.get) in ['X', 'CONJ'], "Hmmm...'X' or 'CONJ' should be the least common ending bigram."
assert max(tag_ends, key=tag_ends.get) == '.', "Hmmm...'.' is expected to be the most common ending bigram."
HTML('<div class="alert alert-block alert-success">Your ending tag counts look good!</div>')
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
IMPLEMENTATION: Basic HMM Tagger

Use the tag unigrams and bigrams calculated above to construct a hidden Markov tagger.

- Add one state per tag
    - The emission distribution at each state should be estimated with the formula: $P(w|t) = \frac{C(t, w)}{C(t)}$
- Add an edge from the starting state `basic_model.start` to each tag
    - The transition probability should be estimated with the formula: $P(t|start) = \frac{C(start, t)}{C(start)}$
- Add an edge from each tag to the end state `basic_model.end`
    - The transition probability should be estimated with the formula: $P(end|t) = \frac{C(t, end)}{C(t)}$
- Add an edge between _every_ pair of tags
    - The transition probability should be estimated with the formula: $P(t_2|t_1) = \frac{C(t_1, t_2)}{C(t_1)}$
basic_model = HiddenMarkovModel(name="base-hmm-tagger")

# add one state per tag, with emission probabilities P(w|t) = C(t, w) / C(t)
states = {}
for tag in emission_counts:
    tag_count = tag_unigrams[tag]
    prob_distribution = {word: word_count / tag_count for word, word_count in emission_counts[tag].items()}
    state = State(DiscreteDistribution(prob_distribution), name=tag)
    states[tag] = state
    basic_model.add_states(state)

# add the start, end, and tag-to-tag transition edges; start/end probabilities
# are estimated from the number of training sentences
training_set_count = len(data.training_set.Y)
for tag_pair in tag_bigrams.keys():
    start_prob = tag_starts[tag_pair[0]] / training_set_count
    basic_model.add_transition(basic_model.start, states[tag_pair[0]], start_prob)

    trans_prob = tag_bigrams[tag_pair] / tag_unigrams[tag_pair[0]]
    basic_model.add_transition(states[tag_pair[0]], states[tag_pair[1]], trans_prob)

    end_prob = tag_ends[tag_pair[0]] / training_set_count
    basic_model.add_transition(states[tag_pair[0]], basic_model.end, end_prob)

basic_model.bake()

assert all(tag in set(s.name for s in basic_model.states) for tag in data.training_set.tagset), \
       "Every state in your network should use the name of the associated tag, which must be one of the training set tags."
assert basic_model.edge_count() == 168, \
       ("Your network should have an edge from the start node to each state, one edge between every " +
        "pair of tags (states), and an edge from each state to the end node.")
HTML('<div class="alert alert-block alert-success">Your HMM network topology looks good!</div>')

hmm_training_acc = accuracy(data.training_set.X, data.training_set.Y, basic_model)
print("training accuracy basic hmm model: {:.2f}%".format(100 * hmm_training_acc))

hmm_testing_acc = accuracy(data.testing_set.X, data.testing_set.Y, basic_model)
print("testing accuracy basic hmm model: {:.2f}%".format(100 * hmm_testing_acc))

assert hmm_training_acc > 0.97, "Uh oh. Your HMM accuracy on the training set doesn't look right."
assert hmm_testing_acc > 0.955, "Uh oh. Your HMM accuracy on the testing set doesn't look right."
HTML('<div class="alert alert-block alert-success">Your HMM tagger accuracy looks correct! Congratulations, you\'ve finished the project.</div>')
training accuracy basic hmm model: 97.54% testing accuracy basic hmm model: 96.18%
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Example Decoding Sequences with the HMM Tagger
for key in data.testing_set.keys[:3]:
    print("Sentence Key: {}\n".format(key))
    print("Predicted labels:\n-----------------")
    print(simplify_decoding(data.sentences[key].words, basic_model))
    print()
    print("Actual labels:\n--------------")
    print(data.sentences[key].tags)
    print("\n")
Sentence Key: b100-28144 Predicted labels: ----------------- ['CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.'] Actual labels: -------------- ('CONJ', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'NOUN', 'NUM', '.', 'CONJ', 'NOUN', 'NUM', '.', '.', 'NOUN', '.', '.') Sentence Key: b100-23146 Predicted labels: ----------------- ['PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.'] Actual labels: -------------- ('PRON', 'VERB', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', 'NOUN', 'VERB', 'VERB', '.', 'ADP', 'VERB', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADP', 'DET', 'NOUN', '.') Sentence Key: b100-35462 Predicted labels: ----------------- ['DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.'] Actual labels: -------------- ('DET', 'ADJ', 'NOUN', 'VERB', 'VERB', 'VERB', 'ADP', 'DET', 'ADJ', 'ADJ', 'NOUN', 'ADP', 'DET', 'ADJ', 'NOUN', '.', 'ADP', 'ADJ', 'NOUN', '.', 'CONJ', 'ADP', 'DET', 'NOUN', 'ADP', 'ADJ', 'ADJ', '.', 'ADJ', '.', 'CONJ', 'ADJ', 'NOUN', 'ADP', 'ADJ', 'NOUN', '.')
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
Step 4: [Optional] Improving model performance
---

There are additional enhancements that can be incorporated into your tagger that improve performance on larger tagsets where the data sparsity problem is more significant. The data sparsity problem arises because the same amount of data split over more tags means there will be fewer samples for each tag, and more tags will have zero occurrences in the data. The techniques in this section are optional.

- [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing) (pseudocounts)
  Laplace smoothing is a technique where you add a small, non-zero value to all observed counts to offset for unobserved values. (A small sketch follows the next code cell.)
- Backoff Smoothing
  Another smoothing technique is to interpolate between n-grams for missing data. This method is more effective than Laplace smoothing at combatting the data sparsity problem. Refer to chapters 4, 9, and 10 of the [Speech & Language Processing](https://web.stanford.edu/~jurafsky/slp3/) book for more information.
- Extending to Trigrams
  HMM taggers have achieved better than 96% accuracy on this dataset with the full Penn treebank tagset using an architecture described in [this](http://www.coli.uni-saarland.de/~thorsten/publications/Brants-ANLP00.pdf) paper. Altering your HMM to achieve the same performance would require implementing deleted interpolation (described in the paper), incorporating trigram probabilities in your frequency tables, and re-implementing the Viterbi algorithm to consider three consecutive states instead of two.

Obtain the Brown Corpus with a Larger Tagset

Run the code below to download a copy of the brown corpus with the full NLTK tagset. You will need to research the available tagset information in the NLTK docs and determine the best way to extract the subset of NLTK tags you want to explore. If you write the output in the format specified in Step 1, then you can reload the data using all of the code above for comparison.

Refer to [Chapter 5](http://www.nltk.org/book/ch05.html) of the NLTK book for more information on the available tagsets.
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import brown

nltk.download('brown')
training_corpus = nltk.corpus.brown
training_corpus.tagged_sents()[0]
_____no_output_____
MIT
HMM TaggerPart of Speech Tagging - HMM.ipynb
Akshat2127/Part-Of-Speech-Tagging
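Following up on the smoothing discussion above, here is a minimal sketch of Laplace (additive) smoothing applied to the emission probabilities, assuming the `emission_counts` and `tag_unigrams` tables from earlier steps are available; `alpha` is a pseudocount hyperparameter we introduce for illustration, not a value from the project:

```python
alpha = 0.01  # pseudocount added to every (tag, word) pair; illustrative, not tuned
vocab = data.training_set.vocab
V = len(vocab)

def smoothed_emissions(tag):
    """P(w|t) with additive smoothing: (C(t, w) + alpha) / (C(t) + alpha * V)."""
    denom = tag_unigrams[tag] + alpha * V
    return {word: (emission_counts[tag].get(word, 0) + alpha) / denom for word in vocab}
```

And a hedged sketch of writing the NLTK-tagged Brown sentences back out in the Step 1 plaintext format (sentence id, then tab-separated word/tag pairs, blank-line separated), so the `Dataset` reader above can load them; the output filename and id scheme are placeholders:

```python
with open("brown-full-tagset.txt", "w") as f:  # hypothetical output file
    for i, sent in enumerate(training_corpus.tagged_sents()):
        f.write("sent-{:06d}\n".format(i))         # unique sentence identifier
        for word, tag in sent:
            f.write("{}\t{}\n".format(word, tag))  # tab-separated word/tag pair
        f.write("\n")                              # blank line between sentences
```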
[`for` loops](https://docs.python.org/3/tutorial/controlflow.html#for-statements)

Iterating lists
mi_lista = [1, 2, 3, 4, 'Python', 'es', 'piola']
for item in mi_lista:
    print(item)
_____no_output_____
MIT
notebooks/beginner/notebooks/for_loops.ipynb
mateodif/learn-python3
`break`

Stops execution of the loop.
for item in mi_lista:
    if item == 'Python':
        break
    print(item)
_____no_output_____
MIT
notebooks/beginner/notebooks/for_loops.ipynb
mateodif/learn-python3
`continue`

Continues to the next item without executing the lines after `continue` inside the loop.
for item in mi_lista:
    if item == 1:
        continue
    print(item)
_____no_output_____
MIT
notebooks/beginner/notebooks/for_loops.ipynb
mateodif/learn-python3