markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
Data Loading | # Load the json files for processing
portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True)
profile = pd.read_json('data/profile.json', orient='records', lines=True)
transcript = pd.read_json('data/transcript.json', orient='records', lines=True) | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
Data Exploration Portfolio | portfolio.head()
items, attributes = portfolio.shape
print("Portfolio dataset has {} records and {} attributes".format(items, attributes))
portfolio.info()
portfolio.describe(include='all')
fig, ax = plt.subplots(figsize=(5, 5))
category_count = portfolio.offer_type.value_counts()
category_count.plot(kind='barh')
for i, count in enumerate(category_count):
ax.text(count, i, str(count))
plt.title("Offer distribution per offer Type")
#Get all possible channels
import itertools
set(itertools.chain.from_iterable(portfolio.channels)) | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
Profile | profile.head(5)
items, attributes = profile.shape
print("Profile dataset has {} records and {} attributes".format(items, attributes))
profile.info()
profile.describe(include="all")
#check for null values
profile.isnull().sum()
profile.duplicated().sum()
# age distribution
profile.age.hist();
sns.boxplot(profile['age'], width=0.5); | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
Age 118 seems to be an outlier. Let's explore it further. | profile[profile['age']== 118].age.count()
profile[profile.age == 118][['gender','income']] | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
The analysis above shows that wherever age is 118, both gender and income are null, and there are 2,175 such rows. Since these are erroneous records, we will drop all instances where age equals 118. | ## Gender-wise age distribution
sns.distplot(profile[profile.gender=='M'].age,label='Male')
sns.distplot(profile[profile.gender=='F'].age,label='Female')
sns.distplot(profile[profile.gender=='O'].age,label='Other')
plt.legend()
plt.show()
# distribution of income
profile.income.hist();
profile['income'].mean()
# Gender wise data distribution
profile.gender.value_counts()
## Gender-wise Income Distribution
sns.distplot(profile[profile.gender=='M'].income,label='Male')
sns.distplot(profile[profile.gender=='F'].income,label='Female')
sns.distplot(profile[profile.gender=='O'].income,label='Other')
plt.legend()
plt.show() | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
Transcript | transcript.head()
items, attributes = transcript.shape
print("Transcript dataset has {} records and {} attributes".format(items, attributes))
transcript.info()
#check for null values
transcript.isnull().sum()
transcript['event'].value_counts()
keys = transcript['value'].apply(lambda x: list(x.keys()))
possible_keys = set()
for key in keys:
for item in key:
possible_keys.add(item)
print(possible_keys) | {'offer id', 'amount', 'offer_id', 'reward'}
| CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
The **value** attribute has 3 possible keys: (1) offer id / offer_id, (2) amount, (3) reward. Data Cleaning & Transformation. Portfolio: renaming columns for better understanding and meaningfulness. | #Rename columns
new_cols_name = {'difficulty':'offer_difficulty' , 'id':'offer_id', 'duration':'offer_duration', 'reward': 'offer_reward'}
portfolio = portfolio.rename(columns=new_cols_name ) | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
Exploding the channels attribute into four separate attributes - (email, mobile, social, web) | dummy = pd.get_dummies(portfolio.channels.apply(pd.Series).stack()).sum(level=0)
portfolio = pd.concat([portfolio, dummy], axis=1)
portfolio.drop(columns='channels', inplace=True)
portfolio.head() | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
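Note that `Series.sum(level=0)` used above has since been deprecated and removed in newer pandas versions. The cell below is a hedged, roughly equivalent sketch added for illustration; it re-reads the raw portfolio file (same path as above) so it can run on its own:

```python
# explode + get_dummies version of the channels one-hot encoding, on a fresh copy of the raw data
portfolio_raw = pd.read_json('data/portfolio.json', orient='records', lines=True)
channel_dummies = pd.get_dummies(portfolio_raw['channels'].explode()).groupby(level=0).sum()
```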
Profile: Renaming columns for better understanding & meaningfulness | #Rename columns
cols_profile = {'id':'customer_id' , 'income':'customer_income'}
profile = profile.rename(columns=cols_profile) | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
Removing rows with missing values. We saw above that all nulls belong to the age-118 records, which are outliers. | #drop all rows which have null values
profile = profile.loc[profile['gender'].isnull() == False] | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
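Because the null genders and incomes coincide exactly with the age-118 records, the same cleanup can equivalently be expressed on age. This is a hedged sketch added for illustration, re-reading the raw profile file so it is self-contained; the 2,175 count comes from the earlier exploration:

```python
# Equivalent filter: dropping age == 118 removes exactly the rows with missing gender/income
profile_raw = pd.read_json('data/profile.json', orient='records', lines=True)
profile_by_age = profile_raw[profile_raw['age'] != 118].copy()
assert len(profile_raw) - len(profile_by_age) == 2175  # 2,175 erroneous records, per the earlier check
```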
Classifying ages into age groups for better understanding in the Exploratory Data Analysis later: Under 20, 21-35, 36-50, 51-65, Above 65. | #Convert ages into age group
profile.loc[(profile.age <= 20) , 'Age_group'] = 'Under 20'
profile.loc[(profile.age >= 21) & (profile.age <= 35) , 'Age_group'] = '21-35'
profile.loc[(profile.age >= 36) & (profile.age <= 50) , 'Age_group'] = '36-50'
profile.loc[(profile.age >= 51) & (profile.age <= 65) , 'Age_group'] = '51-65'
profile.loc[(profile.age >= 66) , 'Age_group'] = 'Above 65'
profile.drop('age',axis=1,inplace=True) | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
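The chained `.loc` assignments above work, but the same binning can be written more compactly with `pd.cut`. The cell below is a hedged, equivalent sketch added for illustration; it runs on a fresh copy of the raw profile data since the age column has already been dropped at this point:

```python
# pd.cut version of the same age binning; right-closed edges mirror the .loc rules above
profile_cut = pd.read_json('data/profile.json', orient='records', lines=True)
age_edges = [0, 20, 35, 50, 65, float('inf')]
age_labels = ['Under 20', '21-35', '36-50', '51-65', 'Above 65']
profile_cut['Age_group'] = pd.cut(profile_cut['age'], bins=age_edges, labels=age_labels)
```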
Classifying income into income groups for better understanding in the Exploratory Data Analysis later: 30-50K, 50-80K, 80-110K, Above 110K. | #Convert income into income group
profile.loc[(profile.customer_income >= 30000) & (profile.customer_income <= 50000) , 'Income_group'] = '30-50K'
profile.loc[(profile.customer_income >= 50001) & (profile.customer_income <= 80000) , 'Income_group'] = '50-80K'
profile.loc[(profile.customer_income >= 80001) & (profile.customer_income <= 110000) , 'Income_group'] = '80-110K'
profile.loc[(profile.customer_income >= 110001) , 'Income_group'] = 'Above 110K'
profile.drop('customer_income',axis=1,inplace=True) | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
Converting became_member_on to a more quantitative feature, member_since_days, which captures how long the customer has been a member of the program. | #Convert joining date to the number of days the customer has been a member
profile['became_member_on'] = pd.to_datetime(profile['became_member_on'], format='%Y%m%d')
baseline_date = max(profile['became_member_on'])
profile['member_since_days'] = profile['became_member_on'].apply(lambda x: (baseline_date - x).days)
profile.drop('became_member_on',axis=1,inplace=True)
profile.head() | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
Transcript: Renaming columns for better understanding & meaningfulness | #Rename columns
transcript_cols = {'person':'customer_id'}
transcript = transcript.rename(columns=transcript_cols) | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
Replacing spaces in the event values, since column names without spaces are easier to work with when we encode the events later. | transcript['event'] = transcript['event'].str.replace(' ', '-')
Split the value column into three columns based on its dictionary keys (offer_id, reward, amount). We will also merge 'offer_id' and 'offer id' into a single attribute, offer_id. | transcript['offer_id'] = transcript['value'].apply(lambda x: x.get('offer_id'))
transcript['offer id'] = transcript['value'].apply(lambda x: x.get('offer id'))
transcript['reward'] = transcript['value'].apply(lambda x: x.get('reward'))
transcript['amount'] = transcript['value'].apply(lambda x: x.get('amount'))
transcript['offer_id'] = transcript.apply(lambda x: x['offer id'] if x['offer_id'] is None else x['offer_id'], axis=1)
transcript.drop(['offer id' , 'value'] , axis=1, inplace=True)
transcript.fillna(0 , inplace=True)
transcript.head() | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
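An alternative way to flatten the dictionary column is `pd.json_normalize`. The cell below is a hedged sketch added for illustration, not the approach used above; it re-reads the raw transcript file so it can run independently, and relies only on the keys listed by the earlier key inspection (offer id, offer_id, amount, reward):

```python
# Expand the dict column in one call, then coalesce the two offer-id spellings
transcript_raw = pd.read_json('data/transcript.json', orient='records', lines=True)
value_df = pd.json_normalize(transcript_raw['value'])
value_df['offer_id'] = value_df['offer_id'].fillna(value_df['offer id'])
transcript_alt = pd.concat(
    [transcript_raw.drop(columns='value'), value_df.drop(columns='offer id')],
    axis=1
).fillna(0)
```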
Preparing data for Analysis Merging the three tables | merged_df = pd.merge(portfolio, transcript, on='offer_id')
merged_df = pd.merge(merged_df, profile, on='customer_id')
merged_df.head()
merged_df.groupby(['event','offer_type'])['offer_type'].count()
merged_df['event'] = merged_df['event'].map({'offer-received':1, 'offer-viewed':2, 'offer-completed':3}) | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
Generating the target variable. When a customer completes an offer against an offer_id, we label that as a success. If the event is not offer-completed, then that (customer_id, offer_id) record is considered unsuccessful offer targeting. | #Create a target variable from event
# Offer_Encashed = 1 where the event is offer-completed (mapped to 3 above), else 0
merged_df['Offer_Encashed'] = (merged_df['event'] == 3).astype(int)
merged_df.Offer_Encashed.value_counts()
merged_df['offer_type'].value_counts().plot.barh(title='Offer Type distribution') | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
The Buy One Get One (BOGO) and discount offer types have similar distributions. | merged_df['Age_group'].value_counts().plot.barh(title=' Distribution of age groups')
It is quite surprising to see that the oldest customers use the Starbucks app the most, with middle-aged customers in second place. One would usually expect customers between roughly 20 and 45 to use the app the most, but that is not the case here. | merged_df['event'].value_counts().plot.barh(title=' Event distribution')
The distribution follows the sales funnel: offer received > offer viewed > offer completed. | plt.figure(figsize=(15, 5))
sns.countplot(x="Age_group", hue="gender", data=merged_df)
sns.set(style="whitegrid")
plt.title('Gender distribution in different age groups')
plt.ylabel('No of instances')
plt.xlabel('Age Group')
plt.legend(title='Gender') | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
There are more male customers than female customers in each age group, but in the oldest age group the distribution is almost 50-50. | plt.figure(figsize=(15, 5))
sns.countplot(x="event", hue="gender", data=merged_df)
plt.title('Distribution of Event Type by Gender ')
plt.ylabel('No of instances')
plt.xlabel('Event Type')
plt.legend(title='Gender')
plt.figure(figsize=(15, 5))
sns.countplot(x="event", hue="offer_type", data=merged_df)
plt.title('Distribution of offer types in events')
plt.ylabel('No of instances')
plt.xlabel('Event Type')
plt.legend(title='Offer Type') | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
From the graph we can infer that discount offers, once viewed, are very likely to be completed. | plt.figure(figsize=(15, 5))
sns.countplot(x="Age_group", hue="event", data=merged_df)
plt.title('Event type distribution by age group')
plt.ylabel('No of instances')
plt.xlabel('Age Group')
plt.legend(title='Event Type') | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
iv) Build a Machine Learning model to predict the response of a customer to an offer. 1. Data Preparation and Cleaning II. Tasks: (1) encode categorical data such as gender, offer type and age groups; (2) encode the 'event' data to numerical values: offer received -> 1, offer viewed -> 2, offer completed -> 3; (3) encode offer id; (4) scale and normalize numerical data. | dummy = pd.get_dummies(merged_df.offer_type.apply(pd.Series).stack()).sum(level=0)
merged_df = pd.concat([merged_df, dummy], axis=1)
merged_df.drop(columns='offer_type', inplace=True)
dummy = pd.get_dummies(merged_df.gender.apply(pd.Series).stack()).sum(level=0)
merged_df = pd.concat([merged_df, dummy], axis=1)
merged_df.drop(columns='gender', inplace=True)
dummy = pd.get_dummies(merged_df.Age_group.apply(pd.Series).stack()).sum(level=0)
merged_df = pd.concat([merged_df, dummy], axis=1)
merged_df.drop(columns='Age_group', inplace=True)
dummy = pd.get_dummies(merged_df.Income_group.apply(pd.Series).stack()).sum(level=0)
merged_df = pd.concat([merged_df, dummy], axis=1)
merged_df.drop(columns='Income_group', inplace=True)
offerids = merged_df['offer_id'].unique().tolist()
o_mapping = dict( zip(offerids,range(len(offerids))) )
merged_df.replace({'offer_id': o_mapping},inplace=True) | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
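The four near-identical dummy-encoding blocks above can also be collapsed into a single `pd.get_dummies` call. This is a hedged, equivalent sketch added for illustration; `merged_df_raw` is a hypothetical copy of the merged frame taken before the per-column encoding above, not a variable from the notebook:

```python
# One-hot encode all remaining categorical columns in one pass;
# empty prefixes keep the new column names equal to the category values, as above
categorical_cols = ['offer_type', 'gender', 'Age_group', 'Income_group']
merged_df_alt = pd.get_dummies(merged_df_raw, columns=categorical_cols, prefix='', prefix_sep='')
```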
Distribution of offer encashment by gender, age group, and income group. | sns.set_style('whitegrid')
bar_color= ['r', 'g', 'y', 'c', 'm']
fig,ax= plt.subplots(1,3,figsize=(15,5))
fig.tight_layout()
merged_df[merged_df['Offer_Encashed']==1][['F','M','O']].sum().plot.bar(ax=ax[0], fontsize=10,color=bar_color)
ax[0].set_title(" Offer Encashed - Gender Wise")
ax[0].set_xlabel("Gender")
ax[0].set_ylabel("No of Encashment")
age_cols=['Under 20','21-35', '36-50', '51-65', 'Above 65']
merged_df[merged_df['Offer_Encashed']==1][age_cols].sum().plot.bar(ax=ax[1], fontsize=10,color=bar_color)
ax[1].set_title("Offer Encashed - Age Wise")
ax[1].set_xlabel("Age Group")
ax[1].set_ylabel("No of Encashment")
income_cols=['30-50K', '50-80K', '80-110K', 'Above 110K']
merged_df[merged_df['Offer_Encashed']==1][income_cols].sum().plot.bar(ax=ax[2], fontsize=10, color=bar_color)
ax[2].set_title("Offer Encashed - Income Wise")
ax[2].set_xlabel("Income")
ax[2].set_ylabel("No of Encashment")
plt.show()
#drop customer_id, time, amount, event
merged_df.drop(['customer_id', 'time', 'amount', 'event', 'reward'], axis=1, inplace=True)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
numerical = ['offer_difficulty', 'offer_duration', 'offer_reward', 'member_since_days']
merged_df[numerical] = scaler.fit_transform(merged_df[numerical])
merged_df.drop_duplicates(inplace=True)
merged_df.head() | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
2. Split train and test data. The final data is ready after the tasks above. We will now split the data (both features and their labels) into training and test sets, taking 70% of the data for training and 30% for testing. | data = merged_df.drop('Offer_Encashed', axis=1)
label = merged_df['Offer_Encashed']
X_train, X_test, y_train, y_test = train_test_split(data, label, test_size = 0.3, random_state = 4756)
print("Train: {} Test {}".format(X_train.shape[0], X_test.shape[0])) | Train: 52300 Test 22415
| CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
Model training and testing. Metrics: we will use the F-score as the model metric to assess the quality of the approach and determine which model gives the best results. It can be interpreted as a weighted average of precision and recall; the traditional, balanced F1 score is the harmonic mean of precision and recall, and the score reaches its best value at 1 (100 when expressed as a percentage) and its worst at 0. | from sklearn.metrics import fbeta_score

def get_model_scores(classifier):
    # Fit once, then score both the training and test sets with the same F-beta metric
    classifier.fit(X_train, y_train)
    train_predictions = classifier.predict(X_train)
    test_predictions = classifier.predict(X_test)
    f1_train = fbeta_score(y_train, train_predictions, beta=0.5, average='micro') * 100
    f1_test = fbeta_score(y_test, test_predictions, beta=0.5, average='micro') * 100
    clf_name = classifier.__class__.__name__
    return f1_train, f1_test, clf_name | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
LogisticRegression (Benchmark). I am using the LogisticRegression classifier as the benchmark and evaluating the model results with the F-score metric. | lr_clf = LogisticRegression(random_state = 10)
lr_f1_train, lr_f1_test, lr_model = get_model_scores(lr_clf)
linear = {'Benchmark Model': [ lr_model], 'F1-Score(Training)':[lr_f1_train], 'F1-Score(Test)': [lr_f1_test]}
benchmark = pd.DataFrame(linear)
benchmark | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
RandomForestClassifier | rf_clf = RandomForestClassifier(random_state = 10, criterion='gini', min_samples_leaf=10, min_samples_split=2, n_estimators=100)
rf_f1_train, rf_f1_test, rf_model = get_model_scores(rf_clf) | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
DecisionTreeClassifier | dt_clf = DecisionTreeClassifier(random_state = 10)
dt_f1_train, dt_f1_test, dt_model = get_model_scores(dt_clf) | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
K Nearest Neighbors | knn_clf = KNeighborsClassifier(n_neighbors = 5)
knn_f1_train, knn_f1_test, knn_model = get_model_scores(knn_clf) | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
Classifier Evaluation Summary | performance_summary = {'Classifier': [lr_model, rf_model, dt_model, knn_model],
                       'F1-Score(Training)': [lr_f1_train, rf_f1_train, dt_f1_train, knn_f1_train],
                       'F1-Score(Test)': [lr_f1_test, rf_f1_test, dt_f1_test, knn_f1_test]}
performance_summary = pd.DataFrame(performance_summary)
performance_summary | _____no_output_____ | CNRI-Python | Starbucks_Capstone_notebook.ipynb | amit-singh-rathore/Starbucks-Capstone |
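Beyond a single summary score, it can help to look at per-class behaviour on the held-out test set. The cell below is a hedged follow-up added for illustration (it uses the random forest as an example, not because the notebook selects it as the winner):

```python
from sklearn.metrics import classification_report, confusion_matrix

# Per-class precision/recall/F1 and the confusion matrix on the test set
rf_fitted = rf_clf.fit(X_train, y_train)
rf_test_pred = rf_fitted.predict(X_test)
print(confusion_matrix(y_test, rf_test_pred))
print(classification_report(y_test, rf_test_pred, target_names=['Not encashed', 'Encashed']))
```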
About the Dataset |
#nextcell
ratings = pd.read_csv('/Users/ankitkothari/Documents/gdrivre/UMD/MSML-602-DS/final_project/ratings_small.csv')
movies = pd.read_csv('/Users/ankitkothari/Documents/gdrivre/UMD/MSML-602-DS/final_project/movies_metadata_features.csv')
| _____no_output_____ | MIT | Recommendations/recommendation_kmeans/recommendation_project_part2.ipynb | ankit-kothari/data_science_journey |
Data Cleaning Dropping Columns | movies.drop(columns=['Unnamed: 0'],inplace=True)
ratings = pd.merge(movies,ratings).drop(['genres','timestamp','imdb_id','overview','popularity','production_companies','production_countries','release_date','revenue','runtime','vote_average','year','vote_count','original_language'],axis=1)
usri = int(input()) #587 #15 #468
select_user = ratings.loc[ratings['userId'] == usri]
| 15
| MIT | Recommendations/recommendation_kmeans/recommendation_project_part2.ipynb | ankit-kothari/data_science_journey |
Finding Similarity Matrix Creating a Pivot Table of Title against userId for ratings | userRatings = ratings.pivot_table(index=['title'],columns=['userId'],values='rating')
userRatings = userRatings.dropna(thresh=10, axis=1).fillna(0,axis=1)
corrMatrix = userRatings.corr(method='pearson')
#corrMatrix = userRatings.corr(method='spearman')
#corrMatrix = userRatings.corr(method='kendall')
| _____no_output_____ | MIT | Recommendations/recommendation_kmeans/recommendation_project_part2.ipynb | ankit-kothari/data_science_journey |
Creating Similarity Matrix using Pearson Correlation method | def get_similar(usrid):
similar_ratings = corrMatrix[usrid]
similar_ratings = similar_ratings.sort_values(ascending=False)
return similar_ratings
| _____no_output_____ | MIT | Recommendations/recommendation_kmeans/recommendation_project_part2.ipynb | ankit-kothari/data_science_journey |
Recommendation | # Collect the ids of the users most similar to the selected user
moidofotus = [0, 0, 0, 0]
s_m = pd.DataFrame()
s_m = s_m.append(get_similar(usri), ignore_index=True)  # similarity scores, sorted in descending order
for c in range(0, 4):
    moidofotus[c] = s_m.columns[c]   # userIds of the four most similar users
if moidofotus[0] == usri:
    moidofotus.pop(0)                # drop the selected user if they appear as their own top match
print(moidofotus)
movie_match=[]
for i in moidofotus:
select_user = ratings.loc[ratings['userId'] == i]
#print(select_user)
print("For user", i)
final_use = select_user.loc[select_user['rating'] >= 4.0].sort_values(by=['rating'],ascending=False).iloc[0:10,:]
print(final_use['title'])
movie_match.append(final_use['title'].to_list())
select_user['title'] | _____no_output_____ | MIT | Recommendations/recommendation_kmeans/recommendation_project_part2.ipynb | ankit-kothari/data_science_journey |
Performance Evaluation | movies_suggested_and_he_watched=0
total_suggest_movies = 0
for movies in movie_match:
total_suggest_movies=total_suggest_movies+len(movies)
for movie in movies:
if movie in select_user['title'].to_list():
movies_suggested_and_he_watched=movies_suggested_and_he_watched+1
print(movies_suggested_and_he_watched)
print(total_suggest_movies) | 27
30
| MIT | Recommendations/recommendation_kmeans/recommendation_project_part2.ipynb | ankit-kothari/data_science_journey |
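As a simple summary metric, the two counts printed above can be turned into a hit rate. This one-liner is added for illustration and is not part of the original notebook:

```python
# Share of suggested movies that also appear in the reference user's titles (27/30 = 90% above)
hit_rate = movies_suggested_and_he_watched / total_suggest_movies
print(f"Hit rate: {hit_rate:.0%}")
```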
Uninove. Date: 17/02/2022. Professor: Leandro Romualdo da Silva. Course: Artificial Intelligence. Subject: Search Algorithms. Summary: the code below builds the maze environment using the turtle library; the agent must find the way out of the maze. The search for the exit uses a few functions that make the agent try paths and, once the exit is found, trace the successful path back to the starting position. References: https://panda.ime.usp.br/panda/static/pythonds_pt/04-Recursao/10-labirinto.html https://docs.python.org/3/library/turtle.html Another very interesting reference is this assignment, which applies search algorithms to Pac-Man: http://www.ic.uff.br/~bianca/ia-pos/t1.html | import turtle
'''
Parameters that delimit the maze: they mark the obstacles, the free paths to follow, the maze exit, and the correct path once identified.
PART_OF_PATH - The correct path, marked while the agent returns to the starting point.
TRIED - A path already visited by the agent while it searches for the exit.
OBSTACLE - Obstacles that delimit the maze, represented by the + symbol.
DEAD_END - Marks paths the agent has already explored that turned out to be wrong.
'''
PART_OF_PATH = 'O'
TRIED = '.'
OBSTACLE = '+'
DEAD_END = '-'
class Maze:
'''
        The __init__ function reads the file containing the matrix that represents the maze, records the number of rows and columns, and locates the starting row and column.
        It instantiates Turtle to create the graphical interface, using the rows and columns of our matrix as coordinates.
        The agent's starting position is found by the loop in this function.
        We instantiate the turtle and set the agent's shape, which can be turtle, arrow, circle, square, triangle or classic.
'''
def __init__(self,mazeFileName):
rowsInMaze = 0
columnsInMaze = 0
self.mazelist = []
mazeFile = open(mazeFileName,'r')
rowsInMaze = 0
for line in mazeFile:
rowList = []
col = 0
for ch in line[:-1]:
rowList.append(ch)
if ch == 'S':
self.startRow = rowsInMaze
self.startCol = col
col = col + 1
rowsInMaze = rowsInMaze + 1
self.mazelist.append(rowList)
columnsInMaze = len(rowList)
self.rowsInMaze = rowsInMaze
self.columnsInMaze = columnsInMaze
self.xTranslate = -columnsInMaze/2
self.yTranslate = rowsInMaze/2
self.t = turtle.Turtle()
self.t.shape('turtle')
turtle.title('Desafio saida de labirinto')
self.wn = turtle.Screen()
self.wn.setworldcoordinates(-(columnsInMaze-1)/2-.5,-(rowsInMaze-1)/2-.5,(columnsInMaze-1)/2+.5,(rowsInMaze-1)/2+.5)
def drawMaze(self):
'''
        Function that draws the maze graphics: it sets the speed and the tracer, iterates over every row and column,
        checks whether each cell is an obstacle and paints it orange to build the maze map.
        The agent's trail is gray and the agent itself is red; both can be changed in the settings below.
'''
self.t.speed(10)
self.wn.tracer(0)
for y in range(self.rowsInMaze):
for x in range(self.columnsInMaze):
if self.mazelist[y][x] == OBSTACLE:
self.drawCenteredBox(x+self.xTranslate,-y+self.yTranslate,'orange')
self.t.color('gray')
self.t.fillcolor('red')
self.wn.update()
self.wn.tracer(1)
def drawCenteredBox(self,x,y,color):
'''
        This function receives a column, a row and the color to apply to the centered box.
'''
self.t.up()
self.t.goto(x-.5,y-.5)
self.t.color(color)
self.t.fillcolor(color)
self.t.setheading(90)
self.t.down()
self.t.begin_fill()
for i in range(4):
self.t.forward(1)
self.t.right(90)
self.t.end_fill()
def moveAgent(self,x,y):
'''
        Function that moves the agent; the "goto" call performs the movement.
'''
self.t.up()
self.t.setheading(self.t.towards(x+self.xTranslate,-y+self.yTranslate))
self.t.goto(x+self.xTranslate,-y+self.yTranslate)
def dropBreadcrumb(self,color):
self.t.dot(10,color)
def updatePosition(self,row,col,val=None):
'''
        Checks whether the indicated position is valid and moves the agent to the new position. While a position is being tried, the trail is marked in blue;
        dead ends are marked in red, and once the exit has been found the path back is marked in green.
'''
if val:
self.mazelist[row][col] = val
self.moveAgent(col,row)
if val == PART_OF_PATH:
color = 'green'
elif val == OBSTACLE:
color = 'red'
elif val == TRIED:
color = 'blue'
elif val == DEAD_END:
color = 'red'
else:
color = None
if color:
self.dropBreadcrumb(color)
def isExit(self,row,col):
'''
        Exit test: following the matrix rules, row or column 0, rowsInMaze-1 or columnsInMaze-1 marks the exit.
'''
return (row == 0 or
row == self.rowsInMaze-1 or
col == 0 or
col == self.columnsInMaze-1 )
def __getitem__(self,idx):
return self.mazelist[idx]
def searchFrom(maze, startRow, startColumn):
'''
    The search function itself: it receives the maze plus the starting row and column. Here we test each direction and explore the path
    using the other helper functions.
'''
    # Try each position until the exit is found
    # Base-case return values:
    # 1. If an obstacle is found, return False
maze.updatePosition(startRow, startColumn)
if maze[startRow][startColumn] == OBSTACLE :
return False
    # 2. Found a cell that has already been explored
if maze[startRow][startColumn] == TRIED or maze[startRow][startColumn] == DEAD_END:
return False
    # 3. Found an outer edge not occupied by an obstacle (the exit)
if maze.isExit(startRow,startColumn):
maze.updatePosition(startRow, startColumn, PART_OF_PATH)
return True
maze.updatePosition(startRow, startColumn, TRIED)
print(startColumn, startRow)
    # Otherwise, try each direction in turn
found = searchFrom(maze, startRow-1, startColumn) or \
searchFrom(maze, startRow+1, startColumn) or \
searchFrom(maze, startRow, startColumn-1) or \
searchFrom(maze, startRow, startColumn+1)
if found:
maze.updatePosition(startRow, startColumn, PART_OF_PATH)
else:
maze.updatePosition(startRow, startColumn, DEAD_END)
return found
myMaze = Maze(r'D:\Users\andre\Documents\Faculdade\inteligencia artificial\maze2.txt')  # raw string avoids the invalid \U escape
myMaze.drawMaze()
myMaze.updatePosition(myMaze.startRow,myMaze.startCol)
searchFrom(myMaze, myMaze.startRow, myMaze.startCol) | 15 8
15 7
14 7
14 6
14 5
14 4
13 4
13 5
13 6
12 6
12 7
12 8
12 9
| MIT | busca v0.5.ipynb | carvalhoandre/interpretacao_dados |
Exercise 6: Collect data using APIs. Use the Exchange Rates API to get today's USD rates against other currencies: https://www.exchangerate-api.com/ | import json
import pprint
import requests
import pandas as pd
r = requests.get("https://api.exchangerate-api.com/v4/latest/USD")
data = r.json()
pprint.pprint(data)
df = pd.DataFrame(data)
df.head() | _____no_output_____ | MIT | Chapter04/Exercise 4.06/Exercise 4.06.ipynb | abhishekr128/The-Natural-Language-Processing-Workshop |
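To pull a single rate out of the parsed response rather than the whole table, a hedged follow-up could look like the cell below; it assumes the JSON keeps a 'rates' mapping, which is what this endpoint typically returns:

```python
# Look up one currency from the parsed JSON response
eur_rate = data.get('rates', {}).get('EUR')
print(f"1 USD = {eur_rate} EUR")
```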
Dataset used: Titanic ( https://www.kaggle.com/c/titanic ). This dataset includes information about all the passengers on the Titanic. Various passenger attributes such as age, sex, and class are recorded, and the final label 'Survived' indicates whether or not the passenger survived. | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
titanic_data_df = pd.read_csv('titanic-data.csv') | _____no_output_____ | MIT | Section 5/Bivariate Analysis - Titanic.ipynb | kamaleshreddy/Exploratory-Data-Analysis-with-Pandas-and-Python-3.x |
1. **Survived:** Outcome of survival (0 = No; 1 = Yes)
2. **Pclass:** Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
3. **Name:** Name of passenger
4. **Sex:** Sex of the passenger
5. **Age:** Age of the passenger (Some entries contain NaN)
6. **SibSp:** Number of siblings and spouses of the passenger aboard
7. **Parch:** Number of parents and children of the passenger aboard
8. **Ticket:** Ticket number of the passenger
9. **Fare:** Fare paid by the passenger
10. **Cabin:** Cabin number of the passenger (Some entries contain NaN)
11. **Embarked:** Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton) | g = sns.countplot(x='Sex', hue='Survived', data=titanic_data_df)
g = sns.catplot(x="Embarked", col="Survived",
data=titanic_data_df, kind="count",
height=4, aspect=.7);
g = sns.countplot(x='Embarked', hue='Survived', data=titanic_data_df)
g = sns.countplot(x='Embarked', hue='Pclass', data=titanic_data_df)
g = sns.countplot(x='Pclass', hue='Survived', data=titanic_data_df) | _____no_output_____ | MIT | Section 5/Bivariate Analysis - Titanic.ipynb | kamaleshreddy/Exploratory-Data-Analysis-with-Pandas-and-Python-3.x |
Add a new column - Family size. I will add a new column, 'FamilySize', computed as SibSp + Parch + 1. | #Function to add new column 'FamilySize'
def add_family(df):
df['FamilySize'] = df['SibSp'] + df['Parch'] + 1
return df
titanic_data_df = add_family(titanic_data_df)
titanic_data_df.head(10)
g = sns.countplot(x="FamilySize", hue="Survived",
data=titanic_data_df);
g = sns.countplot(x="FamilySize", hue="Sex",
data=titanic_data_df); | _____no_output_____ | MIT | Section 5/Bivariate Analysis - Titanic.ipynb | kamaleshreddy/Exploratory-Data-Analysis-with-Pandas-and-Python-3.x |
Add a new column - Age Group | age_df = titanic_data_df[~titanic_data_df['Age'].isnull()]
#Make bins and group all passengers into these bins and store those values in a new column 'ageGroup'
age_bins = ['0-9', '10-19', '20-29', '30-39', '40-49', '50-59', '60-69', '70-79']
age_df['ageGroup'] = pd.cut(titanic_data_df.Age, range(0, 81, 10), right=False, labels=age_bins)
age_df[['Age', 'ageGroup']]
sns.countplot(x='ageGroup', hue='Survived', data=age_df) | _____no_output_____ | MIT | Section 5/Bivariate Analysis - Titanic.ipynb | kamaleshreddy/Exploratory-Data-Analysis-with-Pandas-and-Python-3.x |
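To quantify what the count plot shows, a hedged follow-up (not in the original notebook) can compute the survival rate per age group directly:

```python
# Mean of the 0/1 Survived label per age group = survival rate
survival_by_age = age_df.groupby('ageGroup')['Survived'].mean().round(2)
print(survival_by_age)
```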
Formulas: Fitting models using R-style formulas Since version 0.5.0, ``statsmodels`` allows users to fit statistical models using R-style formulas. Internally, ``statsmodels`` uses the [patsy](http://patsy.readthedocs.org/) package to convert formulas and data to the matrices that are used in model fitting. The formula framework is quite powerful; this tutorial only scratches the surface. A full description of the formula language can be found in the ``patsy`` docs: * [Patsy formula language description](http://patsy.readthedocs.org/) Loading modules and functions | import numpy as np # noqa:F401 needed in namespace for patsy
import statsmodels.api as sm | _____no_output_____ | BSD-3-Clause | examples/notebooks/formulas.ipynb | diego-mazon/statsmodels |
Import convention You can import explicitly from statsmodels.formula.api | from statsmodels.formula.api import ols | _____no_output_____ | BSD-3-Clause | examples/notebooks/formulas.ipynb | diego-mazon/statsmodels |
Alternatively, you can just use the `formula` namespace of the main `statsmodels.api`. | sm.formula.ols | _____no_output_____ | BSD-3-Clause | examples/notebooks/formulas.ipynb | diego-mazon/statsmodels |
Or you can use the following convention | import statsmodels.formula.api as smf
These names are just a convenient way to get access to each model's `from_formula` classmethod. See, for instance | sm.OLS.from_formula | _____no_output_____ | BSD-3-Clause | examples/notebooks/formulas.ipynb | diego-mazon/statsmodels |
All of the lower case models accept ``formula`` and ``data`` arguments, whereas upper case ones take ``endog`` and ``exog`` design matrices. ``formula`` accepts a string which describes the model in terms of a ``patsy`` formula. ``data`` takes a [pandas](https://pandas.pydata.org/) data frame or any other data structure that defines a ``__getitem__`` for variable names like a structured array or a dictionary of variables. ``dir(sm.formula)`` will print a list of available models. Formula-compatible models have the following generic call signature: ``(formula, data, subset=None, *args, **kwargs)`` OLS regression using formulasTo begin, we fit the linear model described on the [Getting Started](gettingstarted.html) page. Download the data, subset columns, and list-wise delete to remove missing observations: | dta = sm.datasets.get_rdataset("Guerry", "HistData", cache=True)
df = dta.data[['Lottery', 'Literacy', 'Wealth', 'Region']].dropna()
df.head() | _____no_output_____ | BSD-3-Clause | examples/notebooks/formulas.ipynb | diego-mazon/statsmodels |
Fit the model: | mod = ols(formula='Lottery ~ Literacy + Wealth + Region', data=df)
res = mod.fit()
print(res.summary()) | _____no_output_____ | BSD-3-Clause | examples/notebooks/formulas.ipynb | diego-mazon/statsmodels |
Categorical variablesLooking at the summary printed above, notice that ``patsy`` determined that elements of *Region* were text strings, so it treated *Region* as a categorical variable. `patsy`'s default is also to include an intercept, so we automatically dropped one of the *Region* categories.If *Region* had been an integer variable that we wanted to treat explicitly as categorical, we could have done so by using the ``C()`` operator: | res = ols(formula='Lottery ~ Literacy + Wealth + C(Region)', data=df).fit()
print(res.params) | _____no_output_____ | BSD-3-Clause | examples/notebooks/formulas.ipynb | diego-mazon/statsmodels |
Patsy's mode advanced features for categorical variables are discussed in: [Patsy: Contrast Coding Systems for categorical variables](contrasts.html) OperatorsWe have already seen that "~" separates the left-hand side of the model from the right-hand side, and that "+" adds new columns to the design matrix. Removing variablesThe "-" sign can be used to remove columns/variables. For instance, we can remove the intercept from a model by: | res = ols(formula='Lottery ~ Literacy + Wealth + C(Region) -1 ', data=df).fit()
print(res.params) | _____no_output_____ | BSD-3-Clause | examples/notebooks/formulas.ipynb | diego-mazon/statsmodels |
Multiplicative interactions":" adds a new column to the design matrix with the interaction of the other two columns. "*" will also include the individual columns that were multiplied together: | res1 = ols(formula='Lottery ~ Literacy : Wealth - 1', data=df).fit()
res2 = ols(formula='Lottery ~ Literacy * Wealth - 1', data=df).fit()
print(res1.params, '\n')
print(res2.params) | _____no_output_____ | BSD-3-Clause | examples/notebooks/formulas.ipynb | diego-mazon/statsmodels |
Many other things are possible with operators. Please consult the [patsy docs](https://patsy.readthedocs.org/en/latest/formulas.html) to learn more. FunctionsYou can apply vectorized functions to the variables in your model: | res = smf.ols(formula='Lottery ~ np.log(Literacy)', data=df).fit()
print(res.params) | _____no_output_____ | BSD-3-Clause | examples/notebooks/formulas.ipynb | diego-mazon/statsmodels |
Define a custom function: | def log_plus_1(x):
return np.log(x) + 1.
res = smf.ols(formula='Lottery ~ log_plus_1(Literacy)', data=df).fit()
print(res.params) | _____no_output_____ | BSD-3-Clause | examples/notebooks/formulas.ipynb | diego-mazon/statsmodels |
Any function that is in the calling namespace is available to the formula. Using formulas with models that do not (yet) support themEven if a given `statsmodels` function does not support formulas, you can still use `patsy`'s formula language to produce design matrices. Those matrices can then be fed to the fitting function as `endog` and `exog` arguments. To generate ``numpy`` arrays: | import patsy
f = 'Lottery ~ Literacy * Wealth'
y,X = patsy.dmatrices(f, df, return_type='matrix')
print(y[:5])
print(X[:5]) | _____no_output_____ | BSD-3-Clause | examples/notebooks/formulas.ipynb | diego-mazon/statsmodels |
To generate pandas data frames: | f = 'Lottery ~ Literacy * Wealth'
y,X = patsy.dmatrices(f, df, return_type='dataframe')
print(y[:5])
print(X[:5])
print(sm.OLS(y, X).fit().summary()) | _____no_output_____ | BSD-3-Clause | examples/notebooks/formulas.ipynb | diego-mazon/statsmodels |
CH6EJ3 Extraction of Principal Components. Procedure: we load and/or install the required libraries. | if(!require(devtools)){
install.packages('devtools',dependencies =c("Depends", "Imports"),repos='http://cran.es.r-project.org')
require(devtools)
}
if(!require(ggbiplot)){
install.packages('ggbiplot',dependencies =c("Depends", "Imports"),repos='http://cran.es.r-project.org')
require(ggbiplot)
}
if(!require(scales)){
install.packages('scales',dependencies =c("Depends", "Imports"),repos='http://cran.es.r-project.org')
require(scales)
}
if(!require(grid)){
install.packages('grid',dependencies =c("Depends", "Imports"),repos='http://cran.es.r-project.org')
require(grid)
}
if(!require(plyr)){
install.packages('plyr',dependencies =c("Depends", "Imports"),repos='http://cran.es.r-project.org')
require(plyr)
} | Loading required package: devtools
Warning message:
"package 'devtools' was built under R version 3.3.3"Loading required package: ggbiplot
Warning message:
"package 'ggbiplot' was built under R version 3.3.3"Loading required package: ggplot2
Warning message:
"package 'ggplot2' was built under R version 3.3.3"Loading required package: plyr
Loading required package: scales
Loading required package: grid
| MIT | 05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb | quiquegv/NEOLAND-DS2020-datalabs |
We load the data from a local directory. | Alumnos_usos_sociales <- read.csv("B2.332_Students.csv", comment.char="#")
# R holds the variables we want to work with
R <- Alumnos_usos_sociales[,c(31:34)]
head(R) | _____no_output_____ | MIT | 05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb | quiquegv/NEOLAND-DS2020-datalabs |
Computation of the singular value decomposition and of the values that characterize it. | # Generate the SVD
R.order <- R
R.svd <-svd(R.order[,c(1:3)])
# D, U and V
R.svd$d
head(R.svd$u)
R.svd$v | _____no_output_____ | MIT | 05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb | quiquegv/NEOLAND-DS2020-datalabs |
Computation of the variance accumulated in the first factor | sum(R.svd$d)
var=sum(R.svd$d[1])
var
var/sum(R.svd$d) | _____no_output_____ | MIT | 05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb | quiquegv/NEOLAND-DS2020-datalabs |
Percentage of the variance explained by the computed singular vectors | plot(R.svd$d^2/sum(R.svd$d^2),type="l",xlab="Singular vector",ylab="Varianza explicada") | _____no_output_____ | MIT | 05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb | quiquegv/NEOLAND-DS2020-datalabs |
Percentage of the cumulative explained variance | plot(cumsum(R.svd$d^2/sum(R.svd$d^2)),type="l",xlab="Singular vector",ylab="Varianza explicada acumulada") | _____no_output_____ | MIT | 05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb | quiquegv/NEOLAND-DS2020-datalabs |
We create a plot of the first and second singular vectors, assigning colors: red = does not pass, green = passes. | # First plot all the scores of components 1 and 2
Y <- R.order[,4]
plot(R.svd$u[,1],R.svd$u[,2])
# Assign red to 'No' (does not pass) and green to 'Si' (passes)
points(R.svd$u[Y=="No",1],R.svd$u[Y=="No",2],col="red")
points(R.svd$u[Y=="Si",1],R.svd$u[Y=="Si",2],col="green") | _____no_output_____ | MIT | 05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb | quiquegv/NEOLAND-DS2020-datalabs |
Reconstruction of the data image from the SVD factors | R.recon1=R.svd$u[,1]%*%diag(R.svd$d[1],length(1),length(1))%*%t(R.svd$v[,1])
R.recon2=R.svd$u[,2]%*%diag(R.svd$d[2],length(2),length(2))%*%t(R.svd$v[,2])
R.recon3=R.svd$u[,3]%*%diag(R.svd$d[3],length(3),length(3))%*%t(R.svd$v[,3])
par(mfrow=c(2,2))
image(as.matrix(R.order[,c(1:3)]),main="Matriz Original")
image(R.recon1,main="Matriz Factor 1")
image(R.recon2,main="Matriz Factor 2")
image(R.recon3,main="Matriz Factor 3") | _____no_output_____ | MIT | 05-data-mining/labs/CH6EJ3-Descomposicion-en-valores-singulares.ipynb | quiquegv/NEOLAND-DS2020-datalabs |
Introduction | import ipyscales
# Make a default scale, and list its trait values:
scale = ipyscales.LinearScale()
print(', '.join('%s: %s' % (key, getattr(scale, key)) for key in sorted(scale.keys) if not key.startswith('_'))) | clamp: False, domain: (0.0, 1.0), interpolator: interpolate, range: (0.0, 1.0)
| BSD-3-Clause | examples/introduction.ipynb | vidartf/jupyter-scales |
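The default trait values listed above can be overridden at construction time. The line below is a hedged example that uses only the traits shown in the output (domain, range, clamp); it is added for illustration rather than taken from the package docs:

```python
# A scale mapping the domain [0, 100] onto [0, 1], clamping out-of-range inputs
clamped = ipyscales.LinearScale(domain=(0.0, 100.0), range=(0.0, 1.0), clamp=True)
```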
ToDo
- probably make 10 candidate sentences per letter and pick sentences with a sentence transformer trained on the Next Sentence Prediction task?
- filter out similar sentences based on Levenshtein distance or Sentence-BERT
- remove curse words and person names with Pororo or other tools -> either from the dataset or during the inference process | # https://github.com/lovit/levenshtein_finder | _____no_output_____ | MIT | inference_finetuned_35000-step.ipynb | snoop2head/KoGPT-Joong-2 |
Distributed XGBoost (CPU)
Scaling out on AmlCompute is simple! The code from the previous notebook has been modified and adapted in [src/run.py](src/run.py). In particular, changes include:
- use ``dask_mpi`` to initialize Dask on MPI
- use ``argparse`` to allow for command line argument inputs
- use ``mlflow`` logging
The [environment.yml](environment.yml) contains the conda environment specification.
Get Workspace | from azureml.core import Workspace
ws = Workspace.from_config()
ws | _____no_output_____ | MIT | python-sdk/experimental/using-xgboost/2.distributed-cpu.ipynb | msftcoderdjw/azureml-examples |
Distributed Remotely
Simply use ``MpiConfiguration`` with the desired node count. **Important**: see the [``dask-mpi`` documentation](http://mpi.dask.org/en/latest/) for details on how the Dask workers and scheduler are started. By default with the Azure ML MPI configuration, two nodes are used for the scheduler and script process, so you should add two additional nodes to reach the desired number of worker nodes. Additionally, we need to pass in the number of vCPUs per node, which will be used to initialize the same number of threads via ``dask_mpi.initialize(nthreads=args.cpus_per_node)``. | nodes = 8 + 2 # number of workers + 2 needed for scheduler and script process
cpus_per_node = 4 # number of vCPUs per node; to initialize one thread per CPU
print(f"Nodes: {nodes}\nCPUs/node: {cpus_per_node}")
arguments = [
"--cpus_per_node",
cpus_per_node,
"--num_boost_round",
100,
"--learning_rate",
0.2,
"--gamma",
0,
]
arguments
from azureml.core import ScriptRunConfig, Experiment, Environment
from azureml.core.runconfig import MpiConfiguration
env = Environment.from_conda_specification("xgboost-cpu-tutorial", "environment.yml")
mpi_config = MpiConfiguration(node_count=nodes)
src = ScriptRunConfig(
source_directory="src",
script="run.py",
arguments=arguments,
compute_target="cpu-cluster",
environment=env,
distributed_job_config=mpi_config,
max_run_duration_seconds=60 * 60,
)
run = Experiment(ws, "xgboost-cpu-tutorial").submit(src)
run | _____no_output_____ | MIT | python-sdk/experimental/using-xgboost/2.distributed-cpu.ipynb | msftcoderdjw/azureml-examples |
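The training script itself is not shown in this excerpt. The following is a hedged sketch, an assumption rather than the actual contents of src/run.py, of how a script can bootstrap Dask on MPI as described in the Distributed Remotely section above:

```python
# Hypothetical sketch of the Dask-on-MPI bootstrap inside a training script
import argparse
from dask_mpi import initialize
from dask.distributed import Client

parser = argparse.ArgumentParser()
parser.add_argument("--cpus_per_node", type=int, default=4)
args, _ = parser.parse_known_args()

initialize(nthreads=args.cpus_per_node)  # starts the scheduler and workers over the MPI ranks
client = Client()                        # connects to the scheduler started above
print(client)
```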
View Widget
Optionally, view the output in the run widget. | from azureml.widgets import RunDetails
RunDetails(run).show() | _____no_output_____ | MIT | python-sdk/experimental/using-xgboost/2.distributed-cpu.ipynb | msftcoderdjw/azureml-examples |
For testing, wait for the run to complete. | run.wait_for_completion(show_output=True) | _____no_output_____ | MIT | python-sdk/experimental/using-xgboost/2.distributed-cpu.ipynb | msftcoderdjw/azureml-examples |
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/NER_BTC.ipynb) **Detect Entities in Twitter texts** 1. Colab Setup | !wget http://setup.johnsnowlabs.com/colab.sh -O - | bash
!pip install --ignore-installed spark-nlp-display
import pandas as pd
import numpy as np
import json
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline | _____no_output_____ | Apache-2.0 | tutorials/streamlit_notebooks/NER_BTC.ipynb | Laurasgmt/spark-nlp-workshop |
2. Start Spark Session | spark = sparknlp.start() | _____no_output_____ | Apache-2.0 | tutorials/streamlit_notebooks/NER_BTC.ipynb | Laurasgmt/spark-nlp-workshop |
3. Some sample examples | text_list = test_sentences = ["""Wengers big mistakes is not being ruthless enough with bad players.""",
"""Aguero goal . From being someone previously so reliable , he 's been terrible this year .""",
"""Paul Scholes approached Alex Ferguson about making a comeback . Ferguson clearly only too happy to accommodate him .""",
"""Wikipedia today , as soon as you load the website , hit ESC to prevent the 'blackout ' from loading.""",
"""David Attenborough shows us a duck billed platypus.""",
"""London GET UPDATES FROM Peter Hotez""",
"""Pentagram's Dominic Lippa is working on a new identity for University of Arts London """] | _____no_output_____ | Apache-2.0 | tutorials/streamlit_notebooks/NER_BTC.ipynb | Laurasgmt/spark-nlp-workshop |
4. Define Spark NLP pipeline | document = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
tokenizer = Tokenizer()\
.setInputCols("document")\
.setOutputCol("token")
tokenClassifier = BertForTokenClassification.pretrained("bert_token_classifier_ner_btc", "en")\
.setInputCols("token", "document")\
.setOutputCol("ner")\
.setCaseSensitive(True)
ner_converter = NerConverter()\
.setInputCols(["document","token","ner"])\
.setOutputCol("ner_chunk")\
pipeline = Pipeline(stages=[document, tokenizer, tokenClassifier, ner_converter])
| bert_token_classifier_ner_btc download started this may take some time.
Approximate size to download 385.3 MB
[OK!]
| Apache-2.0 | tutorials/streamlit_notebooks/NER_BTC.ipynb | Laurasgmt/spark-nlp-workshop |
5. Run the pipeline | model = pipeline.fit(spark.createDataFrame(pd.DataFrame({'text': ['']})))
result = model.transform(spark.createDataFrame(pd.DataFrame({'text': text_list})))
| _____no_output_____ | Apache-2.0 | tutorials/streamlit_notebooks/NER_BTC.ipynb | Laurasgmt/spark-nlp-workshop |
6. Visualize results |
result.select(F.explode(F.arrays_zip('document.result', 'ner_chunk.result',"ner_chunk.metadata")).alias("cols")) \
.select(
F.expr("cols['1']").alias("chunk"),
F.expr("cols['2'].entity").alias('result')).show(truncate=False)
from sparknlp_display import NerVisualizer
for i in range(len(text_list)):
NerVisualizer().display(
result = result.collect()[i],
label_col = 'ner_chunk',
document_col = 'document'
)
| _____no_output_____ | Apache-2.0 | tutorials/streamlit_notebooks/NER_BTC.ipynb | Laurasgmt/spark-nlp-workshop |
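For quick single-text inference without building a Spark DataFrame, Spark NLP also provides a LightPipeline. The cell below is a hedged sketch added for illustration and is not part of the original notebook:

```python
from sparknlp.base import LightPipeline

# Annotate a single string directly with the fitted pipeline model
light_model = LightPipeline(model)
annotations = light_model.fullAnnotate("Paul Scholes approached Alex Ferguson about making a comeback.")[0]
print([(chunk.result, chunk.metadata['entity']) for chunk in annotations['ner_chunk']])
```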
Population Segmentation with SageMaker
In this notebook, you'll employ two unsupervised learning algorithms to do **population segmentation**. Population segmentation aims to find natural groupings in population data that reveal some feature-level similarities between different regions in the US. Using **principal component analysis** (PCA) you will reduce the dimensionality of the original census data. Then, you'll use **k-means clustering** to assign each US county to a particular cluster based on where a county lies in component space. How each cluster is arranged in component space can tell you which US counties are most similar and what demographic traits define that similarity; this information is most often used to inform targeted marketing campaigns that want to appeal to a specific group of people. This cluster information is also useful for learning more about a population by revealing patterns between regions that you otherwise may not have noticed.
US Census Data
You'll be using data collected by the [US Census](https://en.wikipedia.org/wiki/United_States_Census), which aims to count the US population, recording demographic traits about labor, age, population, and so on, for each county in the US. The bulk of this notebook was taken from an existing SageMaker example notebook and [blog post](https://aws.amazon.com/blogs/machine-learning/analyze-us-census-data-for-population-segmentation-using-amazon-sagemaker/), and I've broken it down further into demonstrations and exercises for you to complete.
Machine Learning Workflow
To implement population segmentation, you'll go through a number of steps:
* Data loading and exploration
* Data cleaning and pre-processing
* Dimensionality reduction with PCA
* Feature engineering and data transformation
* Clustering transformed data with k-means
* Extracting trained model attributes and visualizing k clusters
These tasks make up a complete machine learning workflow from data loading and cleaning to model deployment. Each exercise is designed to give you practice with part of the machine learning workflow, and to demonstrate how to use SageMaker tools, such as built-in data management with S3 and built-in algorithms.
--- First, import the relevant libraries into this SageMaker notebook. | # data managing and display libs
import pandas as pd
import numpy as np
import os
import io
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
# sagemaker libraries
import boto3
import sagemaker | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Loading the Data from Amazon S3
This particular dataset is already in an Amazon S3 bucket; you can load the data by pointing to this bucket and getting a data file by name. > You can interact with S3 using a ``boto3`` client. | # boto3 client to get S3 data
s3_client = boto3.client('s3')
bucket_name='aws-ml-blog-sagemaker-census-segmentation' | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Take a look at the contents of this bucket; get a list of objects that are contained within the bucket and print out the names of the objects. You should see that there is one file, 'Census_Data_for_SageMaker.csv'. | # get a list of objects in the bucket
obj_list=s3_client.list_objects(Bucket=bucket_name)
# print object(s)in S3 bucket
files=[]
for contents in obj_list['Contents']:
files.append(contents['Key'])
print(files)
# there is one file --> one key
file_name=files[0]
print(file_name) | Census_Data_for_SageMaker.csv
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Retrieve the data file from the bucket with a call to `client.get_object()`. | # get an S3 object by passing in the bucket and file name
data_object = s3_client.get_object(Bucket=bucket_name, Key=file_name)
# what info does the object contain?
display(data_object)
# information is in the "Body" of the object
data_body = data_object["Body"].read()
print('Data type: ', type(data_body)) | Data type: <class 'bytes'>
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
This is a `bytes` datatype, which you can read in using [io.BytesIO(file)](https://docs.python.org/3/library/io.html#binary-i-o). | # read in bytes data
data_stream = io.BytesIO(data_body)
# create a dataframe
counties_df = pd.read_csv(data_stream, header=0, delimiter=",")
counties_df.head() | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Exploratory Data Analysis (EDA)
Now that you've loaded in the data, it is time to clean it up, explore it, and pre-process it. Data exploration is one of the most important parts of the machine learning workflow because it allows you to notice any initial patterns in data distribution and features that may inform how you proceed with modeling and clustering the data.
EXERCISE: Explore data & drop any incomplete rows of data
When you first explore the data, it is good to know what you are working with. How many data points and features are you starting with, and what kind of information can you get at a first glance? In this notebook, you're required to use complete data points to train a model. So, your first exercise will be to investigate the shape of this data and implement a simple data cleaning step: dropping any incomplete rows of data. You should be able to answer the **question**: How many data points and features are in the original, provided dataset? (And how many points are left after dropping any incomplete rows?) | counties_df.shape
# print out stats about data
counties_df.shape
# drop any incomplete rows of data, and create a new df
clean_counties_df = counties_df.dropna()
clean_counties_df.shape | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
EXERCISE: Create a new DataFrame, indexed by 'State-County'
Eventually, you'll want to feed these features into a machine learning model. Machine learning models need numerical data to learn from and not categorical data like strings (State, County). So, you'll reformat this data such that it is indexed by region and you'll also drop any features that are not useful for clustering.
To complete this task, perform the following steps, using your *clean* DataFrame, generated above:
1. Combine the descriptive columns, 'State' and 'County', into one new categorical column, 'State-County'.
2. Index the data by this unique State-County name.
3. After doing this, drop the old State and County columns and the CensusId column, which does not give us any meaningful demographic information.
After completing this task, you should have a DataFrame with 'State-County' as the index, and 34 columns of numerical data for each county. You should get a resultant DataFrame that looks like the following (truncated for display purposes):
```
                 TotalPop  Men    Women  Hispanic ...
Alabama-Autauga  55221     26745  28476  2.6      ...
Alabama-Baldwin  195121    95314  99807  4.5      ...
Alabama-Barbour  26932     14497  12435  4.6      ...
...
``` | # index data by 'State-County'
clean_counties_df.index= clean_counties_df.State + '-' + clean_counties_df.County
clean_counties_df.head(1)
# drop the old State and County columns, and the CensusId column
# clean df should be modified or created anew
columns_to_drop = ['State', 'County','CensusId']
clean_counties_df = clean_counties_df.drop(columns = columns_to_drop)
clean_counties_df.head(1) | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Now, what features do you have to work with? | # features
features_list = clean_counties_df.columns.values
print('Features: \n', features_list) | Features:
['TotalPop' 'Men' 'Women' 'Hispanic' 'White' 'Black' 'Native' 'Asian'
'Pacific' 'Citizen' 'Income' 'IncomeErr' 'IncomePerCap' 'IncomePerCapErr'
'Poverty' 'ChildPoverty' 'Professional' 'Service' 'Office' 'Construction'
'Production' 'Drive' 'Carpool' 'Transit' 'Walk' 'OtherTransp'
'WorkAtHome' 'MeanCommute' 'Employed' 'PrivateWork' 'PublicWork'
'SelfEmployed' 'FamilyWork' 'Unemployment']
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Visualizing the Data
In general, you can see that features come in a variety of ranges, mostly percentages from 0-100, and counts that are integer values in a large range. Let's visualize the data in some of our feature columns and see what the distribution, over all counties, looks like.
The below cell displays **histograms**, which show the distribution of data points over discrete feature ranges. The x-axis represents the different bins; each bin is defined by a specific range of values that a feature can take, say between the values 0-5 and 5-10, and so on. The y-axis is the frequency of occurrence or the number of county data points that fall into each bin. I find it helpful to use the y-axis values for relative comparisons between different features.
Below, I'm plotting a histogram comparing methods of commuting to work over all of the counties. I just copied these feature names from the list of column names, printed above. I also know that all of these features are represented as percentages (%) in the original data, so the x-axes of these plots will be comparable. | # transportation (to work)
transport_list = ['Drive', 'Carpool', 'Transit', 'Walk', 'OtherTransp']
n_bins = 30 # can decrease to get a wider bin (or vice versa)
for column_name in transport_list:
ax=plt.subplots(figsize=(6,3))
# get data by column_name and display a histogram
ax = plt.hist(clean_counties_df[column_name], bins=n_bins)
title="Histogram of " + column_name
plt.title(title, fontsize=12)
plt.show() | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
EXERCISE: Create histograms of your ownCommute transportation method is just one category of features. If you take a look at the 34 features, you can see data on profession, race, income, and more. Display a set of histograms that interest you! | # create a list of features that you want to compare or examine
my_list = ['Hispanic', 'White', 'Black', 'Native', 'Asian', 'Pacific']
n_bins = 50 # number of histogram bins
# histogram creation code is similar to above
for column_name in my_list:
ax=plt.subplots(figsize=(6,3))
# get data by column_name and display a histogram
ax = plt.hist(clean_counties_df[column_name], bins=n_bins)
title="Histogram of " + column_name
plt.title(title, fontsize=12)
plt.show() | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
EXERCISE: Normalize the dataYou need to standardize the scale of the numerical columns in order to consistently compare the values of different features. You can use a [MinMaxScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) to transform the numerical values so that they all fall between 0 and 1. | # scale numerical features into a normalized range, 0-1
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
# store them in this dataframe
counties_scaled = pd.DataFrame(scaler.fit_transform(clean_counties_df.astype(float)))
counties_scaled.columns=clean_counties_df.columns
counties_scaled.index=clean_counties_df.index
counties_scaled.head() | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
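To confirm the scaling behaved as expected, you can peek at the per-column minimum and maximum; every feature should now span the range 0–1.

```python
# optional check: all scaled features should fall between 0 and 1
print(counties_scaled.describe().loc[['min', 'max']])
```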
--- Data ModelingNow, the data is ready to be fed into a machine learning model!Each data point has 34 features, which means the data is 34-dimensional. Clustering algorithms rely on finding clusters in n-dimensional feature space. For higher dimensions, an algorithm like k-means has a difficult time figuring out which features are most important, and the result is, often, noisier clusters.Some dimensions are not as important as others. For example, if every county in our dataset has the same rate of unemployment, then that particular feature doesn’t give us any distinguishing information; it will not help to separate counties into different groups because its value doesn’t *vary* between counties.> Instead, we really want to find the features that help to separate and group data. We want to find features that cause the **most variance** in the dataset!So, before I cluster this data, I’ll want to take a dimensionality reduction step. My aim will be to form a smaller set of features that will better help to separate our data. The technique I’ll use is called PCA or **principal component analysis** Dimensionality ReductionPCA attempts to reduce the number of features within a dataset while retaining the “principal components”, which are defined as *weighted*, linear combinations of existing features that are designed to be linearly independent and account for the largest possible variability in the data! You can think of this method as taking many features and combining similar or redundant features together to form a new, smaller feature set.We can reduce dimensionality with the built-in SageMaker model for PCA. Roles and Buckets> To create a model, you'll first need to specify an IAM role, and to save the model attributes, you'll need to store them in an S3 bucket.The `get_execution_role` function retrieves the IAM role you created at the time you created your notebook instance. Roles are essentially used to manage permissions and you can read more about that [in this documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). For now, know that we have a FullAccess notebook, which allowed us to access and download the census data stored in S3.You must specify a bucket name for an S3 bucket in your account where you want SageMaker model parameters to be stored. Note that the bucket must be in the same region as this notebook. You can get a default S3 bucket, which automatically creates a bucket for you and in your region, by storing the current SageMaker session and calling `session.default_bucket()`. | from sagemaker import get_execution_role
session = sagemaker.Session() # store the current SageMaker session
# get IAM role
role = get_execution_role()
print(role)
# get default bucket
bucket_name = session.default_bucket()
print(bucket_name)
print() | sagemaker-eu-central-1-730357687813
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Define a PCA ModelTo create a PCA model, I'll use the built-in SageMaker resource. A SageMaker estimator requires a number of parameters to be specified; these define the type of training instance to use and the model hyperparameters. A PCA model requires the following constructor arguments:* role: The IAM role, which was specified, above.* train_instance_count: The number of training instances (typically, 1).* train_instance_type: The type of SageMaker instance for training.* num_components: An integer that defines the number of PCA components to produce.* sagemaker_session: The session used to train on SageMaker.Documentation on the PCA model can be found [here](http://sagemaker.readthedocs.io/en/latest/pca.html).Below, I first specify where to save the model training data, the `output_path`. | # define location to store model artifacts
prefix = 'counties'
output_path='s3://{}/{}/'.format(bucket_name, prefix)
print('Training artifacts will be uploaded to: {}'.format(output_path))
# define a PCA model
from sagemaker import PCA
# this is current features - 1
# you'll select only a portion of these to use, later
N_COMPONENTS=33
pca_SM = PCA(role=role,
train_instance_count=1,
train_instance_type='ml.c4.xlarge',
output_path=output_path, # specified, above
num_components=N_COMPONENTS,
sagemaker_session=session)
| _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
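Note: the `train_instance_count`/`train_instance_type` argument names above follow SageMaker Python SDK v1. If your notebook instance runs SDK v2, those arguments were renamed; a rough equivalent (verify against your installed `sagemaker` version) is sketched below.

```python
# rough SDK v2 equivalent -- only the instance argument names change
pca_SM = PCA(role=role,
             instance_count=1,              # was train_instance_count
             instance_type='ml.c4.xlarge',  # was train_instance_type
             output_path=output_path,
             num_components=N_COMPONENTS,
             sagemaker_session=session)
```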
Convert data into a RecordSet formatNext, prepare the data for a built-in model by converting the DataFrame to a numpy array of float values.The *record_set* function in the SageMaker PCA model converts a numpy array into a **RecordSet** format that is the required format for the training input data. This is a requirement for _all_ of SageMaker's built-in models. The use of this data type is one of the reasons that allows training of models within Amazon SageMaker to perform faster, especially for large datasets. | # convert df to np array
train_data_np = counties_scaled.values.astype('float32')
# convert to RecordSet format
formatted_train_data = pca_SM.record_set(train_data_np) | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
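Before training, it's worth confirming that the array has the shape and dtype the built-in algorithm expects.

```python
# optional: check the training array before calling fit()
print(train_data_np.shape)   # expected: (3218, 34)
print(train_data_np.dtype)   # expected: float32
```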
Train the modelCall the fit function on the PCA model, passing in our formatted, training data. This spins up a training instance to perform the training job.Note that it takes the longest to launch the specified training instance; the fitting itself doesn't take much time. | %%time
# train the PCA model on the formatted data
pca_SM.fit(formatted_train_data) | 2020-05-23 05:40:14 Starting - Starting the training job...
2020-05-23 05:40:16 Starting - Launching requested ML instances.........
2020-05-23 05:41:46 Starting - Preparing the instances for training......
2020-05-23 05:43:02 Downloading - Downloading input data
2020-05-23 05:43:02 Training - Downloading the training image..[34mDocker entrypoint called with argument(s): train[0m
[34mRunning default environment configuration script[0m
[34m[05/23/2020 05:43:18 INFO 140677512759104] Reading default configuration from /opt/amazon/lib/python2.7/site-packages/algorithm/resources/default-conf.json: {u'_num_gpus': u'auto', u'_log_level': u'info', u'subtract_mean': u'true', u'force_dense': u'true', u'epochs': 1, u'algorithm_mode': u'regular', u'extra_components': u'-1', u'_kvstore': u'dist_sync', u'_num_kv_servers': u'auto'}[0m
[34m[05/23/2020 05:43:18 INFO 140677512759104] Reading provided configuration from /opt/ml/input/config/hyperparameters.json: {u'feature_dim': u'34', u'mini_batch_size': u'500', u'num_components': u'33'}[0m
[34m[05/23/2020 05:43:18 INFO 140677512759104] Final configuration: {u'num_components': u'33', u'_num_gpus': u'auto', u'_log_level': u'info', u'subtract_mean': u'true', u'force_dense': u'true', u'epochs': 1, u'algorithm_mode': u'regular', u'feature_dim': u'34', u'extra_components': u'-1', u'_kvstore': u'dist_sync', u'_num_kv_servers': u'auto', u'mini_batch_size': u'500'}[0m
[34m[05/23/2020 05:43:18 WARNING 140677512759104] Loggers have already been setup.[0m
[34m[05/23/2020 05:43:20 INFO 140677512759104] Launching parameter server for role scheduler[0m
[34m[05/23/2020 05:43:20 INFO 140677512759104] {'ECS_CONTAINER_METADATA_URI': 'http://169.254.170.2/v3/76e3ea69-dccf-4e9b-aa4d-467320032ebb', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION': '2', 'PATH': '/opt/amazon/bin:/usr/local/nvidia/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/amazon/bin:/opt/amazon/bin', 'SAGEMAKER_HTTP_PORT': '8080', 'HOME': '/root', 'PYTHONUNBUFFERED': 'TRUE', 'CANONICAL_ENVROOT': '/opt/amazon', 'LD_LIBRARY_PATH': '/opt/amazon/lib/python2.7/site-packages/cv2/../../../../lib:/usr/local/nvidia/lib64:/opt/amazon/lib', 'LANG': 'en_US.utf8', 'DMLC_INTERFACE': 'eth0', 'SHLVL': '1', 'AWS_REGION': 'eu-central-1', 'NVIDIA_VISIBLE_DEVICES': 'void', 'TRAINING_JOB_NAME': 'pca-2020-05-23-05-40-13-839', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION': 'cpp', 'ENVROOT': '/opt/amazon', 'SAGEMAKER_DATA_PATH': '/opt/ml', 'NVIDIA_DRIVER_CAPABILITIES': 'compute,utility', 'NVIDIA_REQUIRE_CUDA': 'cuda>=9.0', 'OMP_NUM_THREADS': '2', 'HOSTNAME': 'ip-10-0-133-43.eu-central-1.compute.internal', 'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI': '/v2/credentials/d6ef282a-7c64-41e9-9d5a-34e911f2beb7', 'PWD': '/', 'TRAINING_JOB_ARN': 'arn:aws:sagemaker:eu-central-1:730357687813:training-job/pca-2020-05-23-05-40-13-839', 'AWS_EXECUTION_ENV': 'AWS_ECS_EC2'}[0m
[34m[05/23/2020 05:43:20 INFO 140677512759104] envs={'ECS_CONTAINER_METADATA_URI': 'http://169.254.170.2/v3/76e3ea69-dccf-4e9b-aa4d-467320032ebb', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION': '2', 'DMLC_NUM_WORKER': '1', 'DMLC_PS_ROOT_PORT': '9000', 'PATH': '/opt/amazon/bin:/usr/local/nvidia/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/amazon/bin:/opt/amazon/bin', 'SAGEMAKER_HTTP_PORT': '8080', 'HOME': '/root', 'PYTHONUNBUFFERED': 'TRUE', 'CANONICAL_ENVROOT': '/opt/amazon', 'LD_LIBRARY_PATH': '/opt/amazon/lib/python2.7/site-packages/cv2/../../../../lib:/usr/local/nvidia/lib64:/opt/amazon/lib', 'LANG': 'en_US.utf8', 'DMLC_INTERFACE': 'eth0', 'SHLVL': '1', 'DMLC_PS_ROOT_URI': '10.0.133.43', 'AWS_REGION': 'eu-central-1', 'NVIDIA_VISIBLE_DEVICES': 'void', 'TRAINING_JOB_NAME': 'pca-2020-05-23-05-40-13-839', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION': 'cpp', 'ENVROOT': '/opt/amazon', 'SAGEMAKER_DATA_PATH': '/opt/ml', 'NVIDIA_DRIVER_CAPABILITIES': 'compute,utility', 'NVIDIA_REQUIRE_CUDA': 'cuda>=9.0', 'OMP_NUM_THREADS': '2', 'HOSTNAME': 'ip-10-0-133-43.eu-central-1.compute.internal', 'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI': '/v2/credentials/d6ef282a-7c64-41e9-9d5a-34e911f2beb7', 'DMLC_ROLE': 'scheduler', 'PWD': '/', 'DMLC_NUM_SERVER': '1', 'TRAINING_JOB_ARN': 'arn:aws:sagemaker:eu-central-1:730357687813:training-job/pca-2020-05-23-05-40-13-839', 'AWS_EXECUTION_ENV': 'AWS_ECS_EC2'}[0m
[34m[05/23/2020 05:43:20 INFO 140677512759104] Launching parameter server for role server[0m
[34m[05/23/2020 05:43:20 INFO 140677512759104] {'ECS_CONTAINER_METADATA_URI': 'http://169.254.170.2/v3/76e3ea69-dccf-4e9b-aa4d-467320032ebb', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION': '2', 'PATH': '/opt/amazon/bin:/usr/local/nvidia/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/amazon/bin:/opt/amazon/bin', 'SAGEMAKER_HTTP_PORT': '8080', 'HOME': '/root', 'PYTHONUNBUFFERED': 'TRUE', 'CANONICAL_ENVROOT': '/opt/amazon', 'LD_LIBRARY_PATH': '/opt/amazon/lib/python2.7/site-packages/cv2/../../../../lib:/usr/local/nvidia/lib64:/opt/amazon/lib', 'LANG': 'en_US.utf8', 'DMLC_INTERFACE': 'eth0', 'SHLVL': '1', 'AWS_REGION': 'eu-central-1', 'NVIDIA_VISIBLE_DEVICES': 'void', 'TRAINING_JOB_NAME': 'pca-2020-05-23-05-40-13-839', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION': 'cpp', 'ENVROOT': '/opt/amazon', 'SAGEMAKER_DATA_PATH': '/opt/ml', 'NVIDIA_DRIVER_CAPABILITIES': 'compute,utility', 'NVIDIA_REQUIRE_CUDA': 'cuda>=9.0', 'OMP_NUM_THREADS': '2', 'HOSTNAME': 'ip-10-0-133-43.eu-central-1.compute.internal', 'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI': '/v2/credentials/d6ef282a-7c64-41e9-9d5a-34e911f2beb7', 'PWD': '/', 'TRAINING_JOB_ARN': 'arn:aws:sagemaker:eu-central-1:730357687813:training-job/pca-2020-05-23-05-40-13-839', 'AWS_EXECUTION_ENV': 'AWS_ECS_EC2'}[0m
[34m[05/23/2020 05:43:20 INFO 140677512759104] envs={'ECS_CONTAINER_METADATA_URI': 'http://169.254.170.2/v3/76e3ea69-dccf-4e9b-aa4d-467320032ebb', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION': '2', 'DMLC_NUM_WORKER': '1', 'DMLC_PS_ROOT_PORT': '9000', 'PATH': '/opt/amazon/bin:/usr/local/nvidia/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/amazon/bin:/opt/amazon/bin', 'SAGEMAKER_HTTP_PORT': '8080', 'HOME': '/root', 'PYTHONUNBUFFERED': 'TRUE', 'CANONICAL_ENVROOT': '/opt/amazon', 'LD_LIBRARY_PATH': '/opt/amazon/lib/python2.7/site-packages/cv2/../../../../lib:/usr/local/nvidia/lib64:/opt/amazon/lib', 'LANG': 'en_US.utf8', 'DMLC_INTERFACE': 'eth0', 'SHLVL': '1', 'DMLC_PS_ROOT_URI': '10.0.133.43', 'AWS_REGION': 'eu-central-1', 'NVIDIA_VISIBLE_DEVICES': 'void', 'TRAINING_JOB_NAME': 'pca-2020-05-23-05-40-13-839', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION': 'cpp', 'ENVROOT': '/opt/amazon', 'SAGEMAKER_DATA_PATH': '/opt/ml', 'NVIDIA_DRIVER_CAPABILITIES': 'compute,utility', 'NVIDIA_REQUIRE_CUDA': 'cuda>=9.0', 'OMP_NUM_THREADS': '2', 'HOSTNAME': 'ip-10-0-133-43.eu-central-1.compute.internal', 'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI': '/v2/credentials/d6ef282a-7c64-41e9-9d5a-34e911f2beb7', 'DMLC_ROLE': 'server', 'PWD': '/', 'DMLC_NUM_SERVER': '1', 'TRAINING_JOB_ARN': 'arn:aws:sagemaker:eu-central-1:730357687813:training-job/pca-2020-05-23-05-40-13-839', 'AWS_EXECUTION_ENV': 'AWS_ECS_EC2'}[0m
[34m[05/23/2020 05:43:20 INFO 140677512759104] Environment: {'ECS_CONTAINER_METADATA_URI': 'http://169.254.170.2/v3/76e3ea69-dccf-4e9b-aa4d-467320032ebb', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION': '2', 'DMLC_PS_ROOT_PORT': '9000', 'DMLC_NUM_WORKER': '1', 'SAGEMAKER_HTTP_PORT': '8080', 'PATH': '/opt/amazon/bin:/usr/local/nvidia/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/amazon/bin:/opt/amazon/bin', 'PYTHONUNBUFFERED': 'TRUE', 'CANONICAL_ENVROOT': '/opt/amazon', 'LD_LIBRARY_PATH': '/opt/amazon/lib/python2.7/site-packages/cv2/../../../../lib:/usr/local/nvidia/lib64:/opt/amazon/lib', 'LANG': 'en_US.utf8', 'DMLC_INTERFACE': 'eth0', 'SHLVL': '1', 'DMLC_PS_ROOT_URI': '10.0.133.43', 'AWS_REGION': 'eu-central-1', 'NVIDIA_VISIBLE_DEVICES': 'void', 'TRAINING_JOB_NAME': 'pca-2020-05-23-05-40-13-839', 'HOME': '/root', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION': 'cpp', 'ENVROOT': '/opt/amazon', 'SAGEMAKER_DATA_PATH': '/opt/ml', 'NVIDIA_DRIVER_CAPABILITIES': 'compute,utility', 'NVIDIA_REQUIRE_CUDA': 'cuda>=9.0', 'OMP_NUM_THREADS': '2', 'HOSTNAME': 'ip-10-0-133-43.eu-central-1.compute.internal', 'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI': '/v2/credentials/d6ef282a-7c64-41e9-9d5a-34e911f2beb7', 'DMLC_ROLE': 'worker', 'PWD': '/', 'DMLC_NUM_SERVER': '1', 'TRAINING_JOB_ARN': 'arn:aws:sagemaker:eu-central-1:730357687813:training-job/pca-2020-05-23-05-40-13-839', 'AWS_EXECUTION_ENV': 'AWS_ECS_EC2'}[0m
[34mProcess 60 is a shell:scheduler.[0m
[34mProcess 69 is a shell:server.[0m
[34mProcess 1 is a worker.[0m
[34m[05/23/2020 05:43:20 INFO 140677512759104] Using default worker.[0m
[34m[05/23/2020 05:43:20 INFO 140677512759104] Loaded iterator creator application/x-recordio-protobuf for content type ('application/x-recordio-protobuf', '1.0')[0m
[34m[05/23/2020 05:43:20 INFO 140677512759104] Loaded iterator creator application/x-labeled-vector-protobuf for content type ('application/x-labeled-vector-protobuf', '1.0')[0m
[34m[05/23/2020 05:43:20 INFO 140677512759104] Loaded iterator creator protobuf for content type ('protobuf', '1.0')[0m
[34m[05/23/2020 05:43:20 INFO 140677512759104] Create Store: dist_sync[0m
[34m[05/23/2020 05:43:21 INFO 140677512759104] nvidia-smi took: 0.0252349376678 secs to identify 0 gpus[0m
[34m[05/23/2020 05:43:21 INFO 140677512759104] Number of GPUs being used: 0[0m
[34m[05/23/2020 05:43:21 INFO 140677512759104] The default executor is <PCAExecutor on cpu(0)>.[0m
[34m[05/23/2020 05:43:21 INFO 140677512759104] 34 feature(s) found in 'data'.[0m
[34m[05/23/2020 05:43:21 INFO 140677512759104] <PCAExecutor on cpu(0)> is assigned to batch slice from 0 to 499.[0m
[34m#metrics {"Metrics": {"initialize.time": {"count": 1, "max": 742.8948879241943, "sum": 742.8948879241943, "min": 742.8948879241943}}, "EndTime": 1590212601.11761, "Dimensions": {"Host": "algo-1", "Operation": "training", "Algorithm": "PCA"}, "StartTime": 1590212600.365208}
[0m
[34m#metrics {"Metrics": {"Max Batches Seen Between Resets": {"count": 1, "max": 0, "sum": 0.0, "min": 0}, "Number of Batches Since Last Reset": {"count": 1, "max": 0, "sum": 0.0, "min": 0}, "Number of Records Since Last Reset": {"count": 1, "max": 0, "sum": 0.0, "min": 0}, "Total Batches Seen": {"count": 1, "max": 0, "sum": 0.0, "min": 0}, "Total Records Seen": {"count": 1, "max": 0, "sum": 0.0, "min": 0}, "Max Records Seen Between Resets": {"count": 1, "max": 0, "sum": 0.0, "min": 0}, "Reset Count": {"count": 1, "max": 0, "sum": 0.0, "min": 0}}, "EndTime": 1590212601.117854, "Dimensions": {"Host": "algo-1", "Meta": "init_train_data_iter", "Operation": "training", "Algorithm": "PCA"}, "StartTime": 1590212601.117795}
[0m
[34m[2020-05-23 05:43:21.118] [tensorio] [info] epoch_stats={"data_pipeline": "/opt/ml/input/data/train", "epoch": 0, "duration": 752, "num_examples": 1, "num_bytes": 82000}[0m
[34m[2020-05-23 05:43:21.159] [tensorio] [info] epoch_stats={"data_pipeline": "/opt/ml/input/data/train", "epoch": 1, "duration": 33, "num_examples": 7, "num_bytes": 527752}[0m
[34m#metrics {"Metrics": {"epochs": {"count": 1, "max": 1, "sum": 1.0, "min": 1}, "update.time": {"count": 1, "max": 41.612863540649414, "sum": 41.612863540649414, "min": 41.612863540649414}}, "EndTime": 1590212601.159863, "Dimensions": {"Host": "algo-1", "Operation": "training", "Algorithm": "PCA"}, "StartTime": 1590212601.117719}
[0m
[34m[05/23/2020 05:43:21 INFO 140677512759104] #progress_metric: host=algo-1, completed 100 % of epochs[0m
[34m#metrics {"Metrics": {"Max Batches Seen Between Resets": {"count": 1, "max": 7, "sum": 7.0, "min": 7}, "Number of Batches Since Last Reset": {"count": 1, "max": 7, "sum": 7.0, "min": 7}, "Number of Records Since Last Reset": {"count": 1, "max": 3218, "sum": 3218.0, "min": 3218}, "Total Batches Seen": {"count": 1, "max": 7, "sum": 7.0, "min": 7}, "Total Records Seen": {"count": 1, "max": 3218, "sum": 3218.0, "min": 3218}, "Max Records Seen Between Resets": {"count": 1, "max": 3218, "sum": 3218.0, "min": 3218}, "Reset Count": {"count": 1, "max": 1, "sum": 1.0, "min": 1}}, "EndTime": 1590212601.16041, "Dimensions": {"Host": "algo-1", "Meta": "training_data_iter", "Operation": "training", "Algorithm": "PCA", "epoch": 0}, "StartTime": 1590212601.118201}
[0m
[34m[05/23/2020 05:43:21 INFO 140677512759104] #throughput_metric: host=algo-1, train throughput=75987.4469923 records/second[0m
[34m#metrics {"Metrics": {"finalize.time": {"count": 1, "max": 24.407148361206055, "sum": 24.407148361206055, "min": 24.407148361206055}}, "EndTime": 1590212601.185214, "Dimensions": {"Host": "algo-1", "Operation": "training", "Algorithm": "PCA"}, "StartTime": 1590212601.160081}
[0m
[34m[05/23/2020 05:43:21 INFO 140677512759104] Test data is not provided.[0m
[34m#metrics {"Metrics": {"totaltime": {"count": 1, "max": 2453.641891479492, "sum": 2453.641891479492, "min": 2453.641891479492}, "setuptime": {"count": 1, "max": 1556.6868782043457, "sum": 1556.6868782043457, "min": 1556.6868782043457}}, "EndTime": 1590212601.188449, "Dimensions": {"Host": "algo-1", "Operation": "training", "Algorithm": "PCA"}, "StartTime": 1590212601.185625}
[0m
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Accessing the PCA Model AttributesAfter the model is trained, we can access the underlying model parameters. Unzip the Model DetailsNow that the training job is complete, you can find the job under **Jobs** in the **Training** subsection in the Amazon SageMaker console. You can find the job name listed in the training jobs. Use that job name in the following code to specify which model to examine.Model artifacts are stored in S3 as a TAR file; a compressed file in the output path we specified + 'output/model.tar.gz'. The artifacts stored here can be used to deploy a trained model. | # Get the name of the training job, it's suggested that you copy-paste
# from the notebook or from a specific job in the AWS console
training_job_name='pca-2020-05-22-09-14-18-586'
# where the model is saved, by default
model_key = os.path.join(prefix, training_job_name, 'output/model.tar.gz')
print(model_key)
# download and unzip model
boto3.resource('s3').Bucket(bucket_name).download_file(model_key, 'model.tar.gz')
# extracting the model parameters file, model_algo-1
os.system('tar -zxvf model.tar.gz')
os.system('unzip model_algo-1') | counties/pca-2020-05-22-09-14-18-586/output/model.tar.gz
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
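You can quickly confirm that the artifact was extracted before trying to load it (optional).

```python
# optional: confirm the extracted parameters file exists in the working directory
import os
print(os.path.exists('model_algo-1'))   # expected: True
```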
MXNet ArrayMany of the Amazon SageMaker algorithms use MXNet for computational speed, including PCA, and so the model artifacts are stored as an array. After the model is unzipped and decompressed, we can load the array using MXNet.You can take a look at the MXNet [documentation, here](https://aws.amazon.com/mxnet/). | import mxnet as mx
# loading the unzipped artifacts
pca_model_params = mx.ndarray.load('model_algo-1')
# what are the params
print(pca_model_params) | {'s':
[1.7896362e-02 3.0864021e-02 3.2130770e-02 3.5486195e-02 9.4831578e-02
1.2699370e-01 4.0288666e-01 1.4084760e+00 1.5100485e+00 1.5957943e+00
1.7783760e+00 2.1662524e+00 2.2966361e+00 2.3856051e+00 2.6954880e+00
2.8067985e+00 3.0175958e+00 3.3952675e+00 3.5731301e+00 3.6966958e+00
4.1890211e+00 4.3457499e+00 4.5410376e+00 5.0189657e+00 5.5786467e+00
5.9809699e+00 6.3925138e+00 7.6952214e+00 7.9913125e+00 1.0180052e+01
1.1718245e+01 1.3035975e+01 1.9592180e+01]
<NDArray 33 @cpu(0)>, 'v':
[[ 2.46869749e-03 2.56468095e-02 2.50773830e-03 ... -7.63925165e-02
1.59879066e-02 5.04589686e-03]
[-2.80601848e-02 -6.86634064e-01 -1.96283013e-02 ... -7.59587288e-02
1.57304872e-02 4.95312130e-03]
[ 3.25766727e-02 7.17300594e-01 2.40726061e-02 ... -7.68136829e-02
1.62378680e-02 5.13597298e-03]
...
[ 1.12151138e-01 -1.17030945e-02 -2.88011521e-01 ... 1.39890045e-01
-3.09406728e-01 -6.34506866e-02]
[ 2.99992133e-02 -3.13433539e-03 -7.63589665e-02 ... 4.17341813e-02
-7.06735924e-02 -1.42857227e-02]
[ 7.33537527e-05 3.01008171e-04 -8.00925500e-06 ... 6.97060227e-02
1.20169498e-01 2.33626723e-01]]
<NDArray 34x33 @cpu(0)>, 'mean':
[[0.00988273 0.00986636 0.00989863 0.11017046 0.7560245 0.10094159
0.0186819 0.02940491 0.0064698 0.01154038 0.31539047 0.1222766
0.3030056 0.08220861 0.256217 0.2964254 0.28914267 0.40191284
0.57868284 0.2854676 0.28294644 0.82774544 0.34378946 0.01576072
0.04649627 0.04115358 0.12442778 0.47014 0.00980645 0.7608103
0.19442631 0.21674445 0.0294168 0.22177474]]
<NDArray 1x34 @cpu(0)>}
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
PCA Model AttributesThree types of model attributes are contained within the PCA model.* **mean**: The mean that was subtracted from a component in order to center it.* **v**: The makeup of the principal components; (same as ‘components_’ in an sklearn PCA model).* **s**: The singular values of the components for the PCA transformation. This does not exactly give the % variance from the original feature space, but can give the % variance from the projected feature space. We are only interested in v and s. From s, we can get an approximation of the data variance that is covered in the first `n` principal components. The approximate explained variance is given by the formula: the sum of squared s values for all top n components over the sum over squared s values for _all_ components:\begin{equation*}\frac{\sum_{n}^{ } s_n^2}{\sum s^2}\end{equation*}From v, we can learn more about the combinations of original features that make up each principal component. | # get selected params
s=pd.DataFrame(pca_model_params['s'].asnumpy())
v=pd.DataFrame(pca_model_params['v'].asnumpy()) | _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Data VarianceOur current PCA model creates 33 principal components, but when we create new dimensionality-reduced training data, we'll only select a few, top n components to use. To decide how many top components to include, it's helpful to look at how much **data variance** the components capture. For our original, high-dimensional data, 34 features captured 100% of our data variance. If we discard some of these higher dimensions, we will lower the amount of variance we can capture. Tradeoff: dimensionality vs. data varianceAs an illustrative example, say we have original data in three dimensions. So, three dimensions capture 100% of our data variance; these dimensions cover the entire spread of our data. The below images are taken from the PhD thesis, [“Approaches to analyse and interpret biological profile data”](https://publishup.uni-potsdam.de/opus4-ubp/frontdoor/index/index/docId/696) by Matthias Scholz, (2006, University of Potsdam, Germany).Now, you may also note that most of this data seems related; it falls close to a 2D plane, and just by looking at the spread of the data, we can visualize that the original, three dimensions have some correlation. So, we can instead choose to create two new dimensions, made up of linear combinations of the original, three dimensions. These dimensions are represented by the two axes/lines, centered in the data. If we project this in a new, 2D space, we can see that we still capture most of the original data variance using *just* two dimensions. There is a tradeoff between the amount of variance we can capture and the number of component-dimensions we use to represent our data.When we select the top n components to use in a new data model, we'll typically want to include enough components to capture about 80-90% of the original data variance. In this project, we are looking at generalizing over a lot of data and we'll aim for about 80% coverage. **Note**: The _top_ principal components, with the largest s values, are actually at the end of the s DataFrame. Let's print out the s values for the top n, principal components. | # looking at top 5 components
n_principal_components = 5
start_idx = N_COMPONENTS - n_principal_components # 33-n
# print a selection of s
print(s.iloc[start_idx:, :]) | 0
28 7.991313
29 10.180052
30 11.718245
31 13.035975
32 19.592180
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
EXERCISE: Calculate the explained varianceIn creating new training data, you'll want to choose the top n principal components that account for at least 80% data variance. Complete a function, `explained_variance` that takes in the entire array `s` and a number of top principal components to consider. Then return the approximate, explained variance for those top n components. For example, to calculate the explained variance for the top 5 components, calculate s squared for *each* of the top 5 components, add those up and normalize by the sum of *all* squared s values, according to this formula:\begin{equation*}\frac{\sum_{5}^{ } s_n^2}{\sum s^2}\end{equation*}> Using this function, you should be able to answer the **question**: What is the smallest number of principal components that captures at least 80% of the total variance in the dataset? | # Calculate the explained variance for the top n principal components
# you may assume you have access to the global var N_COMPONENTS
def explained_variance(s, n_top_components):
'''Calculates the approx. data variance that n_top_components captures.
:param s: A dataframe of singular values for top components;
the top value is in the last row.
:param n_top_components: An integer, the number of top components to use.
:return: The expected data variance covered by the n_top_components.'''
num = (s.iloc[-n_top_components:, :].values ** 2).sum()
denom = (s.values ** 2).sum()
exp_var = num/denom
return exp_var
| _____no_output_____ | MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
Test CellTest out your own code by seeing how it responds to different inputs; does it return a reasonable value for the single, top component? What about for the top 5 components? | # test cell
n_top_components = 7 # select a value for the number of top components
# calculate the explained variance
exp_variance = explained_variance(s, n_top_components)
print('Explained variance: ', exp_variance) | Explained variance: 0.80167246
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
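To answer the exercise question directly, you can scan increasing values of n and stop at the first one that clears the 80% threshold. The sketch below reuses the `explained_variance` function defined above and, consistent with the output above, should report 7.

```python
# find the smallest number of top components that captures at least 80% of the variance
for n in range(1, N_COMPONENTS + 1):
    if explained_variance(s, n) >= 0.80:
        print('Smallest n with >= 80% explained variance:', n)   # expected: 7
        break
```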
As an example, you should see that the top principal component accounts for about 32% of our data variance! Next, you may be wondering what makes up this (and other components); what linear combination of features make these components so influential in describing the spread of our data?Below, let's take a look at our original features and use that as a reference. | # features
features_list = counties_scaled.columns.values
print('Features: \n', features_list) | Features:
['TotalPop' 'Men' 'Women' 'Hispanic' 'White' 'Black' 'Native' 'Asian'
'Pacific' 'Citizen' 'Income' 'IncomeErr' 'IncomePerCap' 'IncomePerCapErr'
'Poverty' 'ChildPoverty' 'Professional' 'Service' 'Office' 'Construction'
'Production' 'Drive' 'Carpool' 'Transit' 'Walk' 'OtherTransp'
'WorkAtHome' 'MeanCommute' 'Employed' 'PrivateWork' 'PublicWork'
'SelfEmployed' 'FamilyWork' 'Unemployment']
| MIT | Population_Segmentation/Pop_Segmentation_Exercise.ipynb | fradeleo/Sagemaker_Case_Studies |
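To see which original features carry the most weight in the top principal component, you can pair a column of `v` with the feature names. The sketch below assumes that, because `s` lists singular values in ascending order, the top component corresponds to the last column of `v`.

```python
# sketch: feature weights of the top principal component, largest magnitudes first
top_weights = pd.DataFrame({'feature': features_list,
                            'weight': v.iloc[:, -1].values})
order = top_weights['weight'].abs().sort_values(ascending=False).index
print(top_weights.reindex(order).head(10))
```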