Dataset columns: markdown (string, 0–1.02M chars) · code (string, 0–832k chars) · output (string, 0–1.02M chars) · license (string, 3–36 chars) · path (string, 6–265 chars) · repo_name (string, 6–127 chars)
9.0 Conclusions
9.1 Final Model
# model performance with unseen data
xgb_final_model = XGBClassifier(objective='binary:logistic', n_estimators=1000, eta=0.03,
                                subsample=0.7, min_child_weight=3, max_depth=30,
                                colsample_bytree=0.7, scale_pos_weight=80, verbosity=0)
xgb_final_model.fit(X_train, y_train)

pred_final = xgb_final_model.predict(X_test)
pred_final_proba = xgb_final_model.predict_proba(X_test)

xgb_final_model_result = ml_metrics('XGBoost', y_test, pred_final)
xgb_final_model_result
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
9.2 Business Questions
df9 = df2.copy()
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
9.2.1 What is TopBank's current churn rate?
churn_rate = df9['exited'].value_counts(normalize=True).reset_index() churn_rate['exited'] = churn_rate['exited']*100 churn_rate.columns = ['churn', 'exited (%)'] churn_rate sns.countplot(df9['exited']).set_title('Churn Rate')
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
**TopBank's current churn rate is 20.4%**
9.2.2 What is the expected return, in terms of revenue, if the company uses the model to prevent customer churn?
- A sample of 1000 clients (10% of the dataset) was used to calculate the financial return.
- The values from the model's final predictions were used for comparison with the real data.
aux = pd.concat([X_test9, y_test9], axis=1) mean_salary = df9['estimated_salary'].mean() aux['pred_exited'] = pred_final aux['client_return'] = aux['estimated_salary'].apply(lambda x: x*0.15 if x < mean_salary else x*0.20)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
- Calculation of the total return for all clients in the sample who churned
total_return = aux[aux['exited'] == 1]['client_return'].sum()
print('The total return from all clients who churned is ${}'.format(total_return))
The total return from all clients who churned is $3658649.9845000003
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
- Selecting the clients the model correctly predicted would churn.
- If it were possible to prevent all of these clients from churning, approximately 70% of the total value calculated above could be recovered.
churn_return = aux[(aux['pred_exited'] == 1) & (aux['exited'] == 1)]['client_return'].sum()
print('The total return from the clients the model predicted would churn is ${}'.format(churn_return))
The total return from the clients the model predicted would churn is $2540855.073
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
9.2.3 Financial Incentive
A possible action to prevent a client from churning is to offer a discount coupon, or some other financial incentive, for the client to renew the contract for another 12 months.
- To which clients would you give the financial incentive, and what would its value be, so as to maximize the ROI (return on investment)? Remember that the sum of the incentives cannot exceed $10,000.00.
Still considering the sample of 1000 clients, it was possible to analyze each client's churn probability according to the algorithm and decide how the financial incentive would be offered. After some analysis, the following strategies were defined (only clients the algorithm predicted as "positive" for churn were considered):
- A cut-off point (threshold) of 0.95 was defined; that is, each client's churn probability was compared against this threshold and, based on that, "groups" that would receive the incentive were defined.
- Clients with a probability above 95% would not receive the incentive, since they were considered so likely to churn that it would be very hard to convince them to renew the contract even with a financial incentive.
- Clients with a probability above 90% and below 95% would receive an incentive of 250.
- Clients with a probability between 70% and 90% would receive an incentive of 200.
- Clients with a probability below 70% would receive an incentive of 100.
threshold = 0.95

proba_list = []
for i in range(len(pred_final_proba)):
    proba = pred_final_proba[i][1]
    proba_list.append(proba)

aux['pred_exited_proba'] = proba_list

aux2 = aux[(aux['exited'] == 1) & (aux['pred_exited'] == 1)]
aux2 = aux2[aux2['pred_exited_proba'] > threshold]
aux2.sample(10)

# defining the incentive according to the churn probability
aux2['destinated_budget'] = aux2['pred_exited_proba'].apply(
    lambda x: 250 if x > 0.9 else 200 if ((x < 0.9) & (x > 0.7)) else 100
)
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
- Assuming it were possible to prevent every client who received the incentive from churning, so that they consequently renewed their contracts, a financial return of $938,235.39 could be obtained.
total_return = aux2['client_return'].sum()
print('The total financial return from the clients who received the incentive was $ {}'.format(total_return))
The total financial return from the clients who received the incentive was $ 1602619.6835
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
10.0 Deploy
# saving models
final_model = XGBClassifier(objective='binary:logistic', n_estimators=1000, eta=0.03,
                            subsample=0.7, min_child_weight=3, max_depth=30,
                            colsample_bytree=0.7, scale_pos_weight=80, verbosity=0)
final_model.fit(X_train, y_train)
joblib.dump(final_model, 'Model/final_model_XGB.joblib')

mm = MinMaxScaler()
le = LabelEncoder()
joblib.dump(mm, 'Parameters/scaler_mm.joblib')
joblib.dump(le, 'Parameters/label_encoder.joblib')
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
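One caveat in the model-saving cell above: the `MinMaxScaler` and `LabelEncoder` are instantiated and saved without being fitted, so the deployment class defined in the next section cannot simply call `transform` on them. Below is a minimal sketch of how they could be fitted before being dumped; the column names come from the data-preparation step, while the training frame `df_train` is a hypothetical name, not from the original notebook.

```python
# Hedged sketch: fit the preprocessing objects on training data before saving them.
# `df_train` is a hypothetical DataFrame holding the same cleaned columns used later.
from sklearn.preprocessing import MinMaxScaler, LabelEncoder
import joblib

mm_columns = ['credit_score', 'age', 'balance', 'estimated_salary', 'tenure', 'num_of_products']

mm = MinMaxScaler()
mm.fit(df_train[mm_columns])      # learn the min/max of each numeric column

le = LabelEncoder()
le.fit(df_train['geography'])     # learn the geography categories

joblib.dump(mm, 'Parameters/scaler_mm.joblib')
joblib.dump(le, 'Parameters/label_encoder.joblib')
```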
10.1 Churn Class
import joblib
import pandas as pd
import inflection


class Churn(object):
    def __init__(self):
        self.scaler = joblib.load('Parameters/scaler_mm.joblib')
        self.encoder_le = joblib.load('Parameters/label_encoder.joblib')

    def data_cleaning(self, df1):
        # rename columns
        cols_old = ['RowNumber', 'CustomerId', 'Surname', 'CreditScore', 'Geography', 'Gender',
                    'Age', 'Tenure', 'Balance', 'NumOfProducts', 'HasCrCard', 'IsActiveMember',
                    'EstimatedSalary', 'Exited']
        snakecase = lambda x: inflection.underscore(x)
        cols_new = list(map(snakecase, cols_old))
        df1.columns = cols_new
        return df1

    def feature_engineering(self, df2):
        cols_drop = ['row_number', 'customer_id', 'surname']
        df2 = df2.drop(cols_drop, axis=1)
        return df2

    def data_preparation(self, df3):
        # rescaling: apply the previously fitted scaler/encoder (transform, not fit_transform)
        mm_columns = ['credit_score', 'age', 'balance', 'estimated_salary', 'tenure', 'num_of_products']
        df3[mm_columns] = self.scaler.transform(df3[mm_columns])

        df3['geography'] = self.encoder_le.transform(df3['geography'])

        gender = {'Female': 0, 'Male': 1}
        df3['gender'] = df3['gender'].map(gender)

        return df3

    def get_prediction(self, model, original_data, test_data):
        pred = model.predict(test_data)
        original_data['prediction'] = pred
        return original_data.to_json(orient='records', date_format='iso')
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
10.2 API Handler
import joblib
import pandas as pd
from churn.Churn import Churn
from flask import Flask, request, Response

model = joblib.load('Model/final_model_XGB.joblib')

# initialize API
app = Flask(__name__)

@app.route('/churn/predict', methods=['POST'])
def churn_predict():
    test_json = request.get_json()

    if test_json:  # there is data
        if isinstance(test_json, dict):  # single example
            test_raw = pd.DataFrame(test_json, index=[0])
        else:  # multiple examples
            test_raw = pd.DataFrame(test_json, columns=test_json[0].keys())

        pipeline = Churn()

        # data cleaning
        df1 = pipeline.data_cleaning(test_raw)
        # feature engineering
        df2 = pipeline.feature_engineering(df1)
        # data preparation
        df3 = pipeline.data_preparation(df2)
        # prediction
        df_response = pipeline.get_prediction(model, test_raw, df3)

        return df_response

    else:
        return Response('{}', status=200, mimetype='application/json')

if __name__ == '__main__':
    app.run('127.0.0.1')
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
10.3 API Tester
import requests
import pandas as pd

df10 = pd.read_csv('data/churn.csv')

# convert dataframe to json
data = df10.to_json()

url = 'http://0.0.0.0:5000/churn/predict'
header = {'Content-type': 'application/json'}

r = requests.post(url=url, data=data, headers=header)
r.status_code

r.json()

d1 = pd.DataFrame(r.json(), columns=r.json()[0].keys())
d1
_____no_output_____
MIT
Churn-Prediction.ipynb
Leonardodsch/churn-prediction
https://py.checkio.org/blog/design-patterns-part-2/
https://py.checkio.org/en/mission/dialogues/share/07bc869edadfc11858e1caeaa4415987/
windows = dict.fromkeys(['main', 'settings', 'help'])
windows

text = """Karl said: Hi! What's new?R2D2 said: Hello, human. Could we speak later about it?"""
# Naive approach: replace every vowel with '0', one call at a time.
text = text.replace('a', '0').replace('e', '0').replace('i', '0').replace('o', '0').replace('u', '0') \
           .replace('A', '0').replace('E', '0').replace('I', '0').replace('O', '0').replace('U', '0')

import re

# Cleaner approach: a single regex substitution for vowels, then everything else.
s = """Karl said: Hi! What's new?R2D2 said: Hello, human. Could we speak later about it?"""
replaced = re.sub('[aeiouAEIOU]', '0', s)
replaced = re.sub('[^0]', '1', replaced)
print(replaced)

import re


class Chat:
    """Mediator: participants talk to each other only through the chat."""

    def __init__(self):
        self.human = None
        self.robot = None
        self.human_dialogue = []
        self.robot_dialogue = []

    def connect_human(self, human):
        self.human = human
        human.chat = self  # give the participant a reference back to the mediator

    def connect_robot(self, robot):
        self.robot = robot
        robot.chat = self

    def send(self, text, name):
        self.human_dialogue.append('{} said: {}'.format(name, text))
        self.robot_dialogue.append('{} said: {}'.format(name, self.convert_to_robot_lang(text)))

    def show_human_dialogue(self):
        return '\n'.join(self.human_dialogue)

    def show_robot_dialogue(self):
        return '\n'.join(self.robot_dialogue)

    def convert_to_robot_lang(self, text):
        text = re.sub('[aeiou]', '0', text)
        return re.sub('[^0]', '1', text)


class Human:
    def __init__(self, name):
        self.name = name
        self.chat = None

    def send(self, text):
        self.chat.send(text, self.name)


class Robot:
    def __init__(self, serial_number):
        self.serial_number = serial_number
        self.chat = None

    def send(self, text):
        self.chat.send(text, self.serial_number)


if __name__ == '__main__':
    # These "asserts" are used only for self-checking and are not necessary for auto-testing
    chat = Chat()
    karl = Human("Karl")
    bot = Robot("R2D2")
    chat.connect_human(karl)
    chat.connect_robot(bot)
    karl.send("Hi! What's new?")
    bot.send("Hello, human. Could we speak later about it?")
    assert chat.show_human_dialogue() == """Karl said: Hi! What's new?
R2D2 said: Hello, human. Could we speak later about it?"""
    assert chat.show_robot_dialogue() == """Karl said: 101111011111011
R2D2 said: 10110111010111100111101110011101011010011011"""
    print("Coding complete? Let's try tests!")


class Parent:
    def __init__(self):
        self.parent_variable = 'Parent'


class Child(Parent):
    def __init__(self):
        super().__init__()
        self.child_variable = 'Child'

    def print_val(self):
        print(self.child_variable)
        print(self.parent_variable)


child = Child()
child.print_val()


class Dog():
    """Represent a dog."""

    def __init__(self, name):
        """Initialize dog object."""
        self.name = name

    def sit(self):
        """Simulate sitting."""
        print(self.name + ' is sitting.')


my_dog = Dog('Tommy')
print(my_dog.name + ' is a great dog!')
my_dog.sit()


class SDog(Dog):
    """Represent a search dog."""

    def __init__(self, name):
        """Initialize the search dog."""
        super().__init__(name)

    def search(self):
        """Simulate searching."""
        print(self.name + ' is searching.')


my_dog = SDog('Lucy')
print(my_dog.name + ' is a search dog.')
my_dog.sit()
my_dog.search()
Lucy is a search dog. Lucy is sitting. Lucy is searching.
MIT
Other/Behavior Design Pattern - Mediator.ipynb
deepaksood619/Python-Competitive-Programming
Approach 2: Use traditional statistical models
In this notebook we will discuss the following models on daily sampled data:
1. MA
2. Simple Exponential Smoothing
3. Holt Linear
4. Holt-Winters
These models are implemented using the statsmodels library.
**Objective: Implement the above models and calculate RMSE to compare results with Approach 1.**
1. Load the previously created daily sampled data and decompose the time series
2. Fit each model and predict the test data
3. Calculate RMSE and MAE
4. Compare results with Approach 1
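For reference, the two error metrics used throughout this notebook are the standard root-mean-square error and mean absolute error, with $y_i$ the observed test values and $\hat{y}_i$ the forecasts over $n$ test points:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}, \qquad \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left\lvert y_i - \hat{y}_i \right\rvert$$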
# Load data data = pd.read_csv("daily_data.csv",parse_dates=[0], index_col=0) data.head()
_____no_output_____
MIT
Portfolio/TS_Traditional_Methods.ipynb
anujakapre/E-commerce-Market-Analysis-
Decompose time series
A series is thought to be an aggregate or combination of these four components. All series have a level and noise; the trend and seasonality components are optional. These components combine either additively or multiplicatively.
Additive Model
An additive model suggests that the components are added together as follows: y(t) = Level + Trend + Seasonality + Noise. An additive model is linear, where changes over time are consistently made by the same amount. A linear trend is a straight line. A linear seasonality has the same frequency (width of cycles) and amplitude (height of cycles).
Multiplicative Model
A multiplicative model suggests that the components are multiplied together as follows: y(t) = Level * Trend * Seasonality * Noise. A multiplicative model is nonlinear, such as quadratic or exponential; changes increase or decrease over time. A nonlinear trend is a curved line. A nonlinear seasonality has an increasing or decreasing frequency and/or amplitude over time.
Reference: https://machinelearningmastery.com/decompose-time-series-data-trend-seasonality/
# Decompose time series into trend, seasonality and noise
rcParams['figure.figsize'] = 11, 9
result = sm.tsa.seasonal_decompose(data, model='additive')
result.plot()
plt.show()

# Print trend, seasonality, residual
print(result.trend)
print(result.seasonal)
print(result.resid)
#print(result.observed)

# Find outliers
sns.boxplot(x=data['Total Price'], orient='v')
_____no_output_____
MIT
Portfolio/TS_Traditional_Methods.ipynb
anujakapre/E-commerce-Market-Analysis-
**The Z-score denotes how many standard deviations a sample is away from the mean. Hence we remove all samples that are more than 3 standard deviations away from the mean.**
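The Z-score of each sample is computed from the series mean $\mu$ and standard deviation $\sigma$:

$$z_i = \frac{x_i - \mu}{\sigma}$$

Samples with $\lvert z_i \rvert > 3$ are treated as outliers in the code below.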
# Calculate Z-score for all samples
z = np.abs(stats.zscore(data))

# Locate outliers
outliers = data[(z > 3).all(axis=1)]
outliers

# Replace outliers by the median value
median = data[(z < 3).all(axis=1)].median()
data.loc[data['Total Price'] > 71858, 'Total Price'] = np.nan
data.fillna(median, inplace=True)
median

# Plot data again
rcParams['figure.figsize'] = 20, 5
data.plot()
_____no_output_____
MIT
Portfolio/TS_Traditional_Methods.ipynb
anujakapre/E-commerce-Market-Analysis-
**Below we can see the time series clearly: there is exponential growth in the trend at the start, but it becomes linear towards the end. The seasonality is not increasing exponentially; rather, it is constant. Hence we can say that our time series is additive.**
# Plot the data
rcParams['figure.figsize'] = 20, 10
result = sm.tsa.seasonal_decompose(data, model='additive')
result.plot()
plt.show()

# Train and test data
train = data[0:-100]
test = data[-100:]
y_hat = test.copy()
_____no_output_____
MIT
Portfolio/TS_Traditional_Methods.ipynb
anujakapre/E-commerce-Market-Analysis-
1. Moving Average: In this method, we use the mean of the previous data. Using the prices of the initial period would heavily affect the forecast for the next period. Therefore, we take the average of the prices of only the last few recent time periods. Such a forecasting technique, which uses a window of time periods for calculating the average, is called the Moving Average technique. Calculation of the moving average involves what is sometimes called a "sliding window" of size n.
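With a window of size $n$, the moving-average forecast is simply the mean of the last $n$ observations:

$$\hat{y}_{t+1} = \frac{1}{n}\sum_{i=0}^{n-1} y_{t-i}$$

The code below uses $n = 50$.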
# Calculate MA: use the last 50 data points
rcParams['figure.figsize'] = 17, 5
y_hat['moving_avg_forecast'] = train['Total Price'].rolling(50).mean().iloc[-1]

plt.plot(train['Total Price'], label='Train')
plt.plot(test['Total Price'], label='Test')
plt.plot(y_hat['moving_avg_forecast'], label='Moving Average Forecast')
plt.legend(loc='best')
plt.show()

# Calculate RMSE
rmse = sqrt(mean_squared_error(test['Total Price'], y_hat['moving_avg_forecast']))
print(rmse)

# Calculate MAE
mae = mean_absolute_error(test['Total Price'], y_hat['moving_avg_forecast'])
print(mae)
11707.529420000003
MIT
Portfolio/TS_Traditional_Methods.ipynb
anujakapre/E-commerce-Market-Analysis-
Method 2: Simple Exponential Smoothing
This method takes into account all the data while weighting the data points differently. For example, it may be sensible to attach larger weights to more recent observations than to observations from the distant past. The technique which works on this principle is called simple exponential smoothing. Forecasts are calculated using weighted averages, where the weights decrease exponentially as observations come from further in the past; the smallest weights are associated with the oldest observations:
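In its standard form, the simple exponential smoothing forecast is a geometrically decaying weighted average of past observations:

$$\hat{y}_{T+1\mid T} = \alpha y_T + \alpha(1-\alpha)\, y_{T-1} + \alpha(1-\alpha)^2\, y_{T-2} + \dots$$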
# Fit the model
fit1 = SimpleExpSmoothing(train).fit()
y_hat['SES'] = fit1.forecast(len(test)).rename(r'$\alpha=%s$' % fit1.model.params['smoothing_level'])
alpha = fit1.model.params['smoothing_level']
C:\Users\Snigdha\AppData\Local\conda\conda\envs\neuralnets\lib\site-packages\statsmodels\tsa\base\tsa_model.py:171: ValueWarning: No frequency information was provided, so inferred frequency D will be used. % freq, ValueWarning)
MIT
Portfolio/TS_Traditional_Methods.ipynb
anujakapre/E-commerce-Market-Analysis-
Here 0 ≤ α ≤ 1 is the smoothing parameter. The one-step-ahead forecast for time T+1 is a weighted average of all the observations in the series y1,…,yT. The rate at which the weights decrease is controlled by the parameter α.
alpha

# Plot the data
rcParams['figure.figsize'] = 17, 5
plt.plot(train['Total Price'], label='Train')
plt.plot(test['Total Price'], label='Test')
plt.plot(y_hat['SES'], label='SES')
plt.legend(loc='best')
plt.show()

# Calculate RMSE
rmse = sqrt(mean_squared_error(test['Total Price'], y_hat.SES))
print(rmse)

# Calculate MAE
mae = mean_absolute_error(test['Total Price'], y_hat.SES)
print(mae)
14885.45052998217
MIT
Portfolio/TS_Traditional_Methods.ipynb
anujakapre/E-commerce-Market-Analysis-
Method 3 – Holt's Linear Trend method
If we use any of the above methods, the trend will not be taken into account. The trend is the general pattern of prices that we observe over a period of time, and in this case we can see that there is an increasing trend. Hence we use Holt's Linear Trend method, which can map the trend accurately without any assumptions.
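Holt's method extends simple exponential smoothing with an explicit trend term. The standard additive-trend form uses a level equation, a trend equation, and a forecast equation:

$$\ell_t = \alpha y_t + (1-\alpha)(\ell_{t-1} + b_{t-1})$$
$$b_t = \beta^* (\ell_t - \ell_{t-1}) + (1-\beta^*)\, b_{t-1}$$
$$\hat{y}_{t+h\mid t} = \ell_t + h\, b_t$$

Here $\alpha$ is the smoothing level and $\beta^*$ the smoothing slope printed by the code below.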
# Holt linear model
fit2 = Holt(np.asarray(train['Total Price'])).fit()
y_hat['Holt_linear'] = fit2.forecast(len(test))
print("Smoothing level", fit2.model.params['smoothing_level'])
print("Smoothing slope", fit2.model.params['smoothing_slope'])

# Plot the result
rcParams['figure.figsize'] = 17, 5
plt.plot(train['Total Price'], label='Train')
plt.plot(test['Total Price'], label='Test')
plt.plot(y_hat['Holt_linear'], label='Holt_linear')
plt.legend(loc='best')
plt.show()

# Calculate RMSE
rmse = sqrt(mean_squared_error(test['Total Price'], y_hat.Holt_linear))
print(rmse)

# Calculate MAE
mae = mean_absolute_error(test['Total Price'], y_hat.Holt_linear)
print(mae)
14058.411166558843
MIT
Portfolio/TS_Traditional_Methods.ipynb
anujakapre/E-commerce-Market-Analysis-
If we observe closely, there are spikes in sales in the middle of the month.
data.tail(100).plot()
_____no_output_____
MIT
Portfolio/TS_Traditional_Methods.ipynb
anujakapre/E-commerce-Market-Analysis-
Method 4: Holt-Winters method
Datasets which show a similar pattern after fixed intervals of time exhibit seasonality. Hence we need a method that takes into account both trend and seasonality to forecast future prices. One such algorithm that we can use in this scenario is the Holt-Winters method. The idea behind triple exponential smoothing (Holt-Winters) is to apply exponential smoothing to the seasonal component in addition to level and trend.
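For the additive seasonal variant used below (season length $m$, here 30), triple exponential smoothing adds a seasonal equation to Holt's level and trend equations:

$$\ell_t = \alpha (y_t - s_{t-m}) + (1-\alpha)(\ell_{t-1} + b_{t-1})$$
$$b_t = \beta^* (\ell_t - \ell_{t-1}) + (1-\beta^*)\, b_{t-1}$$
$$s_t = \gamma (y_t - \ell_{t-1} - b_{t-1}) + (1-\gamma)\, s_{t-m}$$
$$\hat{y}_{t+h\mid t} = \ell_t + h\, b_t + s_{t+h-m(k+1)}$$

where $k$ is the integer part of $(h-1)/m$, so the forecast reuses the most recently estimated seasonal values.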
# Fit the model
fit3 = ExponentialSmoothing(np.asarray(train['Total Price']), seasonal_periods=30,
                            trend='add', seasonal='add').fit()
y_hat['Holt_Winter'] = fit3.forecast(len(test))

# Plot the data
rcParams['figure.figsize'] = 17, 5
plt.plot(train['Total Price'], label='Train')
plt.plot(test['Total Price'], label='Test')
plt.plot(y_hat['Holt_Winter'], label='Holt_Winter')
plt.legend(loc='best')
plt.show()

# Calculate RMSE
rmse = sqrt(mean_squared_error(test['Total Price'], y_hat.Holt_Winter))
print(rmse)

# Calculate MAE
mae = mean_absolute_error(test['Total Price'], y_hat.Holt_Winter)
print(mae)
14214.30484223988
MIT
Portfolio/TS_Traditional_Methods.ipynb
anujakapre/E-commerce-Market-Analysis-
Load Data
As usual, let's start by loading some network data. This time round, we have a [physician trust](http://konect.uni-koblenz.de/networks/moreno_innovation) network, but slightly modified such that it is undirected rather than directed.
> This directed network captures innovation spread among 246 physicians in four towns in Illinois: Peoria, Bloomington, Quincy and Galesburg. The data was collected in 1966. A node represents a physician and an edge between two physicians shows that the left physician told that the right physician is his friend or that he turns to the right physician if he needs advice or is interested in a discussion. There always only exists one edge between two nodes even if more than one of the listed conditions are true.
# Load the network. This network, while in reality is a directed graph, # is intentionally converted to an undirected one for simplification. G = cf.load_physicians_network() # Make a Circos plot of the graph from nxviz import CircosPlot c = CircosPlot(G) c.draw()
_____no_output_____
MIT
archive/4-cliques-triangles-structures-instructor.ipynb
ChrisKeefe/Network-Analysis-Made-Simple
Question
What can you infer about the structure of the graph from the Circos plot?
My answer: The structure is interesting. It looks like the physician trust network is composed of discrete subnetworks.
Structures in a Graph
We can leverage what we have learned in the previous notebook to identify special structures in a graph. In a network, cliques are one of these special structures.
Cliques
In a social network, cliques are groups of people in which everybody knows everybody.
**Questions:**
1. What is the simplest clique?
1. What is the simplest complex clique?
Let's try implementing a simple algorithm that finds out whether a node is present in a simple complex clique.
# Example code. def in_triangle(G, node): """ Returns whether a given node is present in a triangle relationship or not. """ # Then, iterate over every pair of the node's neighbors. for nbr1, nbr2 in combinations(G.neighbors(node), 2): # Check to see if there is an edge between the node's neighbors. # If there is an edge, then the given node is present in a triangle. if G.has_edge(nbr1, nbr2): # We return because any triangle that is present automatically # satisfies the problem requirements. return True return False in_triangle(G, 3)
_____no_output_____
MIT
archive/4-cliques-triangles-structures-instructor.ipynb
ChrisKeefe/Network-Analysis-Made-Simple
In reality, NetworkX already has a function that *counts* the number of triangles that any given node is involved in. This is probably more useful than knowing whether a node is present in a triangle or not, but the above code was simply for practice.
nx.triangles(G, 3)
_____no_output_____
MIT
archive/4-cliques-triangles-structures-instructor.ipynb
ChrisKeefe/Network-Analysis-Made-Simple
Exercise
Can you write a function that takes in one node and its associated graph as an input, and returns a list or set of itself + all other nodes that it is in a triangle relationship with? Do not return the triplets, but the `set`/`list` of nodes. (5 min.)
**Possible Implementation:** If I check every pair of my neighbors, any pair that are also connected in the graph are in a triangle relationship with me.
Hint: Python's [`itertools`](https://docs.python.org/3/library/itertools.html) module has a `combinations` function that may be useful.
Hint: NetworkX graphs have a `.has_edge(node1, node2)` function that checks whether an edge exists between two nodes.
Verify your answer by drawing out the subgraph composed of those nodes.
# Possible answer
def get_triangles(G, node):
    neighbors1 = set(G.neighbors(node))
    triangle_nodes = set()
    triangle_nodes.add(node)
    """
    Fill in the rest of the code below.
    """
    for nbr1, nbr2 in combinations(neighbors1, 2):
        if G.has_edge(nbr1, nbr2):
            triangle_nodes.add(nbr1)
            triangle_nodes.add(nbr2)
    return triangle_nodes

# Verify your answer with the following function call. Should return something of the form:
# {3, 9, 11, 41, 42, 67}
get_triangles(G, 3)

# Then, draw out those nodes.
nx.draw(G.subgraph(get_triangles(G, 3)), with_labels=True)

# Compare for yourself that those are the only triangles that node 3 is involved in.
neighbors3 = list(G.neighbors(3))
neighbors3.append(3)
nx.draw(G.subgraph(neighbors3), with_labels=True)
_____no_output_____
MIT
archive/4-cliques-triangles-structures-instructor.ipynb
ChrisKeefe/Network-Analysis-Made-Simple
Friend Recommendation: Open Triangles
Now that we have some code that identifies closed triangles, we might want to see if we can do some friend recommendations by looking for open triangles.
Open triangles are like those that we described earlier on - A knows B and B knows C, but C's relationship with A isn't captured in the graph.
What are the two general scenarios for finding open triangles that a given node is involved in?
1. The given node is the centre node.
1. The given node is one of the termini nodes.
Exercise
Can you write a function that identifies, for a given node, the other two nodes that it is involved with in an open triangle, if there is one? (5 min.)
Note: For this exercise, only consider the case when the node of interest is the centre node; a sketch for the terminus case is shown after the solution below.
**Possible Implementation:** Check every pair of my neighbors, and if they are not connected to one another, then we are in an open triangle relationship.
def get_open_triangles(G, node):
    """
    There are many ways to represent this. One may choose to represent only the nodes
    involved in an open triangle; this is not the approach taken here. Rather, we have
    code that explicitly enumerates every open triangle present.
    """
    open_triangle_nodes = []
    neighbors = list(G.neighbors(node))

    for n1, n2 in combinations(neighbors, 2):
        if not G.has_edge(n1, n2):
            open_triangle_nodes.append([n1, node, n2])

    return open_triangle_nodes

# # Uncomment the following code if you want to draw out each of the triplets.
# nodes = get_open_triangles(G, 2)
# for i, triplet in enumerate(nodes):
#     fig = plt.figure(i)
#     nx.draw(G.subgraph(triplet), with_labels=True)

print(get_open_triangles(G, 3))
len(get_open_triangles(G, 3))
_____no_output_____
MIT
archive/4-cliques-triangles-structures-instructor.ipynb
ChrisKeefe/Network-Analysis-Made-Simple
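The solution above covers only the first scenario (the node of interest as the centre of the open triangle). Below is a minimal sketch for the second scenario, where the given node is one of the termini; the function name `get_open_triangles_as_terminus` is our own and is not part of the original notebook.

```python
def get_open_triangles_as_terminus(G, node):
    """
    Hedged sketch (not from the original notebook): enumerate open triangles
    in which `node` is a terminus, i.e. node -- centre -- other, where there
    is no edge between `node` and `other`.
    """
    open_triangles = []
    for centre in G.neighbors(node):
        for other in G.neighbors(centre):
            # Skip the node itself and anyone it is already connected to.
            if other == node or G.has_edge(node, other):
                continue
            open_triangles.append([node, centre, other])
    return open_triangles

# Example usage: the `other` nodes are natural friend-recommendation candidates for node 3.
# get_open_triangles_as_terminus(G, 3)
```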
Triangle closure is also the core idea behind social networks' friend recommendation systems; of course, it's definitely more complicated than what we've implemented here.
Cliques
We have figured out how to find triangles. Now, let's find out what **cliques** are present in the network. Recall: what is the definition of a clique?
- NetworkX has a [clique-finding](https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.algorithms.clique.find_cliques.html) algorithm implemented.
- This algorithm finds all maximally-sized cliques for a given node.
- Note that maximal cliques of size `n` include all cliques of `size < n`
list(nx.find_cliques(G))[0:20]
_____no_output_____
MIT
archive/4-cliques-triangles-structures-instructor.ipynb
ChrisKeefe/Network-Analysis-Made-Simple
Exercise
Try writing a function `maximal_cliques_of_size(size, G)` that implements a search for all maximal cliques of a given size. (3 min.)
def maximal_cliques_of_size(size, G):
    # Defensive programming check.
    assert isinstance(size, int), "size has to be an integer"
    assert size >= 2, "cliques are of size 2 or greater."

    return [i for i in list(nx.find_cliques(G)) if len(i) == size]

maximal_cliques_of_size(2, G)[0:20]
_____no_output_____
MIT
archive/4-cliques-triangles-structures-instructor.ipynb
ChrisKeefe/Network-Analysis-Made-Simple
Connected Components
From [Wikipedia](https://en.wikipedia.org/wiki/Connected_component_%28graph_theory%29):
> In graph theory, a connected component (or just component) of an undirected graph is a subgraph in which any two vertices are connected to each other by paths, and which is connected to no additional vertices in the supergraph.
NetworkX also implements a [function](https://networkx.github.io/documentation/networkx-1.9.1/reference/generated/networkx.algorithms.components.connected.connected_component_subgraphs.html) that identifies connected component subgraphs.
Remember how, based on the Circos plot above, we had this hypothesis that the physician trust network may be divided into subgraphs. Let's check that, and see if we can redraw the Circos visualization.
ccsubgraph_nodes = list(nx.connected_components(G)) ccsubgraph_nodes
_____no_output_____
MIT
archive/4-cliques-triangles-structures-instructor.ipynb
ChrisKeefe/Network-Analysis-Made-Simple
Exercise
Draw a circos plot of the graph, but now colour and order the nodes by their connected component subgraph. (5 min.)
Recall the Circos API:
```python
c = CircosPlot(G, node_order='...', node_color='...')
c.draw()
plt.show()  # or plt.savefig(...)
```
# Start by labelling each node in the master graph G by some number # that represents the subgraph that contains the node. for i, nodeset in enumerate(ccsubgraph_nodes): for n in nodeset: G.nodes[n]['subgraph'] = i c = CircosPlot(G, node_color='subgraph', node_order='subgraph') c.draw() plt.savefig('images/physicians.png', dpi=300)
_____no_output_____
MIT
archive/4-cliques-triangles-structures-instructor.ipynb
ChrisKeefe/Network-Analysis-Made-Simple
Atmospheric, oceanic and land data handling
In this notebook we discuss the subtleties of how NetCDF-SCM handles different data 'realms' and why these choices are made. The realms of interest to date are atmosphere, ocean and land, and the distinction between the realms follows the [CMIP6 realm controlled vocabulary](https://github.com/WCRP-CMIP/CMIP6_CVs/blob/master/CMIP6_realm.json).
import traceback from os.path import join import iris import iris.quickplot as qplt import matplotlib.pyplot as plt import numpy as np from netcdf_scm.iris_cube_wrappers import CMIP6OutputCube from netcdf_scm.utils import broadcast_onto_lat_lon_grid from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() plt.style.use("bmh") import logging root_logger = logging.getLogger() root_logger.setLevel(logging.WARNING) root_logger.addHandler(logging.StreamHandler()) DATA_PATH_TEST = join("..", "tests", "test-data")
_____no_output_____
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
Note that all of our data is on a regular grid; data on native model grids does not plot as nicely.
tas_file = join( DATA_PATH_TEST, "cmip6output", "CMIP6", "CMIP", "IPSL", "IPSL-CM6A-LR", "historical", "r1i1p1f1", "Amon", "tas", "gr", "v20180803", "tas_Amon_IPSL-CM6A-LR_historical_r1i1p1f1_gr_191001-191003.nc" ) gpp_file = tas_file.replace( "Amon", "Lmon" ).replace( "tas", "gpp" ) csoilfast_file = gpp_file.replace("gpp", "cSoilFast") hfds_file = join( DATA_PATH_TEST, "cmip6output", "CMIP6", "CMIP", "NOAA-GFDL", "GFDL-CM4", "piControl", "r1i1p1f1", "Omon", "hfds", "gr", "v20180701", "hfds_Omon_GFDL-CM4_piControl_r1i1p1f1_gr_015101-015103.nc" )
_____no_output_____
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
Oceans
We start by loading our data.
hfds = CMIP6OutputCube() hfds.load_data_from_path(hfds_file)
Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
NetCDF-SCM infers whether the data is "ocean", "land" or "atmosphere" data. The inferred realm can be checked by examining an `SCMCube`'s `netcdf_scm_realm` property. In our case we have "ocean" data.
hfds.netcdf_scm_realm
_____no_output_____
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
If we have ocean data, then there is no data which will go in a "land" box. Hence, if we request e.g. `World|Land` data, an error will be raised.
try: hfds.get_scm_timeseries(regions=["World", "World|Land"]) except ValueError as e: traceback.print_exc(limit=0, chain=False)
Traceback (most recent call last): ValueError: All weights are zero for region: `World|Land`
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
As there is no land data, the `World` mean is equal to the `World|Ocean` mean.
hfds_scm_ts = hfds.get_scm_timeseries( regions=["World", "World|Ocean"] ) hfds_scm_ts.line_plot(linestyle="region") np.testing.assert_allclose( hfds_scm_ts.filter(region="World").values, hfds_scm_ts.filter(region="World|Ocean").values, );
Not calculating land fractions as all required cubes are not available Performing lazy conversion to datetime for calendar: 365_day. This may cause subtle errors in operations that depend on the length of time between dates Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
When taking averages, there are 3 obvious options:- unweighted average- area weighted average- area and surface fraction weighted averageIn NetCDF-SCM, we always go for the third type in order to make sure that our weights are both area weighted and take into account how much each cell represents the SCM box of interest.In the cells below, we show the difference this choice makes.
def compare_weighting_options(input_scm_cube): unweighted_mean = input_scm_cube.cube.collapsed( ["latitude", "longitude"], iris.analysis.MEAN ) area_cell = input_scm_cube.get_metadata_cube( input_scm_cube.areacell_var ).cube area_weights = broadcast_onto_lat_lon_grid( input_scm_cube, area_cell.data ) area_weighted_mean= input_scm_cube.cube.collapsed( ["latitude", "longitude"], iris.analysis.MEAN, weights=area_weights ) surface_frac = input_scm_cube.get_metadata_cube( input_scm_cube.surface_fraction_var ).cube area_sf = area_cell * surface_frac area_sf_weights = broadcast_onto_lat_lon_grid( input_scm_cube, area_sf.data ) area_sf_weighted_mean = input_scm_cube.cube.collapsed( ["latitude", "longitude"], iris.analysis.MEAN, weights=area_sf_weights ) plt.figure(figsize=(8, 4.5)) qplt.plot(unweighted_mean, label="unweighted") qplt.plot(area_weighted_mean, label="area weighted") qplt.plot( area_sf_weighted_mean, label="area-surface fraction weighted", linestyle="--", dashes=(10, 10), linewidth=4 ) plt.legend(); compare_weighting_options(hfds)
Collapsing spatial coordinate 'latitude' without weighting
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
We go to the trouble of taking these area-surface fraction weightings because they matter. In particular, the area weight is required so as not to overweight the poles (on whatever grid we're working), whilst the surface fraction ensures that each cell's contribution to the averages reflects how much it belongs in a given 'SCM box'.
More detail
We can check which variable is being used for the cell areas by looking at `SCMCube.areacell_var`. For ocean data this is `areacello`.
hfds.areacell_var hfds_area_cell = hfds.get_metadata_cube(hfds.areacell_var).cube qplt.contourf( hfds_area_cell, );
_____no_output_____
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
We can check which variable is being used for the surface fraction by looking at `SCMCube.surface_fraction_var`. For ocean data this is `sftof`.
hfds.surface_fraction_var hfds_surface_frac = hfds.get_metadata_cube(hfds.surface_fraction_var).cube qplt.contourf( hfds_surface_frac, );
_____no_output_____
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
The product of the area of the cells and the surface fraction gives us the area-surface fraction weights.
hfds_area_sf = hfds_area_cell * hfds_surface_frac plt.figure(figsize=(16, 9)) plt.subplot(121) qplt.contourf( hfds_area_sf, ) plt.subplot(122) lat_con = iris.Constraint(latitude=lambda cell: -50 < cell < -20) lon_con = iris.Constraint(longitude=lambda cell: 140 < cell < 160) qplt.contourf( hfds_area_sf.extract(lat_con & lon_con), );
_____no_output_____
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
The timeseries calculated by NetCDF-SCM is the same as the timeseries calculated using the surface fraction and area weights.
hfds_area_sf_weights = broadcast_onto_lat_lon_grid( hfds, hfds_area_sf.data ) hfds_area_sf_weighted_mean = hfds.cube.collapsed( ["latitude", "longitude"], iris.analysis.MEAN, weights=hfds_area_sf_weights ) netcdf_scm_calculated = hfds.get_scm_timeseries( regions=["World"] ).timeseries() np.testing.assert_allclose( hfds_area_sf_weighted_mean.data, netcdf_scm_calculated.values.squeeze() ) netcdf_scm_calculated.T
Not calculating land fractions as all required cubes are not available Performing lazy conversion to datetime for calendar: 365_day. This may cause subtle errors in operations that depend on the length of time between dates
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
Land
Next we look at land data.
gpp = CMIP6OutputCube() gpp.load_data_from_path(gpp_file) csoilfast = CMIP6OutputCube() csoilfast.load_data_from_path(csoilfast_file) gpp.netcdf_scm_realm csoilfast.netcdf_scm_realm
_____no_output_____
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
If we have land data, then there is no data which will go in a "ocean" box. Hence, if we request e.g. `World|Ocean` data, an error will be raised.
try: gpp.get_scm_timeseries(regions=["World", "World|Ocean"]) except ValueError as e: traceback.print_exc(limit=0, chain=False)
Traceback (most recent call last): ValueError: All weights are zero for region: `World|Ocean`
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
As there is no ocean data, the `World` mean is equal to the `World|Land` mean.
gpp_scm_ts = gpp.get_scm_timeseries( regions=["World", "World|Land"] ) gpp_scm_ts.line_plot(linestyle="region") np.testing.assert_allclose( gpp_scm_ts.filter(region="World").values, gpp_scm_ts.filter(region="World|Land").values, ); compare_weighting_options(gpp) compare_weighting_options(csoilfast)
Collapsing a non-contiguous coordinate. Metadata may not be fully descriptive for 'latitude'. Collapsing a non-contiguous coordinate. Metadata may not be fully descriptive for 'longitude'.
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
Atmosphere
Finally we look at atmospheric data.
tas = CMIP6OutputCube() tas.load_data_from_path(tas_file) tas.netcdf_scm_realm
_____no_output_____
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
If we have atmosphere data, then we have global coverage and so can split data into both the land and ocean boxes.
fig = plt.figure(figsize=(16, 14)) ax1 = fig.add_subplot(311) tas.get_scm_timeseries( regions=[ "World", "World|Land", "World|Ocean", "World|Northern Hemisphere", "World|Southern Hemisphere", ] ).line_plot(color="region", ax=ax1) ax2 = fig.add_subplot(312, sharey=ax1, sharex=ax1) tas.get_scm_timeseries( regions=[ "World", "World|Northern Hemisphere|Land", "World|Southern Hemisphere|Land", "World|Northern Hemisphere|Ocean", "World|Southern Hemisphere|Ocean", ] ).line_plot(color="region", ax=ax2) ax3 = fig.add_subplot(313, sharey=ax1, sharex=ax1) tas.get_scm_timeseries( regions=[ "World", "World|Ocean", "World|North Atlantic Ocean", "World|El Nino N3.4", ] ).line_plot(color="region", ax=ax3); compare_weighting_options(tas)
Collapsing spatial coordinate 'latitude' without weighting
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
As our data is global, the "World" data is simply an area-weighted mean.
tas_area = tas.get_metadata_cube( tas.areacell_var ).cube tas_area_weights = broadcast_onto_lat_lon_grid( tas, tas_area.data ) tas_area_weighted_mean = tas.cube.collapsed( ["latitude", "longitude"], iris.analysis.MEAN, weights=tas_area_weights ) netcdf_scm_calculated = tas.get_scm_timeseries( regions=["World"] ).timeseries() np.testing.assert_allclose( tas_area_weighted_mean.data, netcdf_scm_calculated.values.squeeze() ) netcdf_scm_calculated.T
Not calculating land fractions as all required cubes are not available
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
The "World|Land" data is surface fraction weighted.
tas_sf = tas.get_metadata_cube( tas.surface_fraction_var ).cube tas_area_sf = tas_area * tas_sf tas_area_sf_weights = broadcast_onto_lat_lon_grid( tas, tas_area_sf.data ) tas_area_sf_weighted_mean = tas.cube.collapsed( ["latitude", "longitude"], iris.analysis.MEAN, weights=tas_area_sf_weights ) netcdf_scm_calculated = tas.get_scm_timeseries( regions=["World|Land"] ).timeseries() np.testing.assert_allclose( tas_area_sf_weighted_mean.data, netcdf_scm_calculated.values.squeeze() ) netcdf_scm_calculated.T
Not calculating land fractions as all required cubes are not available
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
The "World|Ocean" data is also surface fraction weighted (calculated as 100 minus land surface fraction).
tas_sf_ocean = tas.get_metadata_cube( tas.surface_fraction_var ).cube tas_sf_ocean.data = 100 - tas_sf_ocean.data tas_area_sf_ocean = tas_area * tas_sf_ocean tas_area_sf_ocean_weights = broadcast_onto_lat_lon_grid( tas, tas_area_sf_ocean.data ) tas_area_sf_ocean_weighted_mean = tas.cube.collapsed( ["latitude", "longitude"], iris.analysis.MEAN, weights=tas_area_sf_ocean_weights ) netcdf_scm_calculated = tas.get_scm_timeseries( regions=["World|Ocean"] ).timeseries() np.testing.assert_allclose( tas_area_sf_ocean_weighted_mean.data, netcdf_scm_calculated.values.squeeze() ) netcdf_scm_calculated.T
Not calculating land fractions as all required cubes are not available
BSD-2-Clause
notebooks/atmos-land-ocean-handling.ipynb
lewisjared/netcdf-scm
Decision Tree Classification Part 4
import numpy as np import matplotlib.pyplot as plt import pandas as pd import warnings warnings.filterwarnings("ignore") # yahoo finance is used to fetch data import yfinance as yf yf.pdr_override() # input symbol = 'AMD' start = '2014-01-01' end = '2019-01-01' # Read data dataset = yf.download(symbol,start,end) # View Columns dataset.head() dataset['Open_Close'] = (dataset['Open'] - dataset['Adj Close'])/dataset['Open'] dataset['High_Low'] = (dataset['High'] - dataset['Low'])/dataset['Low'] dataset['Increase_Decrease'] = np.where(dataset['Volume'].shift(-1) > dataset['Volume'],1,0) dataset['Buy_Sell_on_Open'] = np.where(dataset['Open'].shift(-1) > dataset['Open'],1,0) dataset['Buy_Sell'] = np.where(dataset['Adj Close'].shift(-1) > dataset['Adj Close'],1,0) dataset['Returns'] = dataset['Adj Close'].pct_change() dataset = dataset.dropna() dataset.head() X = dataset[['Open', 'High', 'Low', 'Volume', 'Adj Close','Returns']].values y = dataset['Buy_Sell'].values #Spilitting the dataset removed =[0,50,100] new_target = np.delete(y,removed) new_data = np.delete(X,removed, axis=0) from sklearn import tree clf = tree.DecisionTreeClassifier() clf=clf.fit(new_data,new_target) prediction = clf.predict(X[removed]) print("Original Labels",y[removed]) print("Labels Predicted",prediction) tree.plot_tree(clf)
_____no_output_____
MIT
Stock_Algorithms/Decision_Trees_Classification_Part4.ipynb
NTForked-ML/Deep-Learning-Machine-Learning-Stock
T81-558: Applications of Deep Neural Networks
**Module 11: Natural Language Processing with Hugging Face**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
Module 11 Material
* Part 11.1: Introduction to Hugging Face [[Video]](https://www.youtube.com/watch?v=1IHXSbz02XM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_11_01_huggingface.ipynb)
* Part 11.2: Hugging Face Tokenizers [[Video]](https://www.youtube.com/watch?v=U-EGU1RyChg&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_11_02_tokenizers.ipynb)
* Part 11.3: Hugging Face Datasets [[Video]](https://www.youtube.com/watch?v=Mq5ODegT17M&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_11_03_hf_datasets.ipynb)
* **Part 11.4: Training Hugging Face Models** [[Video]](https://www.youtube.com/watch?v=l69ov6b7DOM&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_11_04_hf_train.ipynb)
* Part 11.5: What are Embedding Layers in Keras [[Video]](https://www.youtube.com/watch?v=OuNH5kT-aD0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN&index=58) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_11_05_embedding.ipynb)
Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
try: %tensorflow_version 2.x COLAB = True print("Note: using Google CoLab") except: print("Note: not using Google CoLab") COLAB = False
Note: using Google CoLab
Apache-2.0
t81_558_class_11_04_hf_train.ipynb
igunduz/t81_558_deep_learning
Part 11.4: Training Hugging Face Models Up to this point, we've used data and models from the Hugging Face hub unmodified. In this section, we will transfer and train a Hugging Face model. To achieve this training, we will use Hugging Face data sets, tokenizers, and pretrained models.We begin by installing Hugging Face if needed. It is also essential to install Hugging Face datasets.
# HIDE OUTPUT !pip install transformers !pip install transformers[sentencepiece] !pip install datasets
Collecting transformers
  Downloading transformers-4.17.0-py3-none-any.whl (3.8 MB)
Collecting tokenizers!=0.11.3,>=0.11.1
Collecting pyyaml>=5.1
Collecting sacremoses
Collecting huggingface-hub<1.0,>=0.1.0
Installing collected packages: pyyaml, tokenizers, sacremoses, huggingface-hub, transformers
Successfully installed huggingface-hub-0.4.0 pyyaml-6.0 sacremoses-0.0.49 tokenizers-0.11.6 transformers-4.17.0
Collecting sentencepiece!=0.1.92,>=0.1.91
Successfully installed sentencepiece-0.1.96
Collecting datasets
  Downloading datasets-2.0.0-py3-none-any.whl (325 kB)
Installing collected packages: multidict, frozenlist, yarl, urllib3, asynctest, async-timeout, aiosignal, fsspec, aiohttp, xxhash, responses, datasets
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
Successfully installed aiohttp-3.8.1 aiosignal-1.2.0 async-timeout-4.0.2 asynctest-0.13.0 datasets-2.0.0 frozenlist-1.3.0 fsspec-2022.2.0 multidict-6.0.2 responses-0.18.0 urllib3-1.25.11 xxhash-3.0.0 yarl-1.7.2
(progress bars and repeated "Requirement already satisfied" lines omitted)
Apache-2.0
t81_558_class_11_04_hf_train.ipynb
igunduz/t81_558_deep_learning
We begin by loading the emotion data set from the Hugging Face hub. Emotion is a dataset of English Twitter messages labeled with six basic emotions: anger, fear, joy, love, sadness, and surprise. The following code downloads the data set and caches it locally.
# HIDE OUTPUT from datasets import load_dataset emotions = load_dataset("emotion")
_____no_output_____
Apache-2.0
t81_558_class_11_04_hf_train.ipynb
igunduz/t81_558_deep_learning
Here is a single observation from the training data set. It includes both the text sample and the assigned label, which is a numeric index representing one of the six emotions.
emotions['train'][2]
_____no_output_____
Apache-2.0
t81_558_class_11_04_hf_train.ipynb
igunduz/t81_558_deep_learning
We can display the label names in the order of their numeric indexes.
emotions['train'].features
_____no_output_____
Apache-2.0
t81_558_class_11_04_hf_train.ipynb
igunduz/t81_558_deep_learning
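The `label` feature above is a `ClassLabel`, which also provides helper methods for converting between numeric indexes and emotion names. A small sketch (the printed names assume the standard ordering of the emotion dataset):

label_feature = emotions['train'].features['label']
print(label_feature.names)            # e.g. ['sadness', 'joy', 'love', 'anger', 'fear', 'surprise']
print(label_feature.int2str(2))       # map a numeric index back to its emotion name
print(label_feature.str2int('joy'))   # map an emotion name back to its numeric index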
Next, we use Hugging Face tokenizers and datasets together. The following code tokenizes the entire emotion data set, transforming each text sample into the subword tokens that a transformer needs for either inference or training.
# HIDE OUTPUT from transformers import AutoTokenizer def tokenize(rows): return tokenizer(rows['text'], padding="max_length", truncation=True) model_ckpt = "distilbert-base-uncased" tokenizer=AutoTokenizer.from_pretrained(model_ckpt) emotions.set_format(type=None) tokenized_datasets = emotions.map(tokenize, batched=True)
_____no_output_____
Apache-2.0
t81_558_class_11_04_hf_train.ipynb
igunduz/t81_558_deep_learning
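To see what the tokenizer actually produces, it can help to run it on a single made-up sentence. The sketch below uses a short max_length purely to keep the printed output readable; the real pipeline above pads to the model's full maximum length.

sample = tokenizer("i feel completely overwhelmed today",
                   padding="max_length", truncation=True, max_length=16)
print(sample["input_ids"])        # subword token ids, padded to max_length
print(sample["attention_mask"])   # 1 for real tokens, 0 for padding
print(tokenizer.convert_ids_to_tokens(sample["input_ids"]))  # the subword strings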
We will use the Hugging Face DefaultDataCollator to assemble the tokenized emotion data set into TensorFlow tensors that we can use to fine-tune a neural network.
from transformers import DefaultDataCollator data_collator = DefaultDataCollator(return_tensors="tf")
_____no_output_____
Apache-2.0
t81_558_class_11_04_hf_train.ipynb
igunduz/t81_558_deep_learning
Now we generate a shuffled training and evaluation data set.
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42) small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42)
_____no_output_____
Apache-2.0
t81_558_class_11_04_hf_train.ipynb
igunduz/t81_558_deep_learning
We can now generate the TensorFlow data sets, specifying which columns map to the input features and which column holds the labels. The training set is shuffled as batches are drawn, while the validation set is kept in a fixed order.
tf_train_dataset = small_train_dataset.to_tf_dataset( columns=["attention_mask", "input_ids", "token_type_ids"], label_cols=["labels"], shuffle=True, collate_fn=data_collator, batch_size=8, ) tf_validation_dataset = small_eval_dataset.to_tf_dataset( columns=["attention_mask", "input_ids", "token_type_ids"], label_cols=["labels"], shuffle=False, collate_fn=data_collator, batch_size=8, )
_____no_output_____
Apache-2.0
t81_558_class_11_04_hf_train.ipynb
igunduz/t81_558_deep_learning
We will now load the DistilBERT model for sequence classification and fine-tune its pretrained weights to predict the emotion of each line of text.
# HIDE OUTPUT import tensorflow as tf from transformers import TFAutoModelForSequenceClassification model = TFAutoModelForSequenceClassification.from_pretrained(\ "distilbert-base-uncased", num_labels=6)
_____no_output_____
Apache-2.0
t81_558_class_11_04_hf_train.ipynb
igunduz/t81_558_deep_learning
We now train the neural network. Because the network is already pretrained, we use a small learning rate.
model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=tf.metrics.SparseCategoricalAccuracy(), ) model.fit(tf_train_dataset, validation_data=tf_validation_dataset, \ epochs=5)
Epoch 1/5 2000/2000 [==============================] - 360s 174ms/step - loss: 0.3720 - sparse_categorical_accuracy: 0.8669 - val_loss: 0.1728 - val_sparse_categorical_accuracy: 0.9180 Epoch 2/5 2000/2000 [==============================] - 347s 174ms/step - loss: 0.1488 - sparse_categorical_accuracy: 0.9338 - val_loss: 0.1496 - val_sparse_categorical_accuracy: 0.9295 Epoch 3/5 2000/2000 [==============================] - 347s 173ms/step - loss: 0.1253 - sparse_categorical_accuracy: 0.9420 - val_loss: 0.1617 - val_sparse_categorical_accuracy: 0.9245 Epoch 4/5 2000/2000 [==============================] - 346s 173ms/step - loss: 0.1092 - sparse_categorical_accuracy: 0.9486 - val_loss: 0.1654 - val_sparse_categorical_accuracy: 0.9295 Epoch 5/5 2000/2000 [==============================] - 347s 173ms/step - loss: 0.0960 - sparse_categorical_accuracy: 0.9585 - val_loss: 0.1830 - val_sparse_categorical_accuracy: 0.9220
Apache-2.0
t81_558_class_11_04_hf_train.ipynb
igunduz/t81_558_deep_learning
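With training finished, the fine-tuned model can classify new text. The sketch below is one way to do this; the example sentence is invented, and the label lookup reuses the ClassLabel feature from the emotion dataset.

import numpy as np

text = "i cant stop smiling today"                      # a made-up example sentence
inputs = tokenizer(text, return_tensors="tf", truncation=True)
logits = model(inputs).logits                           # raw scores for the six emotions
pred = int(np.argmax(logits, axis=1)[0])                # index of the highest-scoring class
print(pred, emotions['train'].features['label'].int2str(pred))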
Paired a x b cross table

An alternative to the z-test and the chi-square test
# Enable the commands below when running this program on Google Colab. # !pip install arviz==0.7 # !pip install pymc3==3.8 # !pip install Theano==1.0.4 import numpy as np import pandas as pd from scipy import stats import matplotlib.pyplot as plt import seaborn as sns import pymc3 as pm import math plt.style.use('seaborn-darkgrid') np.set_printoptions(precision=3) pd.set_option('display.precision', 3)
_____no_output_____
MIT
src/bayes/proportion/cross_table/paired_axb.ipynb
shigeodayo/ex_design_analysis
Q. A restaurant recorded which wines (red, rose, and white) customers chose with their main dishes (roast veal, pasta gorgonzola, and sole meuniere). Analyze the relationship between main dish and wine.
a = 3  # Kinds of main dishes
b = 3  # Kinds of wines

data = pd.DataFrame([[19, 12, 6], [8, 8, 4], [15, 19, 18]],
                    columns=['Veal', 'Pasta', 'Sole'],
                    index=['Red', 'Rose', 'White'])

observed = [data['Veal']['Red'], data['Pasta']['Red'], data['Sole']['Red'],
            data['Veal']['Rose'], data['Pasta']['Rose'], data['Sole']['Rose'],
            data['Veal']['White'], data['Pasta']['White'], data['Sole']['White']]

display(data)

N = data.sum().sum()
_____no_output_____
MIT
src/bayes/proportion/cross_table/paired_axb.ipynb
shigeodayo/ex_design_analysis
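For a frequentist point of comparison, the classical chi-square test of independence can be run on the same cross table before the Bayesian analysis. A short sketch using scipy.stats, which is already imported above:

chi2, p_value, dof, expected = stats.chi2_contingency(data)
print('chi2 = {:.3f}, p = {:.3f}, dof = {}'.format(chi2, p_value, dof))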
Bayesian analysis
with pm.Model() as model: # Prior distribution p_ = pm.Uniform('p_', 0, 1, shape=(a * b)) p = pm.Deterministic('p', p_ / pm.math.sum(p_)) # Likelihood x = pm.Multinomial('x', n=N, p=p, observed=observed) # Marginal probability p1d = pm.Deterministic('p1d', p[0] + p[1] + p[2]) # p1. = p11 + p12 + p13 p2d = pm.Deterministic('p2d', p[3] + p[4] + p[5]) # p2. = p21 + p22 + p23 p3d = pm.Deterministic('p3d', p[6] + p[7] + p[8]) # p3. = p31 + p32 + p33 pd1 = pm.Deterministic('pd1', p[0] + p[3] + p[6]) # p.1 = p11 + p21 + p31 pd2 = pm.Deterministic('pd2', p[1] + p[4] + p[7]) # p.2 = p12 + p22 + p32 pd3 = pm.Deterministic('pd3', p[2] + p[5] + p[8]) # p.3 = p13 + p23 + p33 # Pearson's residual pp = [p1d * pd1, p1d * pd2, p1d * pd3, p2d * pd1, p2d * pd2, p2d * pd3, p3d * pd1, p3d * pd2, p3d * pd3] e = pm.Deterministic('e', (p - pp) / pm.math.sqrt(pp)) # Cramer's association coefficient V = pm.Deterministic('V', pm.math.sqrt(pm.math.sum(e**2) / (min(a, b) - 1))) trace = pm.sample(21000, chains=5) chain = trace[1000:] pm.traceplot(chain) plt.show() pm.summary(chain, var_names=['p', 'V', 'p1d', 'p2d', 'p3d', 'pd1', 'pd2', 'pd3'])
_____no_output_____
MIT
src/bayes/proportion/cross_table/paired_axb.ipynb
shigeodayo/ex_design_analysis
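Because the posterior samples of Cramer's V are available in chain, we can also report the probability that the association reaches a given strength rather than only its posterior mean. A small sketch:

print("P(V > 0.10) = {:.3f}".format((chain['V'] > 0.10).mean()))  # at least a weak association
print("P(V > 0.25) = {:.3f}".format((chain['V'] > 0.25).mean()))  # at least a moderate association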
Independence and association
plt.boxplot( [chain['e'][:,0], chain['e'][:,1], chain['e'][:,2], chain['e'][:,3], chain['e'][:,4], chain['e'][:,5], chain['e'][:,6], chain['e'][:,7], chain['e'][:,8],], labels=['e11', 'e12', 'e13', 'e21', 'e22', 'e23', 'e31', 'e32', 'e33']) plt.show() print("Cramer's association coefficient: {:.3f}".format(chain['V'].mean())) # 1.0 - 0.5: strong association # 0.5 - 0.25: association # 0.25 - 0.1: weak association # 0.1 > : very weak association # 0: no association egz = pd.DataFrame( [[(chain['e'][:,0] > 0).mean(), (chain['e'][:,1] > 0).mean(), (chain['e'][:,2] > 0).mean()], [(chain['e'][:,3] > 0).mean(), (chain['e'][:,4] > 0).mean(), (chain['e'][:,5] > 0).mean()], [(chain['e'][:,6] > 0).mean(), (chain['e'][:,7] > 0).mean(), (chain['e'][:,8] > 0).mean()] ], columns=['Veal', 'Pasta', 'Sole'], index=['Red', 'Rose', 'White'] ) elz = pd.DataFrame( [[(chain['e'][:,0] < 0).mean(), (chain['e'][:,1] < 0).mean(), (chain['e'][:,2] < 0).mean()], [(chain['e'][:,3] < 0).mean(), (chain['e'][:,4] < 0).mean(), (chain['e'][:,5] < 0).mean()], [(chain['e'][:,6] < 0).mean(), (chain['e'][:,7] < 0).mean(), (chain['e'][:,8] < 0).mean()] ], columns=['Veal', 'Pasta', 'Sole'], index=['Red', 'Rose', 'White'] ) print('e > 0') display(egz) print('e < 0') display(elz)
_____no_output_____
MIT
src/bayes/proportion/cross_table/paired_axb.ipynb
shigeodayo/ex_design_analysis
RQ1: Customers who chose the veal dish pick red and avoid white, while customers who chose the sole dish pick white and avoid red.
val_1 = (chain['e'][:,0] > 0).mean() * (chain['e'][:,8] > 0).mean() * (chain['e'][:,6] < 0).mean() * (chain['e'][:,2] < 0).mean() print('Probability: {:.3f} %'.format(val_1 * 100))
_____no_output_____
MIT
src/bayes/proportion/cross_table/paired_axb.ipynb
shigeodayo/ex_design_analysis
RQ2: Customers who chose the veal dish pick red and avoid white, while customers who chose the sole dish pick white.
val_2 = (chain['e'][:,0] > 0).mean() * (chain['e'][:,8] > 0).mean() * (chain['e'][:,6] < 0).mean() print('Probability: {:.3f} %'.format(val_2 * 100))
_____no_output_____
MIT
src/bayes/proportion/cross_table/paired_axb.ipynb
shigeodayo/ex_design_analysis
RQ3: Customers who chose the veal dish pick red, and customers who chose the sole dish pick white.
val_3 = (chain['e'][:,0] > 0).mean() * (chain['e'][:,8] > 0).mean() print('Probability: {:.3f} %'.format(val_3 * 100))
_____no_output_____
MIT
src/bayes/proportion/cross_table/paired_axb.ipynb
shigeodayo/ex_design_analysis
Sebastian Raschka, 2015. Python Machine Learning Essentials. Chapter 3 - A Tour of Machine Learning Classifiers Using Scikit-Learn. Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
%load_ext watermark %watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scikit-learn # to install watermark just uncomment the following line: #%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
_____no_output_____
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
Sections- [First steps with scikit-learn](First-steps-with-scikit-learn) - [Loading and preprocessing the data](Loading-and-preprocessing-the-data ) - [Training a perceptron via scikit-learn](Training-a-perceptron-via-scikit-learn)- [Modeling class probabilities via logistic regression](Modeling-class-probabilities-via-logistic-regression)- [Maximum margin classification with support vector machines](Maximum-margin-classification-with-support-vector-machines)- [Solving non-linear problems using a kernel SVM](Solving-non-linear-problems-using-a-kernel-SVM)- [Decision trees learning](Decision-trees-learning)- [Combining weak to strong learners via random forests](Combining-weak-to-strong-learners-via-random-forests)- [K-nearest neighbors - a lazy learning algorithm](K-nearest-neighbors---a-lazy-learning-algorithm) First steps with scikit-learn [[back to top](Sections)] Loading and preprocessing the data [[back to top](Sections)] Loading the Iris dataset from scikit-learn. Here, the third column represents the petal length, and the fourth column the petal width of the flower samples. The classes are already converted to integer labels where 0=Iris-Setosa, 1=Iris-Versicolor, 2=Iris-Virginica.
from sklearn import datasets import numpy as np iris = datasets.load_iris() X = iris.data[:, [2, 3]] y = iris.target print('Class labels:', np.unique(y))
Class labels: [0 1 2]
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
Splitting data into 70% training and 30% test data:
# train_test_split moved from sklearn.cross_validation to sklearn.model_selection
# in newer scikit-learn releases
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
_____no_output_____
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
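As a quick sanity check, we can confirm the sizes of the two splits and how the class labels were divided between them (the exact counts depend on the random split):

print('Training samples:', X_train.shape[0])
print('Test samples:', X_test.shape[0])
print('Class counts in y_train:', np.bincount(y_train))
print('Class counts in y_test:', np.bincount(y_test))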
Standardizing the features:
from sklearn.preprocessing import StandardScaler sc = StandardScaler() sc.fit(X_train) X_train_std = sc.transform(X_train) X_test_std = sc.transform(X_test)
_____no_output_____
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
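It is worth verifying that the scaler was fit on the training data only: the standardized training features should have means near 0 and standard deviations near 1, while the test features will be close to, but not exactly, those values.

print('Training feature means:', X_train_std.mean(axis=0))
print('Training feature std devs:', X_train_std.std(axis=0))
print('Test feature means:', X_test_std.mean(axis=0))
print('Test feature std devs:', X_test_std.std(axis=0))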
Training a perceptron via scikit-learn [[back to top](Sections)] Redefining the `plot_decision_region` function from chapter 2:
from sklearn.linear_model import Perceptron ppn = Perceptron(n_iter=40, eta0=0.1, random_state=0) ppn.fit(X_train_std, y_train) y_test.shape y_pred = ppn.predict(X_test_std) print('Misclassified samples: %d' % (y_test != y_pred).sum()) from sklearn.metrics import accuracy_score print('Accuracy: %.2f' % accuracy_score(y_test, y_pred)) from matplotlib.colors import ListedColormap import matplotlib.pyplot as plt %matplotlib inline def plot_decision_regions(X, y, classifier, test_idx=None, resolution=0.02): # setup marker generator and color map markers = ('s', 'x', 'o', '^', 'v') colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan') cmap = ListedColormap(colors[:len(np.unique(y))]) # plot the decision surface x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1 x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution), np.arange(x2_min, x2_max, resolution)) Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T) Z = Z.reshape(xx1.shape) plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap) plt.xlim(xx1.min(), xx1.max()) plt.ylim(xx2.min(), xx2.max()) # plot all samples X_test, y_test = X[test_idx, :], y[test_idx] for idx, cl in enumerate(np.unique(y)): plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1], alpha=0.8, c=cmap(idx), marker=markers[idx], label=cl) # highlight test samples if test_idx: X_test, y_test = X[test_idx, :], y[test_idx] plt.scatter(X_test[:, 0], X_test[:, 1], c='', alpha=1.0, linewidth=1, marker='o', s=55, label='test set')
_____no_output_____
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
Training a perceptron model using the standardized training data:
%matplotlib inline X_combined_std = np.vstack((X_train_std, X_test_std)) y_combined = np.hstack((y_train, y_test)) plot_decision_regions(X=X_combined_std, y=y_combined, classifier=ppn, test_idx=range(105,150)) plt.xlabel('petal length [standardized]') plt.ylabel('petal width [standardized]') plt.legend(loc='upper left') plt.tight_layout() # plt.savefig('./figures/iris_perceptron_scikit.png', dpi=300) plt.show()
_____no_output_____
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
Modeling class probabilities via logistic regression [[back to top](Sections)] Plot sigmoid function:
%matplotlib inline import matplotlib.pyplot as plt import numpy as np def sigmoid(z): return 1.0 / (1.0 + np.exp(-z)) z = np.arange(-7, 7, 0.1) phi_z = sigmoid(z) plt.plot(z, phi_z) plt.axvline(0.0, color='k') plt.ylim(-0.1, 1.1) plt.xlabel('z') plt.ylabel('$\phi (z)$') # y axis ticks and gridline plt.yticks([0.0, 0.5, 1.0]) ax = plt.gca() ax.yaxis.grid(True) plt.tight_layout() # plt.savefig('./figures/sigmoid.png', dpi=300) plt.show()
_____no_output_____
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
Plot cost function:
def cost_1(z): return - np.log(sigmoid(z)) def cost_0(z): return - np.log(1 - sigmoid(z)) z = np.arange(-10, 10, 0.1) phi_z = sigmoid(z) c1 = [cost_1(x) for x in z] plt.plot(phi_z, c1, label='J(w) if y=1') c0 = [cost_0(x) for x in z] plt.plot(phi_z, c0, linestyle='--', label='J(w) if y=0') plt.ylim(0.0, 5.1) plt.xlim([0, 1]) plt.xlabel('$\phi$(z)') plt.ylabel('J(w)') plt.legend(loc='best') plt.tight_layout() # plt.savefig('./figures/log_cost.png', dpi=300) plt.show() from sklearn.linear_model import LogisticRegression lr = LogisticRegression(C=1000.0, random_state=0) lr.fit(X_train_std, y_train) plot_decision_regions(X_combined_std, y_combined, classifier=lr, test_idx=range(105,150)) plt.xlabel('petal length [standardized]') plt.ylabel('petal width [standardized]') plt.legend(loc='upper left') plt.tight_layout() # plt.savefig('./figures/logistic_regression.png', dpi=300) plt.show() lr.predict_proba(X_test_std[0,:])
_____no_output_____
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
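The last line above asks for class-membership probabilities for a single flower. In newer scikit-learn releases a single sample must be passed as a 2D array, so a slightly more robust sketch looks like this:

probas = lr.predict_proba(X_test_std[0, :].reshape(1, -1))
print(probas)                                 # one row of class-membership probabilities
print('Predicted class:', np.argmax(probas))  # index of the most probable class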
Regularization path:
weights, params = [], [] for c in np.arange(-5, 5): lr = LogisticRegression(C=10**c, random_state=0) lr.fit(X_train_std, y_train) weights.append(lr.coef_[1]) params.append(10**c) weights = np.array(weights) plt.plot(params, weights[:, 0], label='petal length') plt.plot(params, weights[:, 1], linestyle='--', label='petal width') plt.ylabel('weight coefficient') plt.xlabel('C') plt.legend(loc='upper left') plt.xscale('log') # plt.savefig('./figures/regression_path.png', dpi=300) plt.show()
_____no_output_____
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
Maximum margin classification with support vector machines [[back to top](Sections)]
from sklearn.svm import SVC svm = SVC(kernel='linear', C=1.0, random_state=0) svm.fit(X_train_std, y_train) plot_decision_regions(X_combined_std, y_combined, classifier=svm, test_idx=range(105,150)) plt.xlabel('petal length [standardized]') plt.ylabel('petal width [standardized]') plt.legend(loc='upper left') plt.tight_layout() # plt.savefig('./figures/support_vector_machine_linear.png', dpi=300) plt.show()
_____no_output_____
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
Solving non-linear problems using a kernel SVM [[back to top](Sections)]
import matplotlib.pyplot as plt import numpy as np %matplotlib inline np.random.seed(0) X_xor = np.random.randn(200, 2) y_xor = np.logical_xor(X_xor[:, 0] > 0, X_xor[:, 1] > 0) y_xor = np.where(y_xor, 1, -1) plt.scatter(X_xor[y_xor==1, 0], X_xor[y_xor==1, 1], c='b', marker='x', label='1') plt.scatter(X_xor[y_xor==-1, 0], X_xor[y_xor==-1, 1], c='r', marker='s', label='-1') plt.xlim([-3, 3]) plt.ylim([-3, 3]) plt.legend(loc='best') plt.tight_layout() # plt.savefig('./figures/xor.png', dpi=300) plt.show() svm = SVC(kernel='rbf', random_state=0, gamma=0.10, C=10.0) svm.fit(X_xor, y_xor) plot_decision_regions(X_xor, y_xor, classifier=svm) plt.legend(loc='upper left') plt.tight_layout() # plt.savefig('./figures/support_vector_machine_rbf_xor.png', dpi=300) plt.show() from sklearn.svm import SVC svm = SVC(kernel='rbf', random_state=0, gamma=0.2, C=1.0) svm.fit(X_train_std, y_train) plot_decision_regions(X_combined_std, y_combined, classifier=svm, test_idx=range(105,150)) plt.xlabel('petal length [standardized]') plt.ylabel('petal width [standardized]') plt.legend(loc='upper left') plt.tight_layout() # plt.savefig('./figures/support_vector_machine_rbf_iris_1.png', dpi=300) plt.show() svm = SVC(kernel='rbf', random_state=0, gamma=100.0, C=1.0) svm.fit(X_train_std, y_train) plot_decision_regions(X_combined_std, y_combined, classifier=svm, test_idx=range(105,150)) plt.xlabel('petal length [standardized]') plt.ylabel('petal width [standardized]') plt.legend(loc='upper left') plt.tight_layout() # plt.savefig('./figures/support_vector_machine_rbf_iris_2.png', dpi=300) plt.show()
_____no_output_____
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
Decision trees learning
from sklearn.tree import DecisionTreeClassifier tree = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=0) tree.fit(X_train, y_train) X_combined = np.vstack((X_train, X_test)) y_combined = np.hstack((y_train, y_test)) plot_decision_regions(X_combined, y_combined, classifier=tree, test_idx=range(105,150)) plt.xlabel('petal length [cm]') plt.ylabel('petal width [cm]') plt.legend(loc='upper left') plt.tight_layout() # plt.savefig('./figures/decision_tree_decision.png', dpi=300) plt.show()
_____no_output_____
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
[[back to top](Sections)]
import matplotlib.pyplot as plt import numpy as np %matplotlib inline def gini(p): return (p)*(1 - (p)) + (1-p)*(1 - (1-p)) def entropy(p): return - p*np.log2(p) - (1 - p)*np.log2((1 - p)) def error(p): return 1 - np.max([p, 1 - p]) x = np.arange(0.0, 1.0, 0.01) ent = [entropy(p) if p != 0 else None for p in x] sc_ent = [e*0.5 if e else None for e in ent] err = [error(i) for i in x] fig = plt.figure() ax = plt.subplot(111) for i, lab, ls, c, in zip([ent, sc_ent, gini(x), err], ['Entropy', 'Entropy (scaled)', 'Gini Impurity', 'Misclassification Error'], ['-', '-', '--', '-.'], ['black', 'lightgray', 'red', 'green', 'cyan']): line = ax.plot(x, i, label=lab, linestyle=ls, lw=2, color=c) ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.15), ncol=3, fancybox=True, shadow=False) ax.axhline(y=0.5, linewidth=1, color='k', linestyle='--') ax.axhline(y=1.0, linewidth=1, color='k', linestyle='--') plt.ylim([0, 1.1]) plt.xlabel('p(i=1)') plt.ylabel('Impurity Index') plt.tight_layout() plt.savefig('./figures/impurity.png', dpi=300, bbox_inches='tight') plt.show() from sklearn.tree import export_graphviz export_graphviz(tree, out_file='tree.dot', feature_names=['petal length', 'petal width'])
_____no_output_____
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
Combining weak to strong learners via random forests [[back to top](Sections)]
from sklearn.ensemble import RandomForestClassifier forest = RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=1, n_jobs=2) forest.fit(X_train, y_train) plot_decision_regions(X_combined, y_combined, classifier=forest, test_idx=range(105,150)) plt.xlabel('petal length [cm]') plt.ylabel('petal width [cm]') plt.legend(loc='upper left') plt.tight_layout() # plt.savefig('./figures/random_forest.png', dpi=300) plt.show()
_____no_output_____
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
K-nearest neighbors - a lazy learning algorithm [[back to top](Sections)]
from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier(n_neighbors=5, p=2, metric='minkowski') knn.fit(X_train_std, y_train) plot_decision_regions(X_combined_std, y_combined, classifier=knn, test_idx=range(105,150)) plt.xlabel('petal length [standardized]') plt.ylabel('petal width [standardized]') plt.legend(loc='upper left') plt.tight_layout() # plt.savefig('./figures/k_nearest_neighbors.png', dpi=300) plt.show()
_____no_output_____
MIT
3547_03_Code.ipynb
varunmuriyanat/Python-Machine-Learning
12. Interactive Mapping with Folium

In previous lessons we used `Geopandas` and `matplotlib` to create choropleth and point maps of our data. In this notebook we will take it to the next level by creating `interactive maps` with the **folium** library.

> References
>
> This notebook provides an introduction to `folium`. To see what else you can do, check out the references listed below.
>
> - [Folium web site](https://github.com/python-visualization/folium)
> - [Folium notebook examples](https://nbviewer.jupyter.org/github/python-visualization/folium/tree/master/examples/)
import pandas as pd import geopandas as gpd import numpy as np import matplotlib # base python plotting library import matplotlib.pyplot as plt # submodule of matplotlib # To display plots, maps, charts etc in the notebook %matplotlib inline import folium # popular python web mapping tool for creating Leaflet maps import folium.plugins # Supress minor warnings about the syntax of CRS definitions, # ie "init=epsg:4269" vs "epsg:4269" import warnings warnings.simplefilter(action='ignore', category=FutureWarning)
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
Check your version of `folium` and `geopandas`. Folium is a new and evolving Python library, so make sure you have version 0.10.1 or later installed.
print(folium.__version__)  # Make sure you have version 0.10.1 or later of folium!
print(gpd.__version__)  # Make sure you have version 0.7.0 or later of GeoPandas!
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
12.1 IntroductionInteractive maps serve two very important purposes in geospatial analysis. First, they provde new tools for exploratory data analysis. With an interactive map you can:- `pan` over the mapped data, - `zoom` into a smaller arear that is not easily visible when the full extent of the map is displayed, and - `click` on or `hover` over a feature to see more information about it.Second, when saved and shared, interactive maps provide a new tool for communicating the results of your analysis and for inviting your online audience to actively explore your work.For those of you who work with tools like ArcGIS or QGIS, interactive maps also make working in the jupyter notebook environment a bit more like working in a desktop GIS.The goal of this notebook is to show you how to create an interactive map with your geospatial data so that you can better analyze your data and save your output to share with others. After completing this lesson you will be able to create an interactive map like the one shown below.
%%html <iframe src="notebook_data/bartmap_example.html" width="1000" height="600"></iframe>
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
12.2 Interactive Mapping with FoliumUnder the hood, `folium` is a Python package for creating interactive maps with [Leaflet](https://leafletjs.com), a popular javascript web mapping library. Let's start by creating a interactive map with the `folium.Map` function and display it in the notebook.
# Create a new folium map and save it to the variable name map1 map1 = folium.Map(location=[37.8721, -122.2578], # lat, lon around which to center the map width="100%", # the width & height of the output map height=500, # in pixels (int) or in percent of available space (str) zoom_start=13) # the zoom level for the data to be displayed (3-20) map1 # display the map in the notebook
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
Let's discuss the map above and the code we used to generate it.At any time you can enter the following command to get help with `folium.Map`:
# uncomment to see help docs ?folium.Map
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
Let's make another folium map using the code below:
# Create a new folium map and save it to the variable name map1
map1 = folium.Map(location=[37.8721, -122.2578],  # lat, lon around which to center the map
                  tiles='CartoDB Positron',        # a different basemap / tileset
                  # width=800,   # the width & height of the output map
                  # height=600,  # in pixels or in percent of available space
                  zoom_start=13) # the zoom level for the data to be displayed
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
Questions

- What's new in the code?
- How do you think that will change the map?

Let's display the map and see what changes...
map1 # display map in notebook
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
Notice how the map changes when you change the underlying **tileset** from the default, which is `OpenStreetMap`, to `CartoDB Positron`.

> [OpenStreetMap](https://www.openstreetmap.org/#map=5/38.007/-95.844) is the largest free and open source dataset of geographic information about the world. So it is the default basemap for a lot of mapping tools and libraries.

You can find a list of the available tilesets you can use in the help documentation (`folium.Map?`), a snippet of which is shown below:

Generate a base map of given width and height with either default tilesets or a custom tileset URL. The following tilesets are built into Folium. Pass any of the following to the "tiles" keyword:

- "OpenStreetMap"
- "Mapbox Bright" (Limited levels of zoom for free tiles)
- "Mapbox Control Room" (Limited levels of zoom for free tiles)
- "Stamen" (Terrain, Toner, and Watercolor)
- "Cloudmade" (Must pass API key)
- "Mapbox" (Must pass API key)
- "CartoDB" (positron and dark_matter)

Exercise

Take a few minutes to try some of the different tilesets in the code below and see how they change the output map. *Avoid the ones that require an API key*.
# Make changes to the code below to change the folium Map ## Try changing the values for the zoom_start and tiles parameters. map1 = folium.Map(location=[37.8721, -122.2578], # lat, lon around which to center the map tiles='Stamen Watercolor', # basemap aka baselay or tile set width=800, # the width & height of the output map height=500, # in pixels or percent of available space zoom_start=13) # the zoom level for the data to be displayed #display the map map1
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
12.3 Adding a Map Layer Now that we have created a folium map, let's add our California County data to the map. First, let's read that data into a Geopandas geodataframe.
# California county boundaries with their associated attribute data
ca_counties_gdf = gpd.read_file("notebook_data/california_counties/CaliforniaCounties.shp")
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
Take another brief look at the geodataframe to recall the contents.
# take a look at first two rows ca_counties_gdf.head(2) # take a look at all column names ca_counties_gdf.columns
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
Adding a layer with folium.GeoJson

Folium provides a number of ways to add vector data - points, lines, and polygons - to a map. The data we are working with are in Geopandas geodataframes. The main folium function for adding these to the map is `folium.GeoJson`. Let's build on our last map and add the census tracts as a `folium.GeoJson` layer.
map1 = folium.Map(location=[37.8721, -122.2578], # lat, lon around which to center the map tiles='CartoDB positron', # basemap aka baselay or tile set width=800, # the width & height of the output map height=600, # in pixels or in percent of available space zoom_start=6) # the zoom level for the data to be displayed # Add the census tracts to the map folium.GeoJson(ca_counties_gdf).add_to(map1) #display the map map1
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
That was pretty straightforward, but `folium.GeoJson` provides a lot of arguments for customizing the display of the data in the map. We will review some of these soon. However, at any time you can get more information about `folium.GeoJson` by taking a look at the function documentation.
# Uncomment to view documentation # folium.GeoJson?
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
Checking and Transforming the CRS

It's always a good idea to check the **CRS** of your geodata before doing anything with that data. This is true when we use `folium` to make an interactive map. Here is how folium deals with the CRS of a geodataframe before mapping it:

- Folium checks to see if the gdf has a defined CRS.
- If the CRS is not defined, it assumes the data to be in the WGS84 CRS (epsg=4326).
- If the CRS is defined, it will be transformed dynamically to WGS84 before mapping.

So, if your map data doesn't show up at all, or shows up somewhere other than where you expect, check the CRS of your data! If it is not defined, define it.

Questions

- What is the CRS of the tract data?
- How is folium dealing with the CRS of this gdf?
# Check the CRS of the data print(...)
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
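For reference, here is a small sketch of the two common fixes in GeoPandas (assuming version 0.7 or later, where `set_crs` is available; older versions assigned to the `.crs` attribute instead). Defining a CRS only labels the data, while `to_crs` actually reprojects the coordinates:

# Only do this if you know the coordinates really are longitude/latitude (WGS84)
if ca_counties_gdf.crs is None:
    ca_counties_gdf = ca_counties_gdf.set_crs(epsg=4326)

# Reproject to WGS84 yourself (folium would otherwise do this on the fly)
ca_counties_wgs84 = ca_counties_gdf.to_crs(epsg=4326)
print(ca_counties_wgs84.crs)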
*Click here for answers*<!--- What is the CRS of the tract data?tracts_gdf.crs How is folium dealing with the CRS of this gdf? Dynamically transformed to WGS84 (but it already is in that projection so no change)---> Styling features with `folium.GeoJson`Let's dive deeper into the `folium.GeoJson` function. Below is an excerpt from the help documentation for the function that shows all the available function arguments that we can set. QuestionWhat argument do we use to style the color for our polygons?folium.GeoJson( data, style_function=None, highlight_function=None, name=None, overlay=True, control=True, show=True, smooth_factor=None, tooltip=None, embed=True,) Let's examine the options for the `style_function` in more detail since we will use these to change the style of our mapped data.`style_function = lambda x: {` apply to all features being mapped (ie, all rows in the geodataframe) `'weight': line_weight,` set the thickness of a line or polyline where 1 thick, 1 = default `'opacity': line_opacity,` set opacity where 1 is solid, 0.5 is semi-opaque and 0 is transparent `'color': line_color` set the color of the line, eg "red" or some hexidecimal color value`'fillOpacity': opacity,` set opacity of the fill of a polygon `'fillColor': color` set color of the fill of a polygon `'dashArray': '5, 5'` set line pattern to a dash of 5 pixels on, off `}`Ok! Let's try setting the style of our census tract by defining a style function.
# Define the basemap map1 = folium.Map(location=[37.8721, -122.2578], # lat, lon around which to center the map tiles='CartoDB Positron', width=1000, # the width & height of the output map height=600, # in pixels zoom_start=6) # the zoom level for the data to be displayed # Add the census tracts gdf layer # setting the style of the data folium.GeoJson(ca_counties_gdf, style_function = lambda x: { 'weight':2, 'color':"white", 'opacity':1, 'fillColor':"red", 'fillOpacity':0.6 } ).add_to(map1) map1
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
Exercise

Copy the code from our last map and paste it below. Take a few minutes to edit the code to change the style of the census tract polygons.
# Your code here map1 = folium.Map(location=[37.8721, -122.2578], # lat, lon around which to center the map tiles='Stamen Watercolor', width=1000, # the width & height of the output map height=600, # in pixels zoom_start=10) # the zoom level for the data to be displayed # Add the census tracts gdf layer # setting the style of the data folium.GeoJson(ca_counties_gdf, style_function = lambda x: { 'weight':3, 'color':"black", 'opacity':1, 'fillColor':"none", 'fillOpacity':0.6 } ).add_to(map1) map1
_____no_output_____
MIT
12_OPTIONAL_Interactive_Mapping_with_Folium.ipynb
reeshav-netizen/Geospatial-Fundamentals-in-Python
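Finally, to share a map like the BART example shown at the start of this lesson, the folium map object can be written out as a standalone HTML file (the filename below is just an example):

map1.save('my_folium_map.html')  # open the saved file in any web browser or embed it in a page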